Search results for: pan-tilt application
788 Carbon Capture and Storage Using Porous-Based Aerogel Materials
Authors: Rima Alfaraj, Abeer Alarawi, Murtadha AlTammar
Abstract:
The global energy landscape heavily relies on the oil and gas industry, which faces the critical challenge of reducing its carbon footprint. To address this issue, the integration of advanced materials like aerogels has emerged as a promising solution to enhance sustainability and environmental performance within the industry. This study thoroughly examines the application of aerogel-based technologies in the oil and gas sector, focusing particularly on their role in carbon capture and storage (CCS) initiatives. Aerogels, known for their exceptional properties, such as high surface area, low density, and customizable pore structure, have garnered attention for their potential in various CCS strategies. The review delves into various fabrication techniques utilized in producing aerogel materials, including sol-gel, supercritical drying, and freeze-drying methods, to assess their suitability for specific industry applications. Beyond fabrication, the practicality of aerogel materials in critical areas such as flow assurance, enhanced oil recovery, and thermal insulation is explored. The analysis spans a wide range of applications, from potential use in pipelines and equipment to subsea installations, offering valuable insights into the real-world implementation of aerogels in the oil and gas sector. The paper also investigates the adsorption and storage capabilities of aerogel-based sorbents, showcasing their effectiveness in capturing and storing carbon dioxide (CO₂) molecules. Optimization of pore size distribution and surface chemistry is examined to enhance the affinity and selectivity of aerogels towards CO₂, thereby improving the efficiency and capacity of CCS systems. Additionally, the study explores the potential of aerogel-based membranes for separating and purifying CO₂ from oil and gas streams, emphasizing their role in the carbon capture and utilization (CCU) value chain in the industry. Emerging trends and future perspectives in integrating aerogel-based technologies within the oil and gas sector are also discussed, including the development of hybrid aerogel composites and advanced functional components to further enhance material performance and versatility. By synthesizing the latest advancements and future directions in aerogels used for CCS applications in the oil and gas industry, this review offers a comprehensive understanding of how these innovative materials can aid in transitioning towards a more sustainable and environmentally conscious energy landscape. The insights provided can assist in strategic decision-making, drive technology development, and foster collaborations among academia, industry, and policymakers to promote the widespread adoption of aerogel-based solutions in the oil and gas sector.
Keywords: CCS, porous, carbon capture, oil and gas, sustainability
Procedia PDF Downloads 42
787 Development and Characterization of Novel Topical Formulation Containing Niacinamide
Authors: Sevdenur Onger, Ali Asram Sagiroglu
Abstract:
Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology is caused by melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and cause melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosome formation has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural chemical found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to covering keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and acts as a main barrier with the skin, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin. Furthermore, the nicotinic acid present in NA can act as an irritant. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritant effects. Since nanotechnology can affect drug penetration behavior by controlling the release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability in living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study. Different formulations were prepared by varying the amounts of phospholipid and cholesterol and examined in terms of particle size, polydispersity index (PDI) and pH values. The pH values of the produced formulations were determined to be compatible with the pH value of the skin. Particle sizes were determined to be smaller than 250 nm, and the particles were found to be of homogeneous size in the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and stability for topical use. For these reasons, in this study, liposomal cream formulations were prepared for easy topical application of liposomal systems. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the formulation that gives the most appropriate result on volunteers after obtaining the approval of the ethics committee.
Keywords: delivery systems, hyperpigmentation, liposome, niacinamide
Procedia PDF Downloads 112
786 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material, after which they are dumped as waste, leading to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage countercurrent operation at A/O = 1:1 with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, with 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite Syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of the synthesised MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery
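Where the abstract reports that two countercurrent stages suffice at A/O = 1:1, the stage count can be cross-checked with the textbook Kremser relation; the symbols below are generic, and the worked numbers are a back-of-envelope illustration, not values from the paper.

```latex
% Fraction of Mo left unextracted after n countercurrent stages, for a
% constant extraction factor E = D (O/A), with D the distribution ratio:
\[
  \varphi_n \;=\; \frac{E - 1}{E^{\,n+1} - 1},
  \qquad \text{recovery} \;=\; 1 - \varphi_n .
\]
% E.g., ~85% extraction in a single contact at O/A = 1 implies
% D ~ 0.85/0.15 ~ 5.7, so n = 2 predicts roughly 97% cumulative
% extraction -- consistent with the ~95% reported above.
```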
Procedia PDF Downloads 172
785 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime
Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda
Abstract:
Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution in comparison with other processes such as flame spraying, owing to the formation of a diffusion layer which provides mechanical integrity, as well as its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is limited by their respective drawbacks: a high COF for nitrocarburized surfaces and the low wear resistance of dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface through the impregnation of a dry lubricant onto a nitrocarburized surface. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, accompanied by the impregnation of the porous outermost layer with a solid lubricant, will create a hybrid surface possessing outstanding wear resistance, a low friction coefficient and high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwelling time) were optimized to obtain samples that have a distinct porous structure (in terms of size, shape, and density) as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface). In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and the nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under the application of high loads (500 mN or ~800 MPa), while the COF for the reference surface increased from ~0.1 to > 0.3 within 120 m, the hybrid surface retained a COF < 0.2 for over 400 m of sliding. In addition, while the steel ball sliding against the reference surface showed heavy wear, the corresponding ball sliding against the hybrid surface showed very limited wear. Observations of the sliding tracks in the hybrid surface using electron microscopy show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of the lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.
Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels
Procedia PDF Downloads 122
784 Strategies for Incorporating Intercultural Intelligence into Higher Education
Authors: Hyoshin Kim
Abstract:
Most post-secondary educational institutions have offered a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these 'effective methods' are assumed to work regardless of what we teach and whom we teach. This paper is focused on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students. This is because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength, but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches. For example, cultural difference is negatively evaluated, either implicitly or explicitly, and the pressure is therefore on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems? Even though these questions may be well-intended, there seems to be something fundamentally wrong, as an assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved by activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students' diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.
Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education
Procedia PDF Downloads 211
783 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater
Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu
Abstract:
The increasing generation of saline wastewater through various industrial activities is becoming a global concern for activated sludge (AS) based biological treatment, which is widely applied in wastewater treatment plants (WWTPs). As for the AS process, an increase in wastewater salinity has a negative impact on its overall performance. The advent of conventional aerobic granular sludge (AGS), or bacterial AGS, biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could enhance nutrients removal, potentially reduce aeration cost through symbiotic algal-bacterial activity, and thus also reduce the overall treatment cost. Nonetheless, the potential of salt stress to decrease biomass growth, microbial activity and nutrient removal exists. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS. To the authors' best knowledge, a comparison of the two AGS systems has not been done to evaluate nutrients removal capacity in the context of salinity increase. This study sought to figure out the impact of salinity on the algal-bacterial AGS system in comparison to the bacterial AGS one, contributing to the application of AGS technology in the real world of saline wastewater treatment. In this study, the salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L and 15 g/L of NaCl with 24-hr artificial illuminance of approximately 97.2 µmol m⁻² s⁻¹, and mature bacterial and algal-bacterial AGS were used for the operation of two identical sequencing batch reactors (SBRs) with a working volume of 0.9 L each, respectively. The results showed that the salinity increase caused no apparent change in the color of the bacterial AGS, while the color of the algal-bacterial AGS progressively changed from green to dark green. A consequent increase in granule diameter and fluffiness was observed in the bacterial AGS reactor with the increase of salinity, in contrast to a decrease in algal-bacterial AGS diameter. However, nitrite accumulation rose from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively, to 9.8 mg/L in both systems when the NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl concentration, where it averaged 4.2 mg/L and 2.4 mg/L, respectively, in the bacterial and algal-bacterial AGS systems. Nutrients removal in the algal-bacterial system was relatively higher than in the bacterial AGS in terms of nitrogen and phosphorus removals. Nonetheless, the nutrient removal rate was almost 50% or lower. Results show that algal-bacterial AGS is more adaptable to salinity increase and could be more suitable for saline wastewater treatment. Optimization of operation conditions for the algal-bacterial AGS system would be important to ensure its stably high efficiency in practice.
Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, nutrients removal, saline wastewater, sequencing batch reactor
Procedia PDF Downloads 148
782 Optimizing the Doses of Chitosan/Tripolyphosphate Loaded Nanoparticles of Clodinofop Propargyl and Fenoxaprop-P-Ethyl to Manage Avena Fatua L.: An Environmentally Safer Alternative to Control Weeds
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Hussam F. Najeeb Alawadi, Athar Mahmood, Aneela Nijabat, Tasawer Abbas, Muhammad Habib, Abdullah
Abstract:
The global prevalence of Avena fatua infestation poses a significant challenge to wheat sustainability. While chemical control stands out as an efficient and rapid way to control weeds, concerns over developing resistance in weeds and environmental pollution have led to criticisms of herbicide use. Consequently, this study was designed to address these challenges through the chemical synthesis, characterization, and optimization of chitosan-based nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl for the effective management of A. fatua. Utilizing the ionic gelification technique, chitosan-based nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl were prepared. These nanoparticles were applied at the 3-4 leaf stage of Phalaris minor weed at seven different doses (D0 (untreated check), D1 (recommended dose of traditional herbicide (TH)), D2 (recommended dose of nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose)). Characterization of the chitosan-containing herbicide nanoparticles (CHT-NPs) was conducted using FT-IR analysis, demonstrating a perfect match with standard parameters. UV-visible spectra further revealed absorption peaks at 310 nm for NPs of clodinofop-propargyl and at 330 nm for NPs of fenoxaprop-P-ethyl. This research aims to contribute to sustainable weed management practices by addressing the challenges associated with chemical herbicide use. The application of chitosan-based nanoparticles (CHT-NPs) containing fenoxaprop-P-ethyl and clodinofop-propargyl at the recommended dose of the standard herbicide resulted in 100% mortality and visible injury to weeds. Notably, even when applied at a 5-fold lower dose, these chitosan-containing nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl demonstrated very high control efficacy. Furthermore, at a 10-fold lower dose compared to the standard herbicides and the recommended dose of clodinofop-propargyl and fenoxaprop-P-ethyl, the chitosan-based nanoparticles exhibited comparable effects on chlorophyll content, visual injury (%), mortality (%), plant height (cm), fresh weight (g), and dry weight (g) of A. fatua. This study indicates that chitosan/tripolyphosphate-loaded nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl can be effectively utilized for the management of A. fatua at a 10-fold lower dose, highlighting their potential for sustainable and efficient weed control.
Keywords: mortality, chitosan-based nanoparticles, visual injury, chlorophyll contents, 5-fold lower dose
Procedia PDF Downloads 56
781 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa
Authors: A. Van Staden, A. Tolmie, E. Vorster
Abstract:
Research confirms the multifaceted nature of spelling development and underscores the importance of both the cognitive and linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example, Sesotho and many other African languages) to assist them in spelling the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via the implementation of an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed the results of previous research that argues for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results have shown that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers (n = 9) in the control group, whose intervention involved phonological awareness (and coding), including the teaching of spelling rules. Consequently, L2 learners in the experimental group significantly improved in all the post-test measures included in this investigation, namely the four sub-tests of short-term memory, as well as two spelling measures (i.e. diagnostic and standardized measures). Against this background, the findings of this study look promising and have shown that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes such as visual imagery to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, results from the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.
Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy
Procedia PDF Downloads 359
780 Efficacy of Preimplantation Genetic Screening in Women with a Spontaneous Abortion History with Euploid or Aneuploid Abortus
Authors: Jayeon Kim, Eunjung Yu, Taeki Yoon
Abstract:
Most spontaneous miscarriages are believed to be a consequence of embryo aneuploidy. Transferring euploid embryos selected by PGS is expected to decrease the miscarriage rate. Current PGS indications include advanced maternal age, recurrent pregnancy loss, and repeated implantation failure. Recently, the use of PGS for healthy women without the above indications, for the purpose of improving in vitro fertilization (IVF) outcomes, is on the rise. However, the beneficial effect of PGS in this population is still controversial, especially in women with a history of no more than 2 miscarriages or a miscarriage of a euploid abortus. This study aimed to investigate whether the karyotyping result of the abortus is a good indicator for preimplantation genetic screening (PGS) in a subsequent IVF cycle in women with a history of spontaneous abortion. A single-center retrospective cohort study was performed. Women who had spontaneous abortion(s) (fewer than 3) and dilatation and evacuation, and a subsequent IVF cycle from January 2016 to November 2016, were included. Their medical information was extracted from the charts. Clinical pregnancy was defined as the presence of a gestational sac with fetal heartbeat detected on ultrasound in week 7. Statistical analysis was performed using SPSS software. In total, 234 women were included. 121 out of 234 (51.7%) had the abortus karyotyped, and 113 did not. Embryo biopsy was performed 3 or 5 days after oocyte retrieval, followed by embryo transfer (ET) in a fresh or frozen cycle. The biopsied materials were subjected to microarray comparative genomic hybridization. The clinical pregnancy rate per ET was compared between the PGS and non-PGS groups in each study group. Patients were grouped by two criteria: karyotype of the abortus from the previous miscarriage (unknown fetal karyotype (n=89, Group 1), euploid abortus (n=36, Group 2) or aneuploid abortus (n=67, Group 3)), and pursuing PGS in the subsequent IVF cycle (pursuing PGS (PGS group, n=105) or not pursuing PGS (non-PGS group, n=87)). The PGS group was significantly older and had higher numbers of retrieved oocytes and prior miscarriages compared to the non-PGS group. There were no differences in BMI and AMH level between those two groups. In the PGS group, the mean number of transferable (euploid) embryos was 1.3 ± 0.7, 1.5 ± 0.5 and 1.4 ± 0.5, respectively (p = 0.049). In 42 cases, ET was cancelled because all embryos biopsied turned out to be abnormal. In all three groups (Groups 1, 2, and 3), clinical pregnancy rates were not statistically different between the PGS and non-PGS groups (Group 1: 48.8% vs. 52.2% (p=0.858), Group 2: 70% vs. 73.1% (p=0.730), Group 3: 42.3% vs. 46.7% (p=0.640), in the PGS and non-PGS groups, respectively). In both groups who had a miscarriage with a euploid or aneuploid abortus, the clinical pregnancy rate between IVF cycles with and without PGS was not different. When we compared miscarriage and ongoing pregnancy rates, there were no significant differences between the PGS and non-PGS groups in all three groups. Our results show that the routine application of PGS in women who have had fewer than 3 miscarriages would not be beneficial, even in cases where the previous miscarriage had been caused by fetal aneuploidy.
Keywords: preimplantation genetic diagnosis, miscarriage, karyotyping, in vitro fertilization
Procedia PDF Downloads 181
779 Development of Social Competence in the Preparation and Continuing Training of Adult Educators
Authors: Genute Gedviliene, Vidmantas Tutlys
Abstract:
The aim of this paper is to reveal the deployment and development of social competence in higher education programmes of adult education and in the continuing training and competence development of andragogues. We compare how the issues of cooperation and communication in the learning and teaching processes are treated in the study programmes and in the continuing training courses for andragogues. Theoretical and empirical research methods were combined in the analysis. The following methods were applied: 1) Literature and document analysis helped to highlight communication and cooperation as fundamental phenomena of social competence and their importance for adult education in the context of digitalization and globalization. Research studies on the development of social competence in the field of andragogy were also analyzed, as well as studies on the place and weight of social competence in the overall competence profile of the andragogue. 2) The empirical study is based on a questionnaire survey method. The survey population consists of 240 students of bachelor and master degree studies of andragogy in Lithuania and 320 representatives of the different bodies and institutions involved in the continuing training and professional development of adult educators in Lithuania. The themes of the survey questionnaire were defined on the basis of the findings of the literature review and included the following: 1) opinions of the respondents on the role and place of social competence in the work of the andragogue; 2) opinions of the respondents on the role and place of the development of social competence in the curricula of higher education studies and continuing training courses; 3) judgements on the implications of higher education studies and courses of continuing training for the development of social competence and its deployment in the work of the andragogue. Data analysis disclosed a wide range of ways and modalities of the deployment and development of social competence in the preparation and continuing training of adult educators. Social competence is important for students and adult education providers not only as an auxiliary capability for communication and the transfer of information, but also as an outcome of collective learning leading to the development of new capabilities applied by learners in the learning process, their professional field of adult education and their social life. Equally, social competence is necessary for effective adult education activities not only as an auxiliary capacity applied in the teaching process, but also as a potential for the improvement, development and sustainability of didactic competence and know-how in this field. Students of higher education programmes in the field of adult education treat social competence as an important generic capacity for the work of the adult educator, whereas adult education providers discern concrete issues in the application of social competence in the different processes of adult education, starting from curriculum design and ending with the assessment of learning outcomes.
Keywords: adult education, andragogues, social competence, curriculum
Procedia PDF Downloads 142
778 Sceletium Tortuosum: A Review on Its Phytochemistry, Pharmacokinetics, Biological and Clinical Activities
Authors: Tomi Lois Olatunji, Frances Siebert, Ademola Emmanuel Adetunji, Brian Harvey, Johane Gericke, Josias Hamman, Frank Van Der Kooy
Abstract:
Ethnopharmacological relevance: Sceletium tortuosum (L.) N.E.Br., the most sought-after and widely researched species in the genus Sceletium, is a succulent forb endemic to South Africa. Traditionally, this medicinal plant is mainly masticated or smoked and used for the relief of toothache and abdominal pain, and as a mood-elevator, analgesic, hypnotic, anxiolytic, thirst and hunger suppressant, and for its intoxicating/euphoric effects. Sceletium tortuosum is currently of widespread scientific interest due to its clinical potential in treating anxiety and depression, relieving stress in healthy individuals, and enhancing cognitive functions. These pharmacological actions are attributed to its phytochemical constituents, referred to as mesembrine-type alkaloids. Aim of the review: The aim of this review was to comprehensively summarize and critically evaluate recent research advances on the phytochemistry, pharmacokinetics, and biological and clinical activities of the medicinal plant S. tortuosum. Additionally, current ongoing research and future perspectives are also discussed. Methods: All relevant scientific articles, books, and MSc and PhD dissertations on the botany, behavioral pharmacology, traditional uses, and phytochemistry of S. tortuosum were retrieved from different databases (including Science Direct, PubMed, Google Scholar, Scopus and Web of Science). For the pharmacokinetics and pharmacological effects of S. tortuosum, the focus fell on relevant publications published between 2009 and 2021. Results: Twenty-five alkaloids belonging to four structural classes, viz. mesembrine, Sceletium A4, joubertiamine, and tortuosamine, have been identified from S. tortuosum, of which the mesembrine class is predominant. The crude extracts and commercially available standardized extracts of S. tortuosum have displayed a wide spectrum of biological activities (e.g. antimalarial, antioxidant, immunomodulatory, anti-HIV, neuroprotection, enhancement of cognitive function) in in vitro or in vivo studies. This plant has not yet been studied in a clinical population, but has potential for enhancing cognitive function and managing anxiety and depression. Conclusion: As an important South African medicinal plant, S. tortuosum has seen many research advances in its phytochemistry and biological activities over the last decade. These scientific studies have shown that S. tortuosum has various bioactivities. The findings have further established the link between the phytochemistry and pharmacological application, and support the traditional use of S. tortuosum in the indigenous medicine of South Africa.
Keywords: Aizoaceae, mesembrine, serotonin, Sceletium tortuosum, Zembrin®, psychoactive, antidepressant
Procedia PDF Downloads 215
777 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows
Authors: D. M. Lewis, L. Frisby, U. Stead
Abstract:
Objective: Many patients receive invasive treatments in acute care or die in hospital without having had comprehensive goals of care conversations. Some of these treatments may not align with the patient's wishes, may be futile, and may cause unnecessary suffering. While many staff may recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals of care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier may be the lack of incorporation of these conversations into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions about goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals of care questions. They were asked to pose these questions to a patient or family member throughout their shift and then to document the conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient's chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences during the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role-model these conversations. Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 goals of care questions were asked of patients or their families, and 50 newly documented goals of care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate the questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.
Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations
Procedia PDF Downloads 101
776 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image; the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to overcome the difficulties caused by these data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. In this context, we propose to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp within each frame; for example, a higher value is given to a more recent signal. We expected that this data representation could capture the features of moving objects in particular, because the timestamp pattern represents the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is an ideal sensor for surveillance purposes because it can run for a long time with low energy consumption in a static situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
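To make the proposed representation concrete, the following is a minimal sketch of a timestamp-based event frame; the event-array fields, the window length, and the linear recency normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def events_to_timestamp_frame(events, height, width, window_us=33_000):
    """Build a frame in which pixel intensity encodes event recency.

    events: structured array with fields 'x', 'y' (pixel coordinates),
            't' (timestamp, microseconds) and 'p' (polarity, +1/-1),
            assumed sorted by timestamp in ascending order.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    if len(events) == 0:
        return frame
    t_end = events['t'].max()
    t_start = t_end - window_us
    win = events[events['t'] >= t_start]
    # Recent events map to values near 1, older events near 0.
    recency = (win['t'] - t_start) / float(window_us)
    # Later events overwrite earlier ones at the same pixel.
    frame[win['y'], win['x']] = recency
    return frame
```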
Procedia PDF Downloads 97
775 Change of Substrate in Solid State Fermentation Can Produce Proteases and Phytases with Extremely Distinct Biochemical Characteristics and Promising Applications for Animal Nutrition
Authors: Paula K. Novelli, Margarida M. Barros, Luciana F. Flueri
Abstract:
Utilization of the agricultural by-products wheat bran and soybean bran as substrates for solid state fermentation (SSF) was studied, aiming at obtaining enzymes from Aspergillus sp. with distinct biological characteristics and at their application to improve animal nutrition. Aspergillus niger and Aspergillus oryzae were studied as they showed very high yields of phytase and protease production, respectively. Phytase activity was measured using p-nitrophenylphosphate as substrate and a standard curve of p-nitrophenol; the enzymatic activity unit was the quantity of enzyme necessary to release one μmol of p-nitrophenol. Protease activity was measured using azocasein as substrate. Activity for phytase and protease substantially increased when the different biochemical characteristics were considered in the study. The optimum pH and pH stability of the phytase produced by A. niger with wheat bran as substrate were between 4.0 and 5.0, and its optimum temperature of activity was 37°C. Phytase fermented in soybean bran showed constant values at all pHs studied, for both optimum and stability, but low production. Phytase on both substrates showed stable activity at temperatures higher than 80°C. Protease from A. niger showed very distinct optimum pH behavior, acidic for wheat bran and basic for soybean bran, respectively, with optimum values of temperature and stability at 50°C. Phytase produced by A. oryzae in wheat bran had an optimum pH and temperature of 9 and 37°C, respectively, but it was very unstable. On the other hand, proteases were stable at high temperatures and at all pHs studied and showed a very high yield when fermented in wheat bran; however, when fermented in soybean bran, the production was very low. Subsequently, the scaled-up production of phytase from A. niger and protease from A. oryzae was applied as an enzyme additive in fish feed for digestibility studies. Phytase and protease were produced with stable enzyme activities of 7,000 U·g⁻¹ and 2,500 U·g⁻¹, respectively. When these enzymes were applied in a plant-protein-based fish diet for digestibility studies, they increased protein, mineral, energy and lipid availability, showing that these new enzymes can improve animal production and performance. In conclusion, the substrate, as well as the microorganism species, can affect the biochemical character of the enzyme produced. Moreover, the production of these enzymes by SSF can be up to 90% cheaper than that of commercial ones produced with the same fungal species but by submerged fermentation. Added to that, these low-cost enzymes can be easily applied as animal diet additives to improve production and performance.
Keywords: agricultural by-products, animal nutrition, enzymes production, solid state fermentation
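As background, the activity unit defined above is typically converted to the reported U·g⁻¹ values along the following lines; the assay volume, time basis, and symbols are generic assumptions, not the authors' protocol.

```latex
% One unit (U) releases 1 umol of p-nitrophenol; per gram of dry
% fermented substrate this becomes:
\[
  \text{Activity}\;(\mathrm{U\,g^{-1}}) \;=\;
  \frac{C_{\mathrm{pNP}}\,[\mu\mathrm{mol\,mL^{-1}}]\;\times\; V_{\mathrm{assay}}\,[\mathrm{mL}]}
       {t\,[\mathrm{min}]\;\times\; m_{\mathrm{sample}}\,[\mathrm{g}]}
\]
% C_pNP is read off the p-nitrophenol standard curve from the
% measured absorbance (typically at 405 nm).
```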
Procedia PDF Downloads 326
774 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, as well as critical diagnostic data for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. The operating temperature range of these sensors is between -70°C and 850°C, while the temperature range required in home appliance applications is between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum in the coating and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a one-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer. The lift-off process was performed after the e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point probe sheet resistance measurements were done at room temperature. An annealing step was then performed, and resistivity measurements were made with a probe station before and after annealing at 600°C using a rapid thermal processing machine. The temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data in the white goods application temperature range. A relatively simplified but optimized production method has also been developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
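Two textbook relations underlie the measurements described above; they are quoted here as general background (with nominal bulk-platinum values), not as figures from the study.

```latex
% Sheet resistance from a collinear four-point probe on a thin film,
% and resistivity from the film thickness t_film:
\[
  R_s \;=\; \frac{\pi}{\ln 2}\,\frac{V}{I} \;\approx\; 4.532\,\frac{V}{I},
  \qquad \rho \;=\; R_s\, t_{\mathrm{film}}
\]
% Near-linear platinum RTD response over the appliance range
% (alpha is about 3.85e-3 per degree C for bulk Pt; evaporated
% films typically show somewhat lower values):
\[
  R(T) \;=\; R_0\,\bigl[\,1 + \alpha\,(T - T_0)\,\bigr]
\]
```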
Procedia PDF Downloads 73
773 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to the development of a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows the dimensions of the resisting frame structure to be reduced and is advantageous in the scope of refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially in the case of low densities; 3) good resistance against fire compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness, due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as dimensional stability is not attainable in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, viscosity-enhancing agents (VEAs) used to extrude traditional concrete cause, in the case of foamed concrete, the collapse of the air bubbles, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for a patent. In addition to the general benefits of using the extrusion process, extrudable foamed concrete allows other limits to be exceeded: the elimination of formworks, and an expanded application spectrum, due to the possibility of extrusion in a density range between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. Besides, this contribution aims to present the significant differences between extrudable and classic foamed concrete in terms of fresh properties, namely slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between extrudable and classic foamed concrete compressive strengths.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 174
772 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the results show that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. This indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
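A minimal sketch of the kind of combined model described above follows, using the open-source `mgwr` package; the within-district demeaning step for the fixed effects and all variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR

def hedonic_gwr(coords, log_price, X, district_ids):
    """Demean within districts (absorbing district fixed effects), then
    let the remaining hedonic coefficients vary over space via GWR."""
    y = log_price.reshape(-1, 1).astype(float)
    Xd = X.astype(float).copy()
    for d in np.unique(district_ids):
        m = district_ids == d
        y[m] -= y[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    bw = Sel_BW(coords, y, Xd).search()  # bandwidth chosen by AICc
    results = GWR(coords, y, Xd, bw).fit()
    return results  # results.params holds location-specific coefficients
```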
Procedia PDF Downloads 290
771 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to obtain a reliable solution that can be implemented for ground movement operations. Ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the take-off sequence of the aircraft is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes at airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
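The core idea, A-star expansion over a taxiway graph keyed by arrival time rather than distance, can be sketched as follows; the graph and reservation interfaces, and all names, are illustrative assumptions rather than the authors' implementation (nodes are assumed to be comparable values such as ints or strings).

```python
import heapq

def astar_qpptw(graph, free_from, h_time, start, goal, t0):
    """graph[u] -> iterable of (v, travel_time) edges;
    free_from(u, v, t) -> earliest time >= t at which edge (u, v) is
    free of reserved time windows (the aircraft waits until then);
    h_time(u) -> admissible estimate of remaining travel time to goal."""
    best = {start: t0}
    frontier = [(t0 + h_time(start), t0, start, [start])]
    while frontier:
        _, t, u, path = heapq.heappop(frontier)
        if u == goal:
            return path, t  # quickest conflict-free path and arrival time
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, dt in graph[u]:
            depart = free_from(u, v, t)   # wait out other aircraft's windows
            arrive = depart + dt
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(frontier,
                               (arrive + h_time(v), arrive, v, path + [v]))
    return None, None
```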
Procedia PDF Downloads 231
770 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation can have a strong influence on the EVOO composition of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed, using a dual approach that combines Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope content ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that resonance Raman (RR) spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific samples' content and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, Raman spectroscopy
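A minimal sketch of the chemometric step is shown below, using scikit-learn's PLSRegression to relate spectra to a reference fatty acid measurement. The synthetic arrays, the number of latent variables, and the 10-fold cross-validation are illustrative assumptions; the study's preprocessing and model selection are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Stand-ins for real data: rows are preprocessed Raman spectra, and the
# target is a reference fatty acid content from gas chromatography.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 500))        # (samples, wavenumber bins)
fatty_acid = rng.normal(size=60)            # reference values

pls = PLSRegression(n_components=5)         # latent variables: assumed value
pred = cross_val_predict(pls, spectra, fatty_acid, cv=10).ravel()
ss_res = np.sum((fatty_acid - pred) ** 2)
ss_tot = np.sum((fatty_acid - fatty_acid.mean()) ** 2)
print(f"cross-validated R^2 = {1 - ss_res / ss_tot:.2f}")
```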
Procedia PDF Downloads 332
769 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems. The first is that analytic solutions may not be possible. The second is that, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite this, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. Multi-fidelity learning can help overcome these limitations: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
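As a rough illustration of the KWIK idea, the sketch below implements a generic KWIK-style mean estimator for a discrete state-action space: it returns a prediction only once enough samples have been seen and otherwise answers "unknown", the signal a multi-fidelity scheme can use to escalate to a higher-fidelity simulator. The threshold and data structures are assumptions, not the paper's algorithm.

```python
class KWIKMeanLearner:
    """Generic 'knows what it knows' mean estimator (illustrative).

    Predicts the mean outcome of a state-action pair only after seeing
    n_known samples; before that it answers None ("I don't know"), which
    a multi-fidelity framework can treat as a request for more samples.
    """

    def __init__(self, n_known=10):
        self.n_known = n_known
        self.sums, self.counts = {}, {}

    def update(self, sa, outcome):
        self.sums[sa] = self.sums.get(sa, 0.0) + outcome
        self.counts[sa] = self.counts.get(sa, 0) + 1

    def predict(self, sa):
        if self.counts.get(sa, 0) < self.n_known:
            return None  # unknown: escalate to a higher-fidelity simulator
        return self.sums[sa] / self.counts[sa]
```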
Procedia PDF Downloads 157
768 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (polysaccharides contribute up to 75% of their composition) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, and acetyl groups and the crystallinity of cellulose, contribute to its recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is generally applied before enzymatic treatment that removes the recalcitrant components of the biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical, and physico-chemical pre-treatments; a sequential, combinatorial pre-treatment strategy, combining two or more pre-treatments, was then applied to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored. Results showed that ultrasound treatment (31.06 mg/L) proved to be the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SPW. Finally, the results obtained were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and low formation of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatment methods rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
767 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, the costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. This paper presents an approach to quantify the cost-benefit of adding resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
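A minimal sketch of the index calculation is given below: a simple lossless battery model is stepped through minute-resolution production and consumption curves, and the loss-of-energy probability and expected unserved energy are computed from the resulting shortfalls. The battery model and all names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def reliability_indices(production_kw, consumption_kw, battery_kwh, dt_h=1 / 60):
    """Loss-of-energy probability and expected unserved energy from
    minute-resolution curves, using a simple lossless battery model."""
    soc = battery_kwh                       # state of charge, start full
    unserved = np.zeros(len(production_kw))
    for i, (p, c) in enumerate(zip(production_kw, consumption_kw)):
        net = (p - c) * dt_h                # kWh surplus (+) or deficit (-)
        if net >= 0:
            soc = min(battery_kwh, soc + net)
        else:
            draw = min(soc, -net)
            soc -= draw
            unserved[i] = -net - draw       # demand the system could not meet
    loep = float(np.mean(unserved > 0))     # fraction of curtailed intervals
    eue = float(unserved.sum())             # kWh unserved over the record
    return loep, eue
```

Repeating the calculation for each candidate increment of panel or battery capacity yields the incremental benefit-cost curve described above.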
Procedia PDF Downloads 234
766 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes, and steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes for its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases during desulphurisation as they become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency, and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor, containing Mo 870 ppm, Co 341 ppm, Al 508 ppm, and Fe 42 ppm, was subjected to the extraction step. The effect of extractant concentration was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current run at A/O = 1:1, with negligible extraction of Co and Al. Iron, however, was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, at 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM, and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants, and as a catalyst.
Keywords: Cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
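The counter-current staging can be illustrated with a simplified mass balance. The sketch below uses a Kremser-type expression with a constant distribution ratio D, an assumption made for illustration; the actual McCabe-Thiele construction uses the measured extraction isotherm, and the value of D shown is hypothetical.

```python
def countercurrent_raffinate(feed_ppm, stages, D, a_to_o=1.0):
    """Metal left in the aqueous raffinate after counter-current extraction,
    assuming a constant distribution ratio D = [Mo]org / [Mo]aq (a linear
    isotherm). Kremser-type result for the fraction not extracted."""
    E = D / a_to_o                          # extraction factor per stage
    if abs(E - 1.0) < 1e-12:
        remaining = 1.0 / (stages + 1)
    else:
        remaining = (E - 1.0) / (E ** (stages + 1) - 1.0)
    return feed_ppm * remaining

# Hypothetical D = 10 for illustration; feed 870 ppm Mo, two stages, A/O = 1:1
raff = countercurrent_raffinate(870.0, stages=2, D=10.0)
print(f"raffinate: {raff:.1f} ppm Mo ({100 * (1 - raff / 870):.1f}% extracted)")
```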
Procedia PDF Downloads 268
765 Initializing E-Classroom in a Multigrade School in the Philippines
Authors: Karl Erickson I. Ebora
Abstract:
Science and technology are two inseparable terms which bring wonders to all aspects of life, such as education, medicine, food production, and even the environment. In education, technology has become an integral part, as it brings many benefits to the teaching-learning process. However, the Philippines, being a developing country, has scarce resources, and not all schools enjoy the fruits brought by technology. Much of this burden falls on multigrade instruction. These schools are often the last priority in resource allocation since they have a limited number of students. In fact, it is not surprising that these schools do not have even a single computer unit, much less a computer laboratory. This paper sought to present a plan for how public schools would receive their e-classrooms. Specifically, this paper sought to answer questions such as: the level of school readiness in terms of facilities and equipment; the attitude of the respondents towards the use of the e-classroom; the level of teachers' familiarity with different e-classroom software; and the interventions undertaken by the school to make it e-classroom ready. After gathering and analysing the necessary data, this paper came up with the following conclusions. In terms of facilities and equipment, Guisguis Talon Elementary School (Main), though a multigrade school, is ready to receive an e-classroom. The respondents show a positive disposition towards technology utilization in teaching, as they strongly agree that technology plays an essential role in the teaching-learning process. They also strongly agree that technology is a good motivator; that it makes teaching and learning more interesting and effective; that it makes teaching easy; and that technology enhances students' learning. Additionally, teacher-respondents in Guisguis Talon Elementary School (Main) show familiarity with common software. They are very familiar with MS Word, MS Excel, MS PowerPoint, and internet and email. Moreover, they are very familiar with basic e-classroom computer operations and basic application software: they can do simple editing and formatting in MS Office; access and save information from CDs/DVDs, external hard drives, USB drives, and the like; and browse different search engines and educational sites effectively, downloading and uploading files. Likewise, respondents strongly agree with the interventions undertaken by the school to make it e-classroom ready. They strongly agree that funding and support are needed by the school; that stakeholders should be encouraged to consider donating equipment; that the school and community should try to mobilize their resources in order to help the school; that teachers should be provided with training in order to become technologically competent; and that principals and administrators should motivate their teachers to undergo continuous professional development.
Keywords: e-classroom, multi-grade school, DCP, classroom computers
Procedia PDF Downloads 200
764 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration
Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing
Abstract:
Inertial microfluidics has emerged recently as a promising tool for high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing is profoundly reliant on the cross-sectional shape of the channel, which impacts not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces in rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), all of which possess planes of symmetry, less effort has been devoted to the flow field structure of trapezoidal straight microchannels and its effects on inertial focusing; a rectilinear channel with a trapezoidal cross-section breaks all planes of symmetry. In this study, particle focusing dynamics inside trapezoid straight microchannels were first studied systematically for a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new arrangement of shear forces, led to a cross-lateral movement of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, as the channel Reynolds number increased further (Re > 50), the main lateral focusing position started to move back toward the middle and the shorter side wall, depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), the channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin is the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to Tan(α) ≈ 0.4 (Tan(α) = (Hlonger-sidewall − Hshorter-sidewall)/W) enhanced the off-center lateral focusing position from the middle of the channel cross-section up to ~20 percent of the channel width. It was found that the focusing point was spoiled near the slanted wall due to the asymmetry; particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall and Re (Re < 100, channel aspect ratio 4:1). Eventually, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and utilized for the continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).
Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid
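The dimensionless groups used above can be computed directly, as in the short sketch below; the example dimensions are chosen only to reproduce the quoted AR = 4 and Tan(α) = 0.4 and are not the study's actual channel dimensions.

```python
def trapezoid_parameters(a, w, h_min, h_max):
    """Dimensionless groups for a trapezoid channel (lengths in microns):
    clogging ratio K = a/Hmin, aspect ratio AR = W/Hmin, and slanted-wall
    slope Tan(alpha) = (Hmax - Hmin)/W."""
    return a / h_min, w / h_min, (h_max - h_min) / w

# Hypothetical geometry chosen to give AR = 4 and Tan(alpha) = 0.4
K, AR, tan_alpha = trapezoid_parameters(a=15, w=600, h_min=150, h_max=390)
print(f"K = {K:.2f}, AR = {AR:.1f}, Tan(alpha) = {tan_alpha:.2f}")
```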
Procedia PDF Downloads 224
763 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
This article describes a regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. Inefficient contracting of this amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, is verified when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, IIOC, is verified when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the resolution in 2015. The indicators used were the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a trend that was already under way before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to agents that their contracted values are not adequate to sustain service provision for their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
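As a rough illustration only, the sketch below encodes one plausible reading of the two incentives: IIE flags verified demand overrunning the contracted amount beyond a tolerance, and IIOC flags contracting well above verified demand. The 5% tolerance, the ratio test, and this reading of the resolution are all assumptions; NR 666/2015 defines the actual limits and penalties.

```python
def inefficiency_flags(contracted_mw, verified_mw, tolerance=0.05):
    """One plausible reading of NR 666/2015 (assumed, for illustration):
    IIE  - verified use overruns the contracted amount beyond a limit;
    IIOC - contracted amount exceeds verified use beyond a limit."""
    ratio = verified_mw / contracted_mw
    iie = ratio > 1.0 + tolerance       # used more than contracted
    iioc = ratio < 1.0 - tolerance      # contracted more than used
    return iie, iioc

# A distributor that contracted 100 MW but verified only 92 MW of use
print(inefficiency_flags(contracted_mw=100.0, verified_mw=92.0))  # (False, True)
```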
Procedia PDF Downloads 121
762 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution, which causes a change in momentum that can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites, which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating for the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators, depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees; see the sketch below. For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that panel. Another example is when a corner of the CubeSat faces the Sun, so that more than one side receives a considerable amount of sunlight; the code then analyzes the optimum opening angle for each panel and adjusts accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design. Beyond this, it also has advantages in terms of reliability and cost. One of the passive means is to make the whole chassis act as a heat sink. For this, we can make the entire chassis out of heat pipes and connect the heat source to the chassis with a thermal strap that transfers the heat.
Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
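A hypothetical version of that decision logic is sketched below: each panel opens in proportion to the cosine of the Sun's incidence on its outward normal, so sunlit panels expose their solar cells while shaded panels stay closed with their radiator fins stowed. The proportional law, the panel names, and the 180-degree maximum are assumptions for illustration, not the mission's actual control code.

```python
import numpy as np

def panel_commands(sun_vec_body, panel_normals, max_open_deg=180.0):
    """Open each panel in proportion to the Sun's incidence cosine on its
    outward normal (hypothetical control law). Shaded panels stay closed."""
    sun = sun_vec_body / np.linalg.norm(sun_vec_body)
    commands = {}
    for name, normal in panel_normals.items():
        cos_inc = float(np.dot(sun, normal / np.linalg.norm(normal)))
        commands[name] = max_open_deg * max(cos_inc, 0.0)
    return commands

# Sun on a corner between +X and +Y: both panels open partially
normals = {"+X": np.array([1.0, 0.0, 0.0]), "+Y": np.array([0.0, 1.0, 0.0])}
print(panel_commands(np.array([1.0, 1.0, 0.0]), normals))
```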
Procedia PDF Downloads 100
761 The Impact of Universal Design for Learning Implementation on Teaching Practices for Students with Intellectual Disabilities in the Kingdom of Saudi Arabia
Authors: Adnan Alhazmi
Abstract:
Background: UDL can be understood as a framework that holds the potential to elaborate alternatives and platforms for students with intellectual disabilities within general education settings, aiming to offer flexible pathways that can support all students in mastering learning goals. This system of learning addresses the problem of learner variability by delineating the diverse ways in which individuals can understand, conceive, express, and deal with information. Goal: The aim of the proposed research is to examine the impact of the implementation of UDL on teaching practices for students with intellectual disabilities in Saudi Arabian schools. Method: This research used a combination of quantitative and qualitative designs. Survey questionnaires were used to gather data under this analytical descriptive method. A qualitative interpretive approach was applied, with semi-structured interviews conducted to gather a detailed understanding of the research aim. Primary data were thus gathered through surveys and interviews to examine the impact of UDL implementation on teaching practices for intellectually disabled students in Saudi Arabian schools. The survey examined the prevailing teaching practices for students with intellectual disabilities in Saudi Arabia and evaluated whether teaching experience influences current practices. The surveys were distributed to 50 teachers who teach students with intellectual disabilities. The interviews, conducted with 10 teachers teaching the same subject, explored barriers to implementing UDL in Saudi Arabia and informed a suggested guideline for its implementation. Findings: A key finding of this study is that the UDL framework serves as a crucial guide for teachers within inclusive settings to undertake meaningful planning for individuals with intellectual disabilities so that they are able to access, participate, and grow within the general education curriculum. Other findings highlighted the need to prepare educators and all faculty members to understand the purpose of and need for inclusion, and the UDL framework, so that better information about academic and social expectations for individuals with intellectual disabilities can be delivered. Conclusion: On the basis of the preliminary study undertaken, it can be suggested that UDL can serve as effective support for the meaningful inclusion of students with intellectual disability (ID) in general educational settings. It holds potential as an instructional design framework for designing curricula for students with intellectual disabilities.
Keywords: intellectual disability, inclusion, universal design for learning, teaching practice
Procedia PDF Downloads 139
760 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes
Authors: Sofia Lazareva, Artem Smolentsev
Abstract:
Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory, and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between 'on' and 'off' via the photoisomerization of photochromic moieties, resulting in efficient Förster resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. The spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. The quantum yields of the photochromic transformation A↔B, ΦA→B and ΦB→A, as well as the extinction coefficients of the B isomers, were determined by the kinetic method. It was found that the photocyclization quantum yield of all compounds decreases as solvent polarity increases. In addition, solvent polarity was revealed to affect the fluorescence significantly. Increasing the solvent dielectric constant was found to result in a strong shift of the emission band position from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to the described dependence is an abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of the electron-donating properties of the nitrogen atom due to protonation. The effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescence band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods. Kinetic curves of excited-state absorption and fluorescence decay were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity dependent. Comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse. The most important fact is the presence of two decay processes in acetonitrile, whereas only one is present in hexane. This supports an assumption, made on the basis of preliminary steady-state experiments, that a TICT state is stabilized in polar solvents. The results achieved thus support the hypothesis of a two-channel mechanism of energy relaxation in the compounds studied.
Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state
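The lifetime extraction step can be illustrated with a standard biexponential fit, as sketched below; the synthetic data, the 5 ps and 80 ps lifetimes, and the two-component model (which would collapse to a single component in hexane) are assumptions for illustration, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two decay channels, as observed in acetonitrile."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic transient-absorption kinetics with assumed 5 ps / 80 ps lifetimes
t_ps = np.linspace(0.0, 400.0, 400)
rng = np.random.default_rng(1)
signal = biexp(t_ps, 0.7, 5.0, 0.3, 80.0) + rng.normal(0.0, 0.01, t_ps.size)

popt, _ = curve_fit(biexp, t_ps, signal, p0=(0.5, 2.0, 0.5, 50.0))
print(f"tau1 = {popt[1]:.1f} ps, tau2 = {popt[3]:.1f} ps")
```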
Procedia PDF Downloads 679
759 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external cardiac compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential for rebound hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest. Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). When necessary, infusions of inotropes and vasopressors were used, and, for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 53