Search results for: systems administration
436 Nursing Education in the Pandemic Time: Case Study
Authors: Jaana Sepp, Ulvi Kõrgemaa, Kristi Puusepp, Õie Tähtla
Abstract:
COVID-19 emerged in late 2019 and was officially recognized as a pandemic by the WHO in March 2020, and it has led to changes in the education sector. Educational institutions were closed, and most schools adopted distance learning. Estonia is known as a digitally well-developed country. Based on that, nursing education continued during the pandemic, and new technological solutions were implemented. To provide nursing education, particular focus was placed on quality and flexibility. The aim of this paper is to present administrative, digital, and technological solutions that supported Estonian nursing educators in continuing the study process during the pandemic and to develop a sustainable solution for nursing education for the future. This paper includes the authors’ analysis of the documents and decisions implemented in the institutions throughout the pandemic. It is a case study of Estonian nursing educators. Results of the analysis show that the implementation of distance learning principles demands the development of innovative strategies and techniques for the assessment of student performance and educational outcomes, as well as new strategies to encourage student engagement in the virtual classroom. Additionally, hospital internships were canceled, and the simulation approach was implemented extensively as a new opportunity to develop and assess students’ practical skills. Many other technical and administrative changes have also been carried out, such as students’ support and assessment systems and the design and conduct of hybrid and blended studies. All services were redesigned and made more available, individual, and flexible. Hence, the feedback system was changed, and information was collected in parallel with educational activities. Experiences of nursing education during the pandemic are widely presented in the scientific literature. To conclude our study, the authors have found evidence that the solutions implemented in Estonian nursing education allowed the students to graduate within the nominal study period without any decline in education quality. An operative information system and flexibility kept the distance between the students, support staff, and academic staff to a minimum, and the changes were implemented quickly and efficiently. Institution members were kept updated with the appropriate information, and this positively affected their satisfaction, motivation, and commitment. We recommend that the feedback process and system be permanently changed in the future to place all members in the same information area, that the hospital internship process be redefined, that hybrid learning be implemented, and that the communication system between stakeholders inside and outside the organization be improved. The main limitation of this study relates to the size of Estonia. Nursing education is provided by only two institutions, and the number of students is correspondingly low. The results could be generalized to institutions with a similar size and administrative system. In the future, the relationship between nurses’ performance and organizational outcomes should be investigated in depth, and the influence of pandemic-era education analyzed in the workplace.
Keywords: hybrid learning, nursing education, nursing, COVID-19
Procedia PDF Downloads 122
435 Integrating Dynamic Energy Models and Life Cycle Assessment Tools: Overcoming Challenges and Unlocking Opportunities
Authors: Ali Badiei
Abstract:
The increasing urgency of climate change mitigation underscores the necessity for integrating advanced analytical frameworks that encompass both energy dynamics and environmental impacts. This study focuses on the convergence of Dynamic Energy Models (DEMs) and Life Cycle Assessment (LCA) tools, highlighting their combined potential to address the dual challenges of accurate energy system modelling and comprehensive sustainability evaluation. While DEMs excel in simulating time-dependent energy performance, LCAs provide insights into the cumulative environmental impacts over a product or system's lifecycle, including embodied and operational emissions. The integration of these methodologies is fraught with challenges. Discrepancies in data granularity, temporal resolutions, and system boundaries often lead to inconsistencies that hinder seamless interoperability. Furthermore, the computational complexity of merging time-sensitive energy simulations with lifecycle inventories demands innovative approaches to data harmonization and software compatibility. Despite these barriers, such integration offers substantial opportunities for enhancing the precision of sustainability assessments and informing evidence-based policy decisions. This paper examines the state of the art through a comprehensive review of existing frameworks and applications. UK case studies on energy-efficient buildings, particularly those adhering to Passivhaus standards, serve as focal points for evaluating the combined use of DEMs and LCA tools. The findings reveal that, while Passivhaus buildings significantly reduce operational energy consumption—meeting ultra-low energy targets—their embodied carbon emissions often offset initial gains. This underscores the importance of using integrated tools to optimize both operational and embodied carbon reduction strategies. Key outcomes of this research include the identification of gaps in current methodologies and the proposition of a unified framework to bridge these gaps. The study also highlights opportunities to utilize these integrated tools for policy formation and industrial practice innovation. By facilitating a lifecycle-focused understanding of energy systems, the integration of DEMs and LCAs can inform policies that incentivize sustainable construction practices and guide investments in low-carbon technologies. In conclusion, overcoming the technical and methodological challenges of linking DEMs and LCAs is critical for achieving holistic energy system optimization and supporting global net-zero carbon goals. This research advocates for multidisciplinary collaboration between energy modelers, environmental scientists, and policymakers to unlock the full potential of these tools in fostering sustainable development.Keywords: energy, modelling, life cycle assessment, dynamic
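As a rough illustration of the whole-life accounting that a coupled DEM/LCA workflow supports, the short Python sketch below sums an embodied-carbon total with annual operational emissions over an assumed 60-year assessment period. All figures are illustrative placeholders, not results from the study.

```python
# Illustrative whole-life carbon comparison; the two inputs are the quantities
# a coupled DEM (operational energy) + LCA (embodied carbon) workflow supplies.

def whole_life_carbon(embodied_kgco2e, annual_operational_kgco2e, years=60):
    """Cumulative emissions over the assessment period (assumed 60 years)."""
    return embodied_kgco2e + annual_operational_kgco2e * years

# Hypothetical figures for a conventional dwelling vs. a Passivhaus-style one:
# lower operational emissions, but a higher embodied-carbon total.
conventional = whole_life_carbon(embodied_kgco2e=45_000, annual_operational_kgco2e=2_500)
passivhaus = whole_life_carbon(embodied_kgco2e=65_000, annual_operational_kgco2e=600)

print(f"Conventional: {conventional:,.0f} kgCO2e over 60 years")
print(f"Passivhaus:   {passivhaus:,.0f} kgCO2e over 60 years")
```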
Procedia PDF Downloads 12
434 Interplay of Material and Cycle Design in a Vacuum-Temperature Swing Adsorption Process for Biogas Upgrading
Authors: Federico Capra, Emanuele Martelli, Matteo Gazzani, Marco Mazzotti, Maurizio Notaro
Abstract:
Natural gas is a major energy source in the current global economy, contributing roughly 21% of total primary energy consumption. Production of natural gas from renewable energy sources is key to limiting the related CO2 emissions, especially for those sectors that heavily rely on natural gas use. In this context, biomethane produced via biogas upgrading represents a good candidate for partial substitution of fossil natural gas. The upgrading process of biogas to biomethane consists of (i) the removal of pollutants and impurities (e.g., H2S, siloxanes, ammonia, water), and (ii) the separation of carbon dioxide from methane. Focusing on the CO2 removal process, several technologies can be considered: chemical or physical absorption with solvents (e.g., water, amines), membranes, and adsorption-based systems (PSA). However, none has emerged as the leading technology, because of (i) the heterogeneity in plant size, (ii) the heterogeneity in biogas composition, which is strongly related to the feedstock type (animal manure, sewage treatment, landfill products), (iii) the case-sensitive optimal tradeoff between purity and recovery of biomethane, and (iv) the destination of the produced biomethane (grid injection, CHP applications, transportation sector). With this contribution, we explore the use of a technology for biogas upgrading and compare the resulting performance with benchmark technologies. The proposed technology makes use of a chemical sorbent, engineered by RSE and consisting of Di-Ethanol-Amine deposited on a solid support made of γ-Alumina, to chemically adsorb the CO2 contained in the gas. The material is packed into fixed beds that cyclically undergo adsorption and regeneration steps. CO2 is adsorbed at low temperature and ambient pressure (or slightly above), while the regeneration is carried out by pulling vacuum and increasing the temperature of the bed (vacuum-temperature swing adsorption - VTSA). Dynamic adsorption tests were performed by RSE and were used to tune the mathematical model of the process, including material and transport parameters (i.e., Langmuir isotherm data and heat and mass transport). Based on this set of data, an optimal VTSA cycle was designed. The results enabled a better understanding of the interplay between material and cycle tuning. As an exemplary application, the upgrading of biogas for grid injection, produced by an anaerobic digester (60-70% CO2, 30-40% CH4), for an equivalent size of 1 MWel was selected. A plant configuration is proposed to maximize heat recovery and minimize the energy consumption of the process. The resulting performance is very promising compared to benchmark solutions, which makes the VTSA configuration a valuable alternative for biomethane production starting from biogas.
Keywords: biogas upgrading, biogas upgrading energetic cost, CO2 adsorption, VTSA process modelling
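As a minimal sketch of the kind of equilibrium description mentioned above, the Python fragment below evaluates a temperature-dependent Langmuir isotherm at adsorption and regeneration conditions to estimate a cyclic working capacity. The parameter values (q_max, b0, dH) and the chosen operating points are placeholders, not the fitted data of the DEA/γ-alumina sorbent.

```python
import numpy as np

R = 8.314  # J/(mol K)

def langmuir_loading(p_co2, T, q_max=3.0, b0=1e-13, dH=-60e3):
    """Equilibrium CO2 loading [mol/kg] from a temperature-dependent Langmuir
    isotherm; q_max, b0 and dH are illustrative placeholders, not the fitted
    parameters of the DEA/gamma-alumina sorbent."""
    b = b0 * np.exp(-dH / (R * T))   # affinity grows as T drops (exothermic adsorption)
    return q_max * b * p_co2 / (1.0 + b * p_co2)

# One VTSA cycle: adsorption near ambient conditions (0.65 bar CO2 partial
# pressure, 313 K), regeneration under vacuum and heating (0.10 bar, 393 K).
q_ads = langmuir_loading(p_co2=0.65e5, T=313.0)
q_reg = langmuir_loading(p_co2=0.10e5, T=393.0)
print(f"cyclic working capacity ~ {q_ads - q_reg:.2f} mol/kg")
```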
Procedia PDF Downloads 281
433 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are the simplicity of fabrication, the low cost of manufacturing, and the combination of free-space power and scanning without a phase shifter. Integrated active antenna modeling is the coupling between the electromagnetic model and the transport model, which becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The current study focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques used to solve electromagnetic problems. In the class of numerical techniques, MOM is the dominant technique for solving Maxwell's and transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral method formulation based on the transposition of field problems into a generalised equivalent circuit that is simpler to treat. The method of the Generalised Equivalent Circuit (MGEC) was suggested in order to represent the integral equations by circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structures for describing the discontinuity and its environment. The aim of our developed method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM model of the MESFET GaAs transistor using an integral method. We begin by describing the modeling structure, which allows an equivalent EM scheme translating the electromagnetic equations considered to be defined. Secondly, the projection of these equations on common-type test functions leads to a linear matrix equation in which the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution, and the electric field distribution. From the electromagnetic calculations, we present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study aiming to map out the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements in order to provide a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
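The last algebraic step described above, projection onto test functions yielding a linear matrix equation whose solution gives the current-density amplitudes and the input impedance, can be sketched as follows. The matrix and excitation vector here are random placeholders standing in for the actual MOM-GEC operator.

```python
import numpy as np

# Minimal sketch of the final algebraic stage of a Method-of-Moments model:
# projecting the integral equations onto N test functions gives a dense linear
# system Z @ I = V, where I holds the unknown current-density amplitudes.
# Z and V below are random placeholders, not the MESFET/waveguide operator.

rng = np.random.default_rng(0)
N = 200                                    # number of test functions (assumed)
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
V = np.zeros(N, dtype=complex)
V[0] = 1.0                                 # unit excitation on the source mode

I = np.linalg.solve(Z, V)                  # current-density amplitudes
Z_in = V[0] / I[0]                         # input impedance seen at the source

print(f"input impedance (illustrative): {Z_in:.3f} ohm")
# Convergence would be checked by repeating this solve while increasing the
# number of guide modes and test functions, as described in the abstract.
```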
Procedia PDF Downloads 201
432 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation
Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis
Abstract:
The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as more advanced measurement techniques. Anisotropic Magneto Resistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed loop compensation system is proposed, developed and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. This system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed loop architecture Flux-Gate sensor (calibrated and certified) has been used for comparison purposes. Three different experimental setups have been constructed for the purposes of this work, utilized for DC magnetic field measurements, AC magnetic field measurements and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply was used to provide the operating current to the Helmholtz coils. The results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to examine the effective bandwidth of both the proposed system and the Flux-Gate sensor. A voltage controlled current source driven by a function generator was utilized for the Helmholtz coil excitation. The results were observed on the oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process. Each sensor was placed alone inside the shielding and its response was captured by the oscilloscope. The preliminary experimental results indicate that the closed loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open loop AMR system and the Flux-Gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the Flux-Gate sensor. All relevant numerical data are presented in the paper.
Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement
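A minimal sketch of how such linearity figures can be extracted from a DC calibration sweep is given below: the measured transfer curve is compared against its best straight-line fit and the maximum deviation is expressed as a percentage of full scale. The sweep data are synthetic, not measurements from the described setups.

```python
import numpy as np

# Quantify sensor non-linearity as the maximum deviation of the measured
# transfer curve from its best straight-line fit, as a percentage of full
# scale. The sweep below is a synthetic toy response, not measured data.

B_applied = np.linspace(-100e-6, 100e-6, 41)                  # applied field, tesla
v_out = 5e3 * B_applied + 2e-3 * np.tanh(B_applied / 60e-6)   # toy sensor output, volt

coeffs = np.polyfit(B_applied, v_out, 1)      # ideal linear response (best fit)
v_fit = np.polyval(coeffs, B_applied)

full_scale = v_out.max() - v_out.min()
nonlinearity = 100 * np.max(np.abs(v_out - v_fit)) / full_scale
print(f"max deviation from linearity: {nonlinearity:.2f} % of full scale")
```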
Procedia PDF Downloads 422
431 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique
Authors: Harpal Singh, Sakshi Batra
Abstract:
The widespread and easy access to multimedia content and the possibility of making numerous copies without significant loss of fidelity have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology, the concept of embedding data or a special pattern (watermark) in multimedia content; this information can later prove ownership in case of a dispute, trace the marked document’s dissemination, identify a misappropriating person or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry increases. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos; it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges compared to image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes various transforms for embedding watermarks on different layers by using hybrid systems. For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in two forms in the video frames using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which allows the watermarking method to be more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key embedding watermarking scheme. Thus, for watermark embedding and extraction, the same key is required; therefore the key must be shared between the owner and the verifier via some secure network. This paper demonstrates the performance by considering different qualitative metrics, namely Peak Signal to Noise Ratio, Structural Similarity Index and correlation values, and also applies several attacks to prove the robustness. Experimental results are presented to demonstrate that the proposed scheme can withstand a variety of video processing attacks while preserving imperceptibility.
Keywords: discrete wavelet transform, robustness, video watermarking, watermark
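A much-simplified, single-channel sketch of the DWT/SVD embedding step is shown below, assuming PyWavelets and NumPy are available; the full scheme additionally uses FrFT orders as a key, redundant wavelets and per-layer (RGB) embedding, which are omitted here.

```python
import numpy as np
import pywt

def embed_watermark(channel, watermark, alpha=0.05):
    """Embed a watermark in one colour channel of a video frame: DWT sub-band
    decomposition of the host, SVD of the LL band, additive modification of
    the singular values. Simplified illustration only."""
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(cA, full_matrices=False)
    Sw = np.linalg.svd(watermark.astype(float), compute_uv=False)
    S_marked = S + alpha * np.pad(Sw, (0, len(S) - len(Sw)))
    cA_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((cA_marked, (cH, cV, cD)), 'haar')

# toy usage: 256x256 host channel, 64x64 binary watermark
host = np.random.randint(0, 256, (256, 256))
mark = np.random.randint(0, 2, (64, 64)) * 255
marked = embed_watermark(host, mark)
print("PSNR of marked frame:", 10 * np.log10(255**2 / np.mean((marked - host) ** 2)))
```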
Procedia PDF Downloads 226
430 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System
Authors: Mozhdeh Shirinzadeh, Richard Stroetmann
Abstract:
Orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams and the main longitudinal girders. Due to the several advantages, Orthotropic Steel Deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity while having relatively low dead weight. In addition, cost efficiency and the ability of rapid field erection have made the orthotropic steel deck a popular type of bridge worldwide. However, OSD bridges are highly susceptible to fatigue damage. A large number of welded joints can be regarded as the main weakness of this system. This problem is, in particular, evident in the bridges which were built before 1994 when the fatigue design criteria had not been introduced in the bridge design codes. Recently, an Orthotropic-composite slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as a part of AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of Orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. In continuation, while using the achievements of that project, the application of a modified Ortho-composite deck for an existing typical OSD bridge is investigated. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. Regarding Eurocode criteria for different fatigue detail categories of an OSD bridge, the effect of the proposed modification approach is assessed. Moreover, a numerical parametric study is carried out utilizing finite element software to determine the impact of different variables, such as the size and arrangement of dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads. For the verification of the simulation technique, experimental results of a segment of an OCS deck are used conducted in project P1265. Fatigue assessment is performed based on the last draft of Eurocode 1993-2 (2024) for the most probable detail categories (Hot-Spots) that have been reported in the previous statistical studies. Then, an analytical comparison is provided between the typical orthotropic steel deck and the modified Ortho-composite deck bridge in terms of fatigue issues and durability. The load-bearing capacity of the bridge, the critical deflections, and the composite behavior are also evaluated and compared. Results give a comprehensive overview of the efficiency of the rehabilitation method considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details and practical aspects, as well as the economic point of view.Keywords: composite action, fatigue, finite element method, steel deck, bridge
Procedia PDF Downloads 86
429 Racial Distress in the Digital Age: A Mixed-Methods Exploration of the Effects of Social Media Exposure to Police Brutality on Black Students
Authors: Amanda M. McLeroy, Tiera Tanksley
Abstract:
The 2020 movement for Black Lives, ignited by anti-Black police brutality and exemplified by the public execution of George Floyd, underscored the dual potential of social media for political activism and perilous exposure to traumatic content for Black students. This study employs Critical Race Technology Theory (CRTT) to scrutinize algorithmic anti-blackness and its impact on Black youth's lives and educational experiences. The research investigates the consequences of vicarious exposure to police brutality on social media among Black adolescents through qualitative interviews and quantitative scale data. The findings reveal an unprecedented surge in exposure to viral police killings since 2020, resulting in profound physical, socioemotional, and educational effects on Black youth. CRTT forms the theoretical basis, challenging the notion of digital technologies as post-racial and neutral, aiming to dismantle systemic biases within digital systems. Black youth, averaging over 13 hours of daily social media use, face constant exposure to graphic images of Black individuals dying. The study connects this exposure to a range of physical, socioemotional, and mental health consequences, emphasizing the urgent need for understanding and support. The research proposes questions to explore the extent of police brutality exposure and its effects on Black youth. Qualitative interviews with high school and college students and quantitative scale data from undergraduates contribute to a nuanced understanding of the impact of police brutality exposure on Black youth. Themes of unprecedented exposure to viral police killings, physical and socioemotional effects, and educational consequences emerge from the analysis. The study uncovers how vicarious experiences of negative police encounters via social media lead to mistrust, fear, and psychosomatic symptoms among Black adolescents. Implications for educators and counselors are profound, emphasizing the cultivation of empathy, provision of mental health support, integration of media literacy education, and encouragement of activism. Recognizing family and community influences is crucial for comprehensive support. Professional development opportunities in culturally responsive teaching and trauma-informed approaches are recommended for educators. In conclusion, creating a supportive educational environment that addresses the emotional impact of social media exposure to police brutality is crucial for the well-being and development of Black adolescents. Counselors, through safe spaces and collaboration, play a vital role in supporting Black youth facing the distressing effects of social media exposure to police brutality.Keywords: black youth, mental health, police brutality, social media
Procedia PDF Downloads 56
428 Prevention and Treatment of Hay Fever Prevalence by Natural Products: A Phytochemistry Study on India and Iran
Authors: Tina Naser Torabi
Abstract:
The prevalence of allergy is affected by different factors depending on its underlying cause and seasonal weather changes, and it likewise requires various treatments. Although the reasons for the existence of allergy are not clear, allergens generally cause a reaction between antigen and antibody because of their antigenic traits. In this state, allergens cause the immune system to make a mistake and identify safe material as a threat; the function of the immune system is therefore impaired because of histamine secretion. There are different causes of allergy, but plant-related causes are at the top of the list, although animal causes cannot be ignored. An important point is that allergenic compounds cause the production of dedicated antibodies, so in general every kind of allergy is different from the others. Therefore, most of the plants in the allergenic category can cause various allergies in human beings, such as respiratory allergies, nutritional allergies, injection allergies, infection allergies and contact allergies, each of which shows different symptoms based on the cause of the allergy and each of which requires different prevention and treatment. Geographical condition is another factor affecting allergy. Seasonal changes, weather conditions and the variety of plant coverage play important roles in different allergies. It goes without saying that the humid climate and the variety of plant coverage in different seasons, especially spring, cause most allergies in human beings in Iran and India, which are discussed in this article. These two countries are good choices for studying allergy prevalence because of their conditions, varied plant coverage, and human and animal factors. Hay fever is one such allergy, although the reasons for its prevalence are not yet known. It is one of the most common allergies in Iran and India because of geographical, human, animal and plant-related factors, and it is at the top of the list in these two countries. A significant point about these two countries is that the plant-related factor is the most important one in the prevalence of hay fever. The variety of plant coverage, especially in spring during pollination, is the main reason for hay fever prevalence in these two countries. Based on research results in pharmacognosy and phytochemistry, the pollination of some plants in spring is a major reason for hay fever prevalence in these countries. If airborne pollens enter the human body through the air during the pollination season, they will cause allergic reactions in the eyes, nasal mucosa, lungs and respiratory system, and if these particles enter the body of a susceptible person through food, they will cause allergic reactions in the mouth, stomach and other parts of the digestive system. Occasionally, chemical materials produced by the human body, such as histamine, cause problems like the development of nasal polyps, nasal blockage, sleep disturbance, an increased risk of asthma, blood vasodilation, sneezing, eye tears, itching and swelling of the eyes and nasal mucosa, urticaria, a decrease in blood pressure, and rarely trauma, anesthesia, anaphylaxis and finally death. This article studies the reasons for hay fever prevalence in Iran and India and presents prevention and treatment methods from a phytochemistry and pharmacognosy point of view, using local natural products in these two countries.
Keywords: hay fever, India, Iran, natural treatment, phytochemistry
Procedia PDF Downloads 168
427 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, smooth double roll crusher is used for medium and fine crushing of soft to medium hard material. Due to opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. They are subject to intense wear, known as three-body abrasion, due to the action of abrasive products. The production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles, and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria. This results in sometimes very long delivery times which handicap the brickyards, in particular in respecting delivery times and honored the orders made by customers. The aim of this work is to investigate the effect of alloying additions on microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in brick industry. The base gray iron was melted in an induction furnace with low frequency at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to generate the desired composition. The chemical analysis of the bar samples was carried out using Emission Spectrometer Systems PV 8050 Series (Philips) except for the carbon, for which a carbon/sulphur analyser Elementrac CS-i was used. Unetched microstructure was used to evaluate the graphite flake morphology using the image comparison measurement method. At least five different fields were selected for quantitative estimation of phase constituents. The samples were observed under X100 magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera. SEM microscope equipped with EDS was used to characterize the phases present in the microstructure. The hardness (750 kg load, 5mm diameter ball) was measured with a Brinell testing machine for both treated and as-solidified condition test pieces. The test bars were used for tensile strength and metallographic evaluations. Mechanical properties were evaluated using tensile specimens made as per ASTM E8 standards. Two specimens were tested for each alloy. From each rod, a test piece was made for the tensile test. The results showed that the quenched and tempered alloys had best wear resistance at 400 °C for alloyed grey cast iron (containing 0.62%Mn, 0.68%Cr, and 1.09% Cu) due to fine carbides in the tempered matrix. In quenched and tempered condition, increasing Cu content in cast irons improved its wear resistance moderately. Combined addition of Cu and Cr increases hardness and wear resistance for a quenched and tempered hypoeutectic grey cast iron.Keywords: casting, cast iron, microstructure, heat treating
Procedia PDF Downloads 106
426 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px while the resolution of the Depth frame is 512px x 424px. As the resolution of the frames is not equal, RGB pixels are mapped onto the Depth pixels to make sure data is not lost even if the resolution is lower. The resulting RGB-D frames are collected and using the depth coordinates, a three dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. Depending on the relative distance from each cube, the three dimensional coordinate points from each point cloud is aligned to the reference frame to give a complete point cloud. The RGB data is used to correct for any errors in depth data for the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation which generates the rough surface of the limb. This technique forms an approximation of the surface of the limb. The mesh is smoothened to obtain a smooth outer layer to give an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file to design of the brace above the surface of the limb. The proposed system would be more cost effective than current systems that use MRI or CT scans for generating 3D models and would be quicker than using traditional plaster of Paris cast modelling and the overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory to perform modelling.Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration
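A reduced sketch of the merge-and-mesh stage described above is given below: per-sensor point clouds are brought into the common reference frame by rigid transforms (assumed already estimated from the coloured-cube skeleton) and a rough surface is produced by Delaunay triangulation using SciPy. The data and transforms are placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

def merge_clouds(clouds, transforms):
    """Bring each Kinect's point cloud (N x 3) into the common reference frame.
    'transforms' holds one (R, t) rigid transform per sensor, assumed to have
    been estimated from the known coloured-cube positions."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, transforms)]
    return np.vstack(merged)

def surface_mesh(points):
    """Rough surface approximation: a 2.5D Delaunay triangulation over (x, y),
    to be smoothed afterwards as described in the abstract."""
    tri = Delaunay(points[:, :2])
    return tri.simplices                 # triangle vertex indices

# toy usage with random data standing in for three registered views
rng = np.random.default_rng(1)
clouds = [rng.random((500, 3)) for _ in range(3)]
eye = np.eye(3)
transforms = [(eye, np.array([0.0, 0.0, 0.0])),
              (eye, np.array([0.5, 0.0, 0.0])),
              (eye, np.array([0.0, 0.5, 0.0]))]
limb = merge_clouds(clouds, transforms)
faces = surface_mesh(limb)
print(limb.shape, faces.shape)
```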
Procedia PDF Downloads 191
425 Predictors of Response to Interferone Therapy in Chronic Hepatitis C Virus Infection
Authors: Ali Kassem, Ehab Fawzy, Mahmoud Sef el-eslam, Fatma Salah- Eldeen, El zahraa Mohamed
Abstract:
Introduction: The combination of interferon (INF) and ribavirin is the preferred treatment for chronic hepatitis C viral (HCV) infection. However, nonresponse to this therapy remains common and is associated with several factors, such as HCV genotype and HCV viral load, in addition to host factors such as sex, HLA type and cytokine polymorphisms. Aim of the work: The aim of this study was to determine predictors of response to INF therapy in chronic HCV infected patients treated with INF alpha and ribavirin combination therapy. Patients and Methods: The present study included 110 patients (62 males, 48 females) with chronic HCV infection. Their ages ranged from 20-59 years. Inclusion criteria were organized according to the protocol of the Egyptian National Committee for Control of Viral Hepatitis. Patients included in this study were recruited to receive INF ribavirin combination therapy; 54 patients received pegylated INF α-2a (180 μg) and weight-based ribavirin therapy (1000 mg if < 75 kg, 1200 mg if > 75 kg) for 48 weeks, and 53 patients received pegylated INF α-2b (1.5 ug/kg/week) and weight-based ribavirin therapy (800 mg if < 65 kg, 1000 mg if 65-75 kg and 1200 mg if > 75 kg). One hundred and seven liver biopsies were included in the study and submitted to histopathological examination. Hematoxylin and eosin (H&E) stained sections were prepared to assess both the grade and the stage of chronic viral hepatitis, in addition to the degree of steatosis. The modified hepatic activity index (HAI) grading, modified Ishak staging and Metavir grading and staging systems were used. Laboratory follow up included HCV PCR at the 12th week to assess the early virologic response (EVR) and at the 24th week. HCV PCR was also done at the end of the course and tested 6 months later to document the end of treatment virologic response (ETR) and the sustained virologic response (SVR), respectively. Results: One hundred and seven patients, 62 males (57.9%) and 45 females (42.1%), completed the course and were included in this study. The age of patients ranged from 20-59 years with a mean of 40.39±10.03 years. Six months after the end of treatment, patients were categorized into two groups: Group (1): patients who achieved sustained virological response (SVR). Group (2): patients who did not achieve sustained virological response (non-SVR), including non-responders, breakthrough and relapsers. In our study, 58 (54.2%) patients showed SVR, 18 (16.8%) patients were non-responders, 15 (14%) patients showed breakthrough and 16 (15%) patients were relapsers. Univariate binary regression analysis of the possible risk factors of non-SVR showed that the significant factors were higher age, higher fasting insulin level, higher Metavir stage and higher grade of hepatic steatosis. Multivariate binary regression analysis showed that the only independent risk factor for non-SVR was a high fasting insulin level. Conclusion: Younger age, lower Metavir stage, lower steatosis grade and lower fasting insulin level are good predictors of SVR and could be used in predicting the treatment response to pegylated interferon/ribavirin therapy.
Keywords: chronic HCV infection, interferon ribavirin combination therapy, predictors to antiviral therapy, treatment response
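A sketch of the multivariate binary regression step on synthetic data is shown below, using scikit-learn logistic regression with predictor columns mirroring those named above (age, fasting insulin, Metavir stage, steatosis grade). The data and the resulting odds ratios are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic placeholder data with the same predictors as the abstract; the
# outcome is generated so that higher insulin raises the odds of non-SVR.
rng = np.random.default_rng(42)
n = 107
X = np.column_stack([
    rng.normal(40, 10, n),        # age (years)
    rng.normal(12, 5, n),         # fasting insulin (uIU/mL)
    rng.integers(0, 5, n),        # Metavir stage
    rng.integers(0, 4, n),        # steatosis grade
])
logit = -3 + 0.2 * (X[:, 1] - 12)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # non-SVR indicator

model = LogisticRegression(max_iter=500).fit(X, y)
odds_ratios = np.exp(model.coef_[0])
for name, orr in zip(["age", "insulin", "metavir", "steatosis"], odds_ratios):
    print(f"{name:10s} OR ~ {orr:.2f}")
```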
Procedia PDF Downloads 398
424 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time-scales and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique forming part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the original signal's Fourier spectrum. The method used is based on the idea utilized in the construction of both Littlewood-Paley and Meyer’s wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability as in the EMD technique. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques by using the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
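A minimal sketch of the Fourier-spectrum segmentation at the core of EWT, which EAWD would drive using the spectral content of the EMD modes, is given below on a synthetic two-component series; the peak-detection threshold and the test signal are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import argrelextrema

# Detect local maxima of the Fourier magnitude spectrum and place filter-bank
# boundaries at the minima between consecutive maxima, as in EWT. The signal
# below is a synthetic annual + ~11-year two-component series, not ozone data.

fs = 12.0                            # samples per year (monthly data)
t = np.arange(0, 40, 1 / fs)         # 40 "years"
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.09 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

peaks = argrelextrema(spec, np.greater, order=5)[0]
peaks = peaks[spec[peaks] > 0.1 * spec.max()]        # keep significant maxima

# boundary between two adjacent peaks = frequency of the minimum between them
boundaries = [freqs[i + np.argmin(spec[i:j])] for i, j in zip(peaks[:-1], peaks[1:])]
print("detected peaks (cycles/yr):", np.round(freqs[peaks], 3))
print("segment boundaries (cycles/yr):", np.round(boundaries, 3))
```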
Procedia PDF Downloads 140
423 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg Alloys
Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı
Abstract:
The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures are the temperature gradient and the grain growth rate, which are called solidification parameters. The solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys containing eutectic compositions. The solidification behavior of multicomponent and multiphase systems is an important parameter for determining various properties of these materials. Research has been conducted mostly on the solidification of pure materials or alloys containing two phases. However, there are very few studies in the literature about multiphase reactions and the microstructure formation of multicomponent alloys during solidification. Because of this situation, it is important to study the microstructure formation and the thermodynamic, thermophysical and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With an increasing amount of Mg inside Al alloys, the specific weight decreases and the strength shows a slight increase, while ductility is lowered due to the formation of the β-Al8Mg5 phase. For this reason, the production, examination and development of high Mg containing alloys will initiate the production of new advanced engineering materials. The original value of this research can be described as obtaining high Mg containing (> 30% Mg) Al based multicomponent alloys by melting under vacuum; controlled directional solidification with various growth rates at a constant temperature gradient; and establishing the relationship between solidification rate and microstructural, mechanical, electrical and thermal properties. Therefore, within the scope of this research, several ternary or quaternary Al alloy compositions containing > 30% Mg were determined, and it was planned to investigate the effects of the directional solidification rate on the mechanical, electrical and thermal properties of these alloys. Within the scope of the research, the influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity and thermal conductivity of directionally solidified high Mg containing Al-32,2Mg-0,37Si; Al-30Mg-12Zn; Al-32Mg-1,7Ni; Al-32,2Mg-0,37Fe; Al-32Mg-1,7Ni-0,4Si; Al-33,3Mg-0,35Si-0,11Fe (wt.%) alloys over a wide range of growth rates (50-2500 µm/s) and at a fixed temperature gradient will be investigated.
The work can be planned as: (a) directional solidification of Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si and Al-Mg-Si-Fe alloys over a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system, (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and Scanning Electron Microscopy (SEM), (c) measurement of the microhardness and tensile strength of the directionally solidified alloys, (d) measurement of electrical conductivity by the four-point probe technique at room temperature, and (e) measurement of thermal conductivity by the linear heat flow method at room temperature.
Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity
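As a brief illustration of the room-temperature measurements listed in (d) and (e) above, the sketch below applies the standard four-point probe and linear heat flow relations; all readings and sample dimensions are assumed placeholder values, not data from this work.

```python
import math

def four_point_probe_conductivity(V, I, probe_spacing):
    """Bulk (semi-infinite sample) four-point probe: resistivity = 2*pi*s*V/I."""
    resistivity = 2 * math.pi * probe_spacing * V / I     # ohm*m
    return 1.0 / resistivity                               # S/m

def linear_heat_flow_conductivity(Q, length, area, dT):
    """Steady linear heat flow: k = Q*L / (A*dT)."""
    return Q * length / (area * dT)                        # W/(m*K)

# illustrative readings and sample dimensions (assumed)
sigma = four_point_probe_conductivity(V=1.2e-6, I=100e-3, probe_spacing=1.59e-3)
k = linear_heat_flow_conductivity(Q=5.0, length=0.02, area=1e-4, dT=8.0)
print(f"electrical conductivity ~ {sigma:.3e} S/m")
print(f"thermal conductivity ~ {k:.1f} W/(m*K)")
```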
Procedia PDF Downloads 262
422 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling
Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather
Abstract:
New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles in the range of 0.5 to 1.5 wt.% to form the so-called nano-heat-transfer-fluid, apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurements of various multi-scale forces whose characteristic length and timescales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at various volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth order Runge-Kutta algorithm with a very small time-step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms are responsible for the predictive model of aggregation of nano-suspensions. An energy transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique. The values are in excellent agreement with the theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (represented by both the repulsive electric double layer and an attractive Van der Waals component) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, which leads to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling
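A heavily reduced sketch of one Lagrangian particle update is given below: Stokes drag integrated with a classical RK4 step plus a Brownian displacement added per step. Inter-particle DLVO and collision forces are omitted, and the fluid properties are assumed values rather than those of the molten-salt system.

```python
import numpy as np

kB = 1.380649e-23         # Boltzmann constant, J/K
T = 573.0                 # K, within the molten-salt temperature range
d = 50e-9                 # particle diameter, m
mu = 1.5e-3               # carrier viscosity, Pa s (assumed)
rho_p = 3950.0            # Al2O3 density, kg/m3
tau = rho_p * d**2 / (18 * mu)          # particle response time
D = kB * T / (3 * np.pi * mu * d)       # Stokes-Einstein diffusivity

def accel(v, u_fluid):
    return (u_fluid - v) / tau          # Stokes drag acceleration

def step(v, u_fluid, dt, rng):
    # RK4 for the deterministic drag term
    k1 = accel(v, u_fluid)
    k2 = accel(v + 0.5 * dt * k1, u_fluid)
    k3 = accel(v + 0.5 * dt * k2, u_fluid)
    k4 = accel(v + dt * k3, u_fluid)
    v_new = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    # Brownian motion added as a random displacement per step
    dx = v_new * dt + rng.normal(0.0, np.sqrt(2 * D * dt), 3)
    return v_new, dx

rng = np.random.default_rng(0)
v, x = np.zeros(3), np.zeros(3)
for _ in range(1000):                    # 1000 steps of 1e-11 s = 10 ns
    v, dx = step(v, np.zeros(3), 1e-11, rng)
    x += dx
print("net displacement after 10 ns:", x)
```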
Procedia PDF Downloads 194
421 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts are the attractive research areas since past few decades, as these changes can directly be implemented in chemical sensing, molecular recognition, fluorescence imaging and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process display by some specific dyes. The process is quite amenable to tunability by the presence of different macrocyclic hosts. The present study explores the interesting effect of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. However, in the excited state, the excited N* form undergoes ESIPT along with its pre-existing intramolecular hydrogen bonds, giving the excited state prototautomer (T*). Accordingly, CZ shows a single absorption band due to N form, but two emission bands due to N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye gets bound to SCXn hosts. However, in spite of lower binding affinity, the inhibition is more profound with SCX6 host as compared to SCX4 host. For CD-CZ system, while prototautomerization process is hindered by the presence of β-CD, it remains unaffected in the presence of γCD. Reduction in the prototautomerization process of CZ by SCXn and βCD hosts is unusual, because T* form is less dipolar in nature than the N*, hence binding of CZ within relatively hydrophobic hosts cavities should have enhanced the prototautomerization process. At the same time, considering the similar chemical nature of two CD hosts, their effect on prototautomerization process of CZ would have also been similar. The atypical effects on the prototautomerization process of CZ by the studied hosts are suggested to arise due to the partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding interaction between CZ dye and the functional groups present at the portals of SCXn and βCD hosts. Formation of these intermolecular H-bonds effectively causes the pre-existing intramolecular H-bonding network within CZ molecule to become weak, and this consequently reduces the prototautomerization process for the dye. Our results suggest that rather than the binding affinity between the dye and host, it is the orientation of CZ in the case of SCXn-CZ complexes and the binding stoichiometry in the case of CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of the dye CZ. In the case of SCXn-CZ complexes, the results obtained through experimental findings are well supported by quantum chemical calculations. Similarly for CD-CZ systems, binding stoichiometries obtained through geometry optimization studies on the complexes between CZ and CD hosts correlate nicely with the experimental results. Formation of βCD-CZ complexes with 1:1 stoichiometry while formation of γCD-CZ complexes with 1:1, 1:2 and 2:2 stoichiometries are revealed from geometry optimization studies and these results are in good accordance with the observed effects by the βCD and γCD hosts on the ESIPT process of CZ dye.Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
Procedia PDF Downloads 124
420 Distributed Energy Resources in Low-Income Communities: a Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. The Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, it is observed that most projects involving this technology in Brazil are restricted to the wealthiest classes of society, not yet reaching the low-income population, aligned with theories of energy justice. Considering the research for energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, just granting this benefit may not be effective, and it is possible to merge it with DER technologies, such as the PVDG. Thus, this work aims to evaluate the economic viability of the policy to replace the social electricity tariff (the current policy aimed at the low-income population in Brazil) by PVDG projects. To this end, a proprietary methodology was developed that included: mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG compared to SET; which consumers in a given city would have increased subsidies, which are now provided for solar energy in Brazil and for the social tariff. An economic model was created for verifying the feasibility of the proposed policy in each municipality in the country, considering geographic issues (tariff of a particular distribution utility, radiation from a specific location, etc.). To validate these results, four sensitivity analyzes were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax. The behind-the-meter modality of generation proved to be more promising than the construction of a shared plant. However, although the behind-the-meter modality presents better results than the shared plant, there is a greater complexity in adopting this modality due to issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks, need to reinforce roofs). Considering the shared power plant modality, many opportunities are still envisaged since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative due to the mitigation of the risk of default, as it allows greater control of users and facilitates the process of operation and maintenance. Finally, it was also found, that in some regions of Brazil, the continuity of the SET presents more economic benefits than its replacement by PVDG. However, the proposed policy offers many opportunities. For future works, the model may include other parameters, such as cost with low-income populations’ engagement, and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.Keywords: low income, subsidy policy, distributed energy resources, energy justice
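A minimal sketch of the per-municipality screening described above compares a household's annual bill under the social tariff discount with the bill when part of consumption is offset by behind-the-meter PV, including a simultaneity factor. All tariffs, consumption levels and yields are illustrative placeholders, not Brazilian data.

```python
# Compare the annual bill under the social electricity tariff (SET) discount
# with the bill when behind-the-meter PV offsets part of consumption.
# Every figure below is an assumed placeholder for illustration only.

def annual_bill_set(consumption_kwh, tariff, set_discount=0.35):
    return consumption_kwh * tariff * (1 - set_discount)

def annual_bill_pv(consumption_kwh, tariff, pv_generation_kwh, simultaneity=0.6):
    offset = min(consumption_kwh, pv_generation_kwh * simultaneity)
    return (consumption_kwh - offset) * tariff

consumption = 1800.0      # kWh/year for a low-income household (assumed)
tariff = 0.75             # currency units per kWh, local utility (assumed)
pv_yield = 1500.0         # kWh/year per installed kWp at local irradiation (assumed)

bill_set = annual_bill_set(consumption, tariff)
bill_pv = annual_bill_pv(consumption, tariff, pv_generation_kwh=1.0 * pv_yield)
print(f"SET bill: {bill_set:.0f}  |  PV bill: {bill_pv:.0f}")
print("PVDG preferable" if bill_pv < bill_set else "SET preferable")
```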
Procedia PDF Downloads 117
419 Accelerating Personalization Using Digital Tools to Drive Circular Fashion
Authors: Shamini Dhana, G. Subrahmanya VRK Rao
Abstract:
The fashion industry is advancing towards a mindset of zero waste, personalization, creativity, and circularity. The trend of upcycling clothing and materials into personalized fashion is being demanded by the next generation. There is a need for a digital tool to accelerate the process towards mass customization. Dhana’s D/Sphere fashion technology platform uses digital tools to accelerate upcycling. In essence, advanced fashion garments can be designed and developed via reuse, repurposing, recreating activities, and using existing fabric and circulating materials. The D/Sphere platform has the following objectives: to provide (1) An opportunity to develop modern fashion using existing, finished materials and clothing without chemicals or water consumption; (2) The potential for an everyday customer and designer to use the medium of fashion for creative expression; (3) A solution to address the global textile waste generated by pre- and post-consumer fashion; (4) A solution to reduce carbon emissions, water, and energy consumption with the participation of all stakeholders; (5) An opportunity for brands, manufacturers, retailers to work towards zero-waste designs and as an alternative revenue stream. Other benefits of this alternative approach include sustainability metrics, trend prediction, facilitation of disassembly and remanufacture deep learning, and hyperheuristics for high accuracy. A design tool for mass personalization and customization utilizing existing circulating materials and deadstock, targeted to fashion stakeholders will lower environmental costs, increase revenues through up to date upcycled apparel, produce less textile waste during the cut-sew-stitch process, and provide a real design solution for the end customer to be part of circular fashion. The broader impact of this technology will result in a different mindset to circular fashion, increase the value of the product through multiple life cycles, find alternatives towards zero waste, and reduce the textile waste that ends up in landfills. This technology platform will be of interest to brands and companies that have the responsibility to reduce their environmental impact and contribution to climate change as it pertains to the fashion and apparel industry. Today, over 70% of the $3 trillion fashion and apparel industry ends up in landfills. To this extent, the industry needs such alternative techniques to both address global textile waste as well as provide an opportunity to include all stakeholders and drive circular fashion with new personalized products. This type of modern systems thinking is currently being explored around the world by the private sector, organizations, research institutions, and governments. This technological innovation using digital tools has the potential to revolutionize the way we look at communication, capabilities, and collaborative opportunities amongst stakeholders in the development of new personalized and customized products, as well as its positive impacts on society, our environment, and global climate change.Keywords: circular fashion, deep learning, digital technology platform, personalization
Procedia PDF Downloads 67418 Learning the Most Common Causes of Major Industrial Accidents and Apply Best Practices to Prevent Such Accidents
Authors: Rajender Dahiya
Abstract:
Investigation outcomes of major process incidents have been consistent for decades and validate that the causes and consequences are often identical. The debate remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, manufacturing, etc. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. This case study will provide a clear understanding of the importance of near-miss incident investigation. The incident was a safe operating limit excursion. The case describes the deficiencies in management programs, the competency of employees, and the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of the recurrence of incidents. Several investigations of major incidents discovered that each showed several warning signs before occurring, and most importantly, all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of learning failures from past major incidents. The leading causes of past incidents are summarized below. The first is management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials. This process starts early in the project stage and continues throughout the life cycle of the facility; for example, a poorly done hazard study such as a HAZID, PHA, or LOPA is one of the leading causes of failure. If this step is performed correctly, then the next potential cause is management failure to maintain the integrity of safety-critical systems and equipment. In most of the incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were either bypassed, disabled, or not maintained. The third major cause is management failure to learn and/or apply learning from past incidents. There were several precursors before those incidents, which were either ignored altogether or not taken seriously. This paper will conclude by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing the risks to prevent major incidents.Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention
Procedia PDF Downloads 60417 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances
Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia
Abstract:
A central question in ecology refers to the relative importance that local-scale variables have over community composition when compared with regional-scale variables. In coastal environments, strong seasonal abiotic influence dominates these systems, weakening the impact of other parameters like micronutrients. After the industrial revolution, micronutrients like trace metals have increased in the ocean as pollutants, with strong effects upon biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its strong influence on the carbon cycle, although diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model to study seasonal changes due to strong seasonal differences in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, a six-year time series of satellite data and in situ measurements with traditional oceanographic equipment (CTDO) were used. In addition, trace metal concentrations were analyzed through ICP-MS for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomical characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis. Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. However, the trace metal analysis allowed us to identify strong differentiation within the study area, dividing it into two zones based on trace metal concentrations. Biological data indicate that there are no major changes in diversity but a significant fluctuation in evenness and community structure. These changes are related mainly to regional parameters, like temperature, but by analyzing the influence of metals on picoplankton community structure, we identified a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbances, as a derived outcome of toxic effects and tolerance.Keywords: Picoeukaryotes, plankton communities, trace metals, seasonal patterns
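As an illustration of the kind of multivariate analysis referred to above (not the authors' exact workflow), the sketch below ordinates an invented community table with Bray-Curtis dissimilarity and checks how the ordination relates to a regional variable (temperature) and a local one (a trace metal). All data and variable names are assumptions for illustration.

```python
# Minimal sketch of a community-vs-environment multivariate analysis.
# Data are simulated; this is not the study's dataset or pipeline.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Rows = samples, columns = OTU/ASV relative abundances (hypothetical).
otu_table = rng.dirichlet(np.ones(20), size=12)
temperature = rng.uniform(12, 18, size=12)    # regional driver (e.g., SST, deg C)
trace_metal = rng.uniform(0.1, 5.0, size=12)  # local driver (e.g., dissolved Cu)

# Bray-Curtis dissimilarity between samples, then a 2-D ordination.
bc = squareform(pdist(otu_table, metric="braycurtis"))
ord2d = MDS(n_components=2, dissimilarity="precomputed",
            random_state=0).fit_transform(bc)

# Rank correlation of the first ordination axis with each environmental variable.
for name, env in [("temperature", temperature), ("trace metal", trace_metal)]:
    rho, p = spearmanr(ord2d[:, 0], env)
    print(f"axis 1 vs {name}: rho={rho:+.2f}, p={p:.3f}")
```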
Procedia PDF Downloads 175416 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites
Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby
Abstract:
In space, electronic devices are constantly exposed to radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, we need a new approach and a thermal control system that is smaller than those on conventional satellites and that demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites due to the smaller size. With only a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving microsatellites' thermal difficulty. In other words, PCMs can absorb or release heat in the form of latent heat, changing their phase and minimizing the temperature fluctuation around the phase change point. The main restriction for these systems is the low thermal conductivity of common PCMs. Because a PCM has low thermal conductivity, melting and solidification times increase, which is not suitable for specific applications like electronic cooling. In order to increase the thermal conductivity, nanoparticles are introduced. Adding nanoparticles to the base PCM increases the thermal conductivity, and the increase grows with weight concentration. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material (NePCM). Silver nanostructures increase the thermal properties of the base PCM, eicosane. Different weight concentrations (1, 2, 3.5, 5, 6.5, 8, and 10%) of silver-enhanced phase change material were considered. Both steady-state and transient analyses were performed to compare the characteristics of the nanoparticle-enhanced phase change material at different heat loads. Results showed that in steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased. With the increase in thermal conductivity, more heat was absorbed into the NePCM panel. In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system was reduced, as the melting point of the material decreases with increasing weight concentration. However, for a heat load of at most 20 W, the model with NePCM did not reach the melting point temperature, showing that the model with NePCM is capable of holding a larger heat load. In order to study the heat load capacity, double the load was applied: a maximum of 40 W was given in the first half of the cycle, and a constant 0 W in the other half. A higher temperature was obtained compared with the lower heat load. The panel maintained a constant temperature for a long duration, consistent with the NePCM melting point. In both analyses, the uniformity of the temperature of the TESP was shown. Using Ag-NePCM allows maintaining a constant peak temperature near the melting point. Therefore, by altering the weight concentration of the Ag-NePCM, it is possible to create the optimum operating temperature required for the effective working of the electronic components.Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage
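To illustrate why effective thermal conductivity rises with nanoparticle loading, the sketch below applies the classical Maxwell effective-medium model to the weight concentrations listed above. The property values are nominal assumptions, not the paper's inputs, and the paper's own numerical model is not reproduced here.

```python
# Rough sketch: effective thermal conductivity of Ag-enhanced eicosane versus
# silver loading, using the Maxwell model. Property values are assumed.
k_pcm = 0.25      # W/m-K, eicosane (assumed)
k_ag = 429.0      # W/m-K, silver (assumed)
rho_pcm = 810.0   # kg/m^3, eicosane (assumed)
rho_ag = 10490.0  # kg/m^3, silver

def weight_to_volume_fraction(w):
    """Convert nanoparticle weight fraction to volume fraction."""
    return (w / rho_ag) / (w / rho_ag + (1.0 - w) / rho_pcm)

def maxwell_k_eff(phi):
    """Maxwell effective-medium conductivity for dilute spherical inclusions."""
    num = k_ag + 2.0 * k_pcm + 2.0 * phi * (k_ag - k_pcm)
    den = k_ag + 2.0 * k_pcm - phi * (k_ag - k_pcm)
    return k_pcm * num / den

for w_pct in (1, 2, 3.5, 5, 6.5, 8, 10):
    phi = weight_to_volume_fraction(w_pct / 100.0)
    print(f"{w_pct:>4}% wt Ag -> phi={phi:.4f}, k_eff={maxwell_k_eff(phi):.3f} W/m-K")
```

Because silver is so much denser than eicosane, the volume fractions stay small, so the Maxwell estimate rises only modestly; percolating nanostructures can raise conductivity further than this dilute-inclusion model predicts.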
Procedia PDF Downloads 205415 Management of Mycotoxin Production and Fungicide Resistance by Targeting Stress Response System in Fungal Pathogens
Authors: Jong H. Kim, Kathleen L. Chan, Luisa W. Cheng
Abstract:
Control of fungal pathogens, such as foodborne mycotoxin producers, is problematic as effective antimycotic agents are often very limited. Mycotoxin contamination significantly interferes with the safe production of foods or crops worldwide. Moreover, the expansion of fungal resistance to commercial drugs or fungicides is a global human health concern. Therefore, there is a persistent need to enhance the efficacy of commercial antimycotic agents or to develop new intervention strategies. Disruption of the cellular antioxidant system should be an effective method for pathogen control. Such disruption can be achieved with safe, redox-active compounds. Natural phenolic derivatives are potent redox cyclers that inhibit fungal growth through destabilization of the cellular antioxidant system. The goal of this study is to identify novel, redox-active compounds that disrupt the fungal antioxidant system. The identified compounds could also function as sensitizing agents to conventional antimycotics (i.e., chemosensitization) to improve antifungal efficacy. Various benzo derivatives were tested against fungal pathogens. Gene deletion mutants of the yeast Saccharomyces cerevisiae were used as model systems for identifying molecular targets of the benzo analogs. The efficacy of identified compounds as potent antifungal agents or as chemosensitizing agents to commercial drugs or fungicides was examined with methods outlined by the Clinical and Laboratory Standards Institute or the European Committee on Antimicrobial Susceptibility Testing. Selected benzo derivatives possessed potent antifungal or antimycotoxigenic activity. Molecular analyses using S. cerevisiae mutants indicated that the antifungal activity of the benzo derivatives occurred through disruption of the cellular antioxidant or cell wall integrity system. Certain benzo analogs screened overcame the tolerance of Aspergillus signaling mutants, namely mitogen-activated protein kinase mutants, to the fungicide fludioxonil. Synergistic antifungal chemosensitization greatly lowered the minimum inhibitory or fungicidal concentrations of test compounds, including inhibitors of mitochondrial respiration. Of note, salicylaldehyde is a potent antimycotic volatile that has some practical application as a fumigant. Altogether, benzo derivatives targeting the cellular antioxidant system of fungi (along with the cell wall integrity system) effectively suppress fungal growth. Candidate compounds possess the antifungal, antimycotoxigenic, or chemosensitizing capacity to augment the efficacy of commercial antifungals. Therefore, chemogenetic approaches can lead to the development of novel antifungal intervention strategies that enhance the efficacy of established microbe intervention practices and overcome drug/fungicide resistance. Chemosensitization further reduces costs and alleviates the negative side effects associated with current antifungal treatments.Keywords: antifungals, antioxidant system, benzo derivatives, chemosensitization
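As an illustration of how the reported chemosensitization synergy could be quantified, the sketch below computes a fractional inhibitory concentration (FIC) index of the kind derived from a checkerboard assay. The abstract does not report these numbers; the compound pairing and MIC values are invented.

```python
# Illustrative only: quantifying antifungal synergy with a FIC index.
# MIC values below are hypothetical, not from the study.

def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """FIC index = MIC_A(combo)/MIC_A(alone) + MIC_B(combo)/MIC_B(alone).
    A value <= 0.5 is usually read as synergy, > 4 as antagonism."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical example: a benzo-derivative sensitizer combined with fludioxonil.
fic = fic_index(mic_a_alone=64.0, mic_a_combo=8.0,   # benzo analog (ug/mL)
                mic_b_alone=2.0,  mic_b_combo=0.25)  # fludioxonil (ug/mL)
print(f"FIC index = {fic:.2f} ->", "synergy" if fic <= 0.5 else "no synergy")
```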
Procedia PDF Downloads 263414 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
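To make the 1-hot encoding idea named above concrete, the following is a minimal sketch, not taken from the paper: an integer variable is expanded into binary variables with a quadratic penalty forcing exactly one bit on, producing a QUBO dictionary of the kind embeddable on annealing hardware or solvable by classical simulated annealing. The objective, penalty weight, and brute-force check are all illustrative assumptions.

```python
# Sketch of a 1-hot QUBO encoding for a single integer variable x in {0,...,K-1}:
# K binary variables b_k with penalty P*(sum_k b_k - 1)^2 enforcing exactly one bit.
from itertools import combinations

def one_hot_qubo(values, linear_cost, penalty=10.0):
    """QUBO for minimizing linear_cost * x, with x one-hot encoded over `values`."""
    qubo = {}
    # Expanding P*(sum b_k - 1)^2 gives -P on the diagonal and +2P off-diagonal
    # (the constant +P is dropped since it does not change the argmin).
    for k, v in enumerate(values):
        qubo[(k, k)] = linear_cost * v - penalty
    for i, j in combinations(range(len(values)), 2):
        qubo[(i, j)] = 2.0 * penalty
    return qubo

def brute_force_min(qubo, n):
    """Exhaustive energy minimization; only feasible for tiny instances."""
    best = None
    for m in range(2 ** n):
        bits = [(m >> k) & 1 for k in range(n)]
        e = sum(c * bits[i] * bits[j] for (i, j), c in qubo.items())
        if best is None or e < best[0]:
            best = (e, bits)
    return best

values = [0, 1, 2, 3]                       # integer variable x in {0, 1, 2, 3}
qubo = one_hot_qubo(values, linear_cost=1.5)
energy, bits = brute_force_min(qubo, len(values))
print("selected bits:", bits, "-> x =", values[bits.index(1)])
```

In practice, each constraint of the MINLP instance contributes its own penalty terms, and the resulting QUBO must then be minor-embedded onto the hardware graph, which is where qubit connectivity becomes the limiting factor.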
Procedia PDF Downloads 527413 Effects of Bipolar Plate Coating Layer on Performance Degradation of High-Temperature Proton Exchange Membrane Fuel Cell
Authors: Chen-Yu Chen, Ping-Hsueh We, Wei-Mon Yan
Abstract:
Over the past few centuries, human requirements for energy have been met by burning fossil fuels. However, exploiting this resource has led to global warming and innumerable environmental issues. Thus, finding alternative solutions to the growing demands for energy has recently been driving the development of low-carbon and even zero-carbon energy sources. Wind power and solar energy are good options, but they have the problem of unstable power output due to unpredictable weather conditions. To overcome this problem, a reliable and efficient energy storage sub-system is required in future distributed-power systems. Among all kinds of energy storage technologies, the fuel cell system with hydrogen storage is a promising option because it is suitable for large-scale and long-term energy storage. The high-temperature proton exchange membrane fuel cell (HT-PEMFC) with metallic bipolar plates is a promising fuel cell system because an HT-PEMFC can tolerate a higher CO concentration and the utilization of metallic bipolar plates can reduce the cost of the fuel cell stack. However, the operating life of metallic bipolar plates is a critical issue because of the corrosion phenomenon. As a result, in this work, we apply different coating layers to the metal surface and investigate the protection performance of these coatings. The tested bipolar plates include uncoated SS304 bipolar plates, titanium nitride (TiN) coated SS304 bipolar plates, and chromium nitride (CrN) coated SS304 bipolar plates. The results show that the TiN coated SS304 bipolar plate has the lowest contact resistance and through-plane resistance and has the best cell performance and operating life among all tested bipolar plates. The long-term in-situ fuel cell tests show that the HT-PEMFC with TiN coated SS304 bipolar plates has the lowest performance decay rate; the CrN coated SS304 bipolar plate is second lowest, and the uncoated SS304 bipolar plate has the worst performance decay rate. The performance decay rates with TiN coated SS304, CrN coated SS304, and uncoated SS304 bipolar plates are 5.324×10⁻³ % h⁻¹, 4.513×10⁻² % h⁻¹, and 7.870×10⁻² % h⁻¹, respectively. In addition, the EIS results indicate that the uncoated SS304 bipolar plate has the highest growth rate of ohmic resistance, whereas the ohmic resistance with the TiN coated SS304 bipolar plates increases only slightly with time. The growth rates of ohmic resistance with TiN coated SS304, CrN coated SS304, and uncoated SS304 bipolar plates are 2.85×10⁻³ h⁻¹, 3.56×10⁻³ h⁻¹, and 4.33×10⁻³ h⁻¹, respectively. On the other hand, the charge transfer resistances with these three bipolar plates all increase with time, but the growth rates are similar. In addition, the effective catalyst surface areas with all bipolar plates do not change significantly with time. Thus, it is inferred that the major reason for the performance degradation is the ohmic resistance rising with time, which is associated with the corrosion and oxidation phenomena on the surface of the stainless steel bipolar plates.Keywords: coating layer, high-temperature proton exchange membrane fuel cell, metallic bipolar plate, performance degradation
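As an illustration of how a decay rate expressed in % h⁻¹, like the figures quoted above, can be extracted from a long-term durability test, the sketch below fits a line to a synthetic voltage trace and normalizes the slope by the initial value. The trace and the linear-fit choice are assumptions for illustration, not the authors' procedure.

```python
# Back-of-the-envelope sketch: estimating a performance decay rate in % per hour
# from a long-term test trace. The trace is synthetic, built to decay at ~5e-3 %/h.
import numpy as np

hours = np.arange(0, 2000, 10, dtype=float)
true_rate = 5.324e-5                       # fractional loss per hour (~5.3e-3 %/h)
noise = np.random.default_rng(1).normal(0, 1e-3, hours.size)
voltage = 0.65 * (1.0 - true_rate * hours) + noise   # cell voltage, V (synthetic)

slope, intercept = np.polyfit(hours, voltage, 1)     # linear fit: V per hour
decay_rate_pct_per_h = -slope / intercept * 100.0    # normalize by initial value
print(f"fitted decay rate: {decay_rate_pct_per_h:.3e} % h^-1")
```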
Procedia PDF Downloads 283412 A Systematic Review of Forest School for Early Childhood Education in China: Lessons Learned from European Studies from a Perspective of Ecological System
Authors: Xiaoying Zhang
Abstract:
Forest school – an outdoor educational experience undertaken in an outdoor environment with trees – has recently become an emerging field of early childhood education. In China, the benefits of natural outdoor education for children's and young people's wellness have raised attention. Although different types of outdoor-based activities have been introduced in some Chinese preschools, few studies and little practice have engaged with the notion of forest school. To comprehend the impact of forest school on children and young people, this study aims to systematically review articles on the topic of forest school in preschool education from an ecological perspective, i.e., from the individual level (e.g., behavior and mental health) to the microsystem level (e.g., the relationship between teachers and children) to the ecosystem level. Following the PRISMA framework and using the keywords "Forest School" and "Early Childhood Education" in the Web of Science database, a total of 33 articles were identified. Thirteen studies whose participants were not preschool children, five studies that were not on the forest school theme, and two literature review articles were excluded from further analysis. Finally, 13 articles were eligible for thematic analysis. According to Bronfenbrenner's ecological systems theory, there are several findings: on the individual level, current forest school studies are concerned with children's behavioral experience in forest school, how these experiences may relate to their achievement or develop children's wellbeing/wellness, and how this type of learning experience may enhance children's self-awareness of risk and safety issues. On the microsystem/mesosystem level, this review indicated that pedagogical development for forest school, risk perception by teachers and parents, social development between peers, and adults' role in the participation of forest school were the concerns explored and discussed most frequently. On the macrosystem level, the conceptualization of forest school is the key theme; different forms of presentation in various countries with diverse cultures could provide various models of forest school education. However, no study investigated forest school on the ecosystem level. As for the potential benefits of physical health and mental wellness that result from forest school, this informs our reflection on the system of preschool education for Chinese children from an ecological perspective. For instance, most Chinese kindergartens have ignored the significance of natural outdoor activities for children. Preschool education in China is strongly oriented toward the primary school system, which means preschool children are expected to be trained like primary school students in different subjects, such as math. Hardly any kindergartens provide opportunities for children and young people to take risks in a natural environment as forest school does. However, merely copying the forest school model to the Chinese preschool education system would be less effective. This review of concerns at different levels can inform the localization of the forest school idea and its adaptation to the Chinese political, educational, and cultural background. More detailed results and profound discussions will be presented in the full paper.Keywords: early childhood education, ecological system, education development prospects in China, forest school
Procedia PDF Downloads 154411 Sustainable Transition of Universal Design for Learning-Based Teachers’ Latent Profiles from Contact to Distance Education
Authors: Alvyra Galkienė, Ona Monkevičienė
Abstract:
The full participation of all pupils in the overall educational process is defined by the concept of inclusive education, which is gradually evolving in education policy and practice. It includes the full participation of all pupils in a shared learning experience and educational practices that address barriers to learning. Inclusive education applying the principles of Universal Design for Learning (UDL), which include promoting students' involvement in learning processes, guaranteeing a deep understanding of the analysed phenomena, initiating self-directed learning, and using e-tools to create a barrier-free environment, is a prerequisite for the personal success of each pupil. However, the sustainability of quality education is affected by the transformation of education systems. This was particularly evident during the forced transition from contact to distance education in the COVID-19 pandemic. Research Problem: The transformation of the educational environment from a real to a virtual one and the loss of traditional forms of educational support highlighted the need for new research revealing the individual profiles of teachers using UDL-based learning and the pathways for the sustainable transfer of successful practices to non-conventional learning environments. Research Methods: In order to identify individual latent teacher profiles that encompass the essential components of UDL-based inclusive teaching and direct leadership of students' learning, the quantitative analysis software Mplus was used for latent profile analysis (LPA). In order to reveal proven, i.e., sustainable, pathways for the transit of the components of UDL-based inclusive learning to distance learning, latent profile transit analysis (LPTA) via Mplus was used. An online self-reported questionnaire was used for data collection. It consisted of blocks of questions designed to reveal the experiences of subject teachers in contact and distance learning settings. A total of 1,432 Lithuanian, Latvian, and Estonian subject teachers took part in the survey. Research Results: The LPA revealed eight latent teacher profiles with different characteristics of UDL-based inclusive education or traditional teaching in contact teaching conditions. Only 4.1% of the subject teachers had a profile characterised by a sustained UDL approach to teaching: promoting pupils' self-directed learning; empowering pupils' engagement, understanding, independent action, and expression; promoting pupils' e-inclusion; and reducing the teacher's direct supervision of the students. Other teacher profiles were characterised by limited UDL-based inclusive education, due either to the lack of one or more of its components or to the predominance of direct teacher guidance. The LPTA allowed us to highlight the following transit paths of teacher profiles in the extreme conditions of the transition from contact to distance education: teachers staying in the same profile of UDL-based inclusive education (sustainable transit), teachers jumping to other profiles (unsustainable transit in the case of barriers), and teachers from other profiles moving to this profile (ongoing transit, taking advantage of the new possibilities in the teaching process).Keywords: distance education, latent teacher profiles, sustainable transit, UDL
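As a rough, open-source analogue of the latent profile analysis described above (the study itself used Mplus), the sketch below fits Gaussian mixture models to simulated teacher indicator scores and selects the number of profiles by BIC. The indicators, group structure, and sample sizes are invented, not the survey data.

```python
# Illustrative LPA-style analysis via Gaussian mixtures (not the study's Mplus run).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Hypothetical indicators per teacher: engagement, understanding, self-directed
# learning, e-inclusion, direct teacher guidance (columns), for 300 teachers.
group_a = rng.normal([4.5, 4.4, 4.2, 4.0, 2.0], 0.4, size=(40, 5))   # UDL-like profile
group_b = rng.normal([3.0, 3.2, 2.5, 2.8, 4.3], 0.5, size=(260, 5))  # guidance-heavy
scores = np.vstack([group_a, group_b])

bics = {}
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          n_init=5, random_state=0).fit(scores)
    bics[k] = gmm.bic(scores)

best_k = min(bics, key=bics.get)
print("BIC by number of profiles:", {k: round(v, 1) for k, v in bics.items()})
print("selected number of latent profiles:", best_k)
```

A transition (transit) analysis would then cross-tabulate each teacher's most probable profile in the contact-teaching data against the profile estimated from the distance-teaching data.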
Procedia PDF Downloads 102410 Management of Urine Recovery at the Building Level
Authors: Joao Almeida, Ana Azevedo, Myriam Kanoun-Boule, Maria Ines Santos, Antonio Tadeu
Abstract:
The effects of the increasing expansion of cities and climate change have encouraged European countries and regions to adopt nature-based solutions with the ability to mitigate environmental issues and improve life in cities. Among these strategies, green roofs and urban gardens have been considered ingenious solutions, since they have the potential to improve air quality, prevent floods, reduce the heat island effect, and restore biodiversity in cities. However, additional consumption of fresh water and mineral nutrients is necessary to sustain larger green urban areas. This communication discusses the main technical features of a new system to manage urine recovery at the building level and its application in green roofs. The depletion of critical nutrients like phosphorus constitutes an emergency, and their elimination through urine is one of the principal causes of their loss. Thus, urine recovery in buildings may offer numerous advantages, providing a valuable fertilizer abundantly available in cities and reducing the load on wastewater treatment plants. Although several urine-diverting toilets have been developed for this purpose and some experiments using urine directly in agriculture have already been carried out in Europe, several challenges have emerged with this practice concerning the collection, sanitization, storage, and application of urine in buildings. To the best of our knowledge, current buildings are not designed to receive these systems, and integrated solutions with the ability to self-manage the whole process of urine recovery, including the separation, maturation, and storage phases, are not known. Additionally, although human urine may be considered a relatively safe fertilizer from a hygiene point of view, the risk of disease transmission needs to be carefully analysed. A reduction in microorganisms can be achieved by storing the urine in closed tanks. However, several factors may affect this process, which may result in a higher survival rate for some pathogens. In this work, urine effluent was collected under real conditions, stored in closed containers, and kept in climatic chambers under variable conditions simulating cold, temperate, and tropical climates. These samples were subjected to an initial physicochemical and microbiological control, which was repeated over time. The results obtained so far suggest that maturation conditions were reached at all three temperatures and that a storage period of less than three months is required to achieve a strong depletion of microorganisms. The authors are grateful for the Project WashOne (POCI-01-0247-FEDER-017461) funded by the Operational Program for Competitiveness and Internationalization (POCI) of Portugal 2020, with the support of the European Regional Development Fund (FEDER).Keywords: sustainable green roofs and urban gardens, urban nutrient cycle, urine-based fertilizers, urine recovery in buildings
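Purely as an illustration of how storage time can be related to pathogen die-off in closed tanks, the sketch below assumes first-order (log-linear) inactivation kinetics with hypothetical decimal-reduction times at the three climate conditions mentioned above; none of these values come from the study's measurements.

```python
# Illustrative calculation only: storage time for a target log-reduction under
# first-order die-off. D-values (days per log10 reduction) are hypothetical.

def storage_time_days(target_log10, d_value_days):
    """Days of closed-tank storage needed for a given log10 reduction."""
    return target_log10 * d_value_days

for label, d_value in [("cold (~5 C)", 35.0),
                       ("temperate (~20 C)", 15.0),
                       ("tropical (~30 C)", 7.0)]:
    t = storage_time_days(target_log10=3.0, d_value_days=d_value)
    print(f"{label:18s}: ~{t:.0f} days for a 3-log reduction ({t / 30.4:.1f} months)")
```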
Procedia PDF Downloads 166409 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography
Authors: Devansh Desai, Rahul Nigam
Abstract:
Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of its maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with 'M' and 'W'. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally lower than that of ET, which depends on both surface and atmospheric conditions, while the former depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (= ET/ET0) have been estimated over two contrasting western India agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP weather forecasts. Farm-scale ET estimates at 65 m spatial resolution were found to show a low RMSE of 16.6% to 17.5% with R2 > 0.8 over 18 datasets, as compared to reported errors (25-30%) from coarser-scale ET at 1 to 8 km spatial resolution when compared to in situ measurements from eddy covariance systems. The MAI was found to show lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms. The study showed the potential need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI, and hence consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG space-borne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise and fundamental variables of the surface energy balance such as LST (Land Surface Temperature), surface emissivity, albedo, and NDVI. A synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments, and downstream applications to maximize the potential benefits.Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration
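A minimal sketch of the moisture adequacy index computation follows, assuming hypothetical per-pixel ET values and a uniform ET0; the classification thresholds (<0.25 and >0.5) are those mentioned in the abstract, while the array values are invented.

```python
# Minimal sketch: MAI = ET / ET0 per pixel, classified with the thresholds above.
# The arrays stand in for 65 m ET from the SEB model and gridded ET0; values invented.
import numpy as np

et = np.array([[1.2, 2.8, 3.5],
               [0.9, 2.1, 4.0]])          # mm/day, hypothetical farm pixels
et0 = np.full_like(et, 5.0)               # mm/day, reference ET (assumed uniform)

mai = np.clip(et / et0, 0.0, 1.0)
category = np.where(mai < 0.25, "stressed",
            np.where(mai > 0.5, "adequate", "intermediate"))
print("MAI:\n", np.round(mai, 2))
print("class:\n", category)
```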
Procedia PDF Downloads 73408 Importance of Geologists at Municipalities. Colombian Case
Authors: Clemencia Gomez
Abstract:
Geology is currently absent from Colombia's education system, leading to a lack of geological awareness that hinders essential scientific training about Earth and its spatial and temporal dimensions. Understanding geological concepts is crucial for tackling challenges like climate change, sustainable resource management, geological risk mitigation, and groundwater management. Citizens have the right to receive a comprehensive scientific education that enhances their critical thinking regarding social, environmental, and economic issues. Geological sciences are vital in this context, as they enable the sustainable use of the planet's resources and effective management of human impacts. Additionally, geoethics should be integral to every citizen's education, highlighting the necessity of responsibly utilizing natural resources found in the Earth's surface and subsurface, which are fundamental to many everyday products. The Colombian geological associations aim to address these gaps by advocating for the appointment of geologists in municipalities. These professionals would assist in reviewing technical aspects of urban planning, identifying geological risks, pinpointing water supply opportunities, supporting sustainable mineral-energy projects, and promoting geological education in schools. The role of a professional in Earth sciences is crucial for municipalities for several reasons. Natural Resource Management: Earth scientists help in managing and conserving natural resources such as water, minerals, and soil; their expertise ensures sustainable use and helps prevent depletion. Environmental Protection: they assess environmental impacts and advise on policies to protect ecosystems and biodiversity, which is vital for maintaining the health of local environments. Disaster Preparedness and Response: professionals in this field analyze geological hazards like earthquakes, floods, and landslides and contribute to developing early warning systems and emergency response plans, which can save lives and property. Climate Change Mitigation: Earth scientists study climate patterns and contribute to strategies for mitigating climate change impacts, including advising on land use planning and developing resilience strategies for communities. Urban Planning and Development: their expertise is essential in urban planning, ensuring that infrastructure development considers geological and environmental factors, which helps prevent construction in hazardous areas and promotes sustainable development. Public Education and Awareness: they play a vital role in educating the public about Earth-related issues, fostering greater community engagement in environmental conservation and disaster preparedness. In summary, professionals in Earth sciences contribute significantly to the sustainability, safety, and well-being of municipalities and their residents.Keywords: social geology, safety, sustainability, municipalities
Procedia PDF Downloads 14407 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, the values of accidental risk are becoming relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by the lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure because of its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, which could be capable of dealing efficiently with the complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it could be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is focused on the objective of improving the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters, employing polynomial theory.Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
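To illustrate the GMDH step named above, the sketch below builds a single selection layer on synthetic data: a quadratic "partial description" is fitted for every pair of input variables on a training split and ranked by error on a held-out split, which is the external criterion that would decide what feeds the next layer. It is illustrative only, with invented inputs, and is not the authors' trained risk model.

```python
# Minimal GMDH-style selection layer on synthetic near-miss-like data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))                       # hypothetical risk factors
y = 1.5 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)  # risk score

train, select = slice(0, 120), slice(120, 200)      # training / selection splits

def design(x1, x2):
    """Quadratic Ivakhnenko polynomial terms for one pair of inputs."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x1, x2 * x2, x1 * x2])

candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    A_tr = design(X[train, i], X[train, j])
    coef, *_ = np.linalg.lstsq(A_tr, y[train], rcond=None)   # fit on training split
    pred = design(X[select, i], X[select, j]) @ coef         # score on selection split
    rmse = float(np.sqrt(np.mean((pred - y[select]) ** 2)))
    candidates.append(((i, j), rmse))

candidates.sort(key=lambda c: c[1])
print("pairs ranked by external (selection-set) RMSE:")
for pair, rmse in candidates:
    print(f"  inputs {pair}: RMSE = {rmse:.3f}")
```

In a full GMDH network, the best-performing partial descriptions become inputs to the next layer, and layers are added until the external criterion stops improving.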
Procedia PDF Downloads 293