Search results for: biomechanics and clinical applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9926

26 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for ‘hydrogen homes’ (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). Compared to other hydrogen energy technologies, such as road transport applications, few studies to date have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type, and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from across five distinct groups: (1) fuel poor householders, (2) technology engaged householders, (3) environmentally engaged householders, (4) technology and environmentally engaged householders, and (5) a baseline group (n=~700), which excludes members of the smaller targeted groups (each n=~350). This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and demographic groups impacted by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the fit of the proposed hydrogen acceptance model to the data is tested through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations for community benefits, appliance performance, and potential cost savings.
Based on these preliminary findings, policymakers should act with urgency to raise hydrogen's profile in the public consciousness, in alignment with energy security, fuel poverty, and net-zero agendas.
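As a companion to the methods described above, the following is a minimal, hypothetical sketch of how a second-order acceptance model could be specified and fitted with open-source tooling. The semopy package, the file name survey_items.csv, and all indicator names (att1 … beh3) are illustrative assumptions, not the study's actual instrument; the study itself used a second-order multigroup CFA in Mplus with a PLS-based fit assessment.

```python
# Hypothetical CFA sketch: five acceptance dimensions loading on a second-order
# "acceptance" factor. Item names and the data file are invented placeholders.
import pandas as pd
import semopy

data = pd.read_csv("survey_items.csv")  # assumed: one column per Likert item

model_desc = """
attitudinal =~ att1 + att2 + att3
socio_political =~ sp1 + sp2 + sp3
community =~ com1 + com2 + com3
market =~ mkt1 + mkt2 + mkt3
behavioral =~ beh1 + beh2 + beh3
acceptance =~ attitudinal + socio_political + community + market + behavioral
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # factor loadings and their significance
print(semopy.calc_stats(model))  # fit indices such as CFI, TLI, RMSEA
```

Group differences of the kind discussed above could then be probed by fitting the same description separately to each respondent group and comparing the resulting estimates and fit statistics.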

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114
25 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas

Authors: Julien Caudeville, Muriel Ismert

Abstract:

Analyzing the relationship between environment and health has become a major concern for public health, as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and at the fine resolution required for exposure outcomes, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase the representativeness of spatial exposure assessment approaches, composite risk indicators can be built using additional available databases and theoretical frameworks for combining risk factors. To achieve those objectives, combining data processing and transfer modeling with a spatial approach is a fundamental prerequisite, which implies the need to first overcome different scientific limitations: to define interest variables and indicators that could be built to associate and describe the global source-effect chain; to link and process data from different sources and different spatial supports; and to develop adapted methods in order to improve spatial data representativeness and resolution. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combined chemical substances (in soil, air, and water) and noise risk factors. Tools were developed using modeling, spatial analysis, and geostatistical methods to build and discretize interest variables from different supports and resolutions onto a 1 km2 regular grid within the Lorraine region. For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. We used the distance from polluted soil sites to build a proxy for contaminated sites. The air indicator combined modeled concentrations and estimated emissions to take into account 30 pollutants in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a map of drinking water distribution units. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was performed using different methodologies in order to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions aimed at safeguarding citizen health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation, and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by French Regional Health Observatories.
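To make the weighting and aggregation discussion concrete, here is a minimal sketch of how a composite score could be built on a regular grid from already-spatialized factor layers. The layer names, weights, synthetic values, and the min-max normalization are illustrative assumptions, not the PLAINE implementation.

```python
# Hypothetical aggregation of risk factor layers into a composite indicator on a
# regular grid (one value per 1 km2 cell). Layer names and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000                              # e.g. cells of a 100 x 100 grid
layers = {                                    # already spatialized factor scores
    "soil":  rng.gamma(2.0, 1.0, n_cells),
    "air":   rng.gamma(2.0, 1.5, n_cells),
    "water": rng.gamma(1.5, 1.0, n_cells),
    "noise": rng.gamma(2.5, 0.8, n_cells),
}
weights = {"soil": 0.3, "air": 0.3, "water": 0.2, "noise": 0.2}

def minmax(x):
    """Rescale a layer to [0, 1] so that heterogeneous units become comparable."""
    return (x - x.min()) / (x.max() - x.min())

composite = sum(w * minmax(layers[k]) for k, w in weights.items())

# Flag potential hotspot cells, e.g. the top 5% of the composite distribution.
hotspots = composite >= np.quantile(composite, 0.95)
print(f"{hotspots.sum()} candidate hotspot cells out of {n_cells}")
```

Swapping the weighted sum for another aggregation rule (geometric mean, maximum operator) changes which cells are flagged, which is exactly why the abstract stresses comparing weighting and aggregation procedures before using the maps for decisions.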

Keywords: health risk, environment, composite indicator, hotspot areas

Procedia PDF Downloads 247
24 Recrystallization Behavior and Microstructural Evolution of Nickel Base Superalloy AD730 Billet during Hot Forging at Subsolvus Temperatures

Authors: Marcos Perez, Christian Dumont, Olivier Nodin, Sebastien Nouveau

Abstract:

Nickel superalloys are used to manufacture high-temperature rotary engine parts such as high-pressure disks in gas turbine engines. High strength at high operating temperatures is required due to the levels of stress and heat the disk must withstand. Therefore, parts must be made from materials that can maintain mechanical strength at high temperatures whilst remaining comparatively low in cost. A manufacturing process referred to as the triple melt process has made the production of cast and wrought (C&W) nickel superalloys possible. This means that the balance of cost and performance at high temperature may be optimized. AD730TM is a newly developed Ni-based superalloy for turbine disk applications, with reported superior service properties around 700°C when compared to Inconel 718 and several other alloys. The cast ingot is converted into billet during either a cogging process or open-die forging. The semi-finished billet is then further processed into its final geometry by forging, heat treating, and machining. Conventional ingot-to-billet conversion is an expensive and complex operation, requiring a significant number of steps to break up the coarse as-cast structure and interdendritic regions. Due to the size of conventional ingots, it is difficult to achieve a uniformly high level of strain for recrystallization, resulting in non-recrystallized regions that retain large unrecrystallized grains. Non-uniform grain distributions will also affect the ultrasonic inspectability response, which is used to find defects in the final component. The main aim is to analyze the recrystallization behavior and microstructural evolution of AD730 at subsolvus temperatures from a semi-finished product (billet) under conditions representative of both cogging and hot forging operations. Special attention was paid to the presence of large unrecrystallized grains. Double truncated cones (DTCs) were hot forged at subsolvus temperatures in a hydraulic press, followed by air cooling. SEM and EBSD analyses were conducted on the as-received (billet) and as-forged conditions. The AD730 billet alloy presents a complex microstructure characterized by a mixture of several constituents. Large unrecrystallized grains present a substructure characterized by large misorientation gradients, with the formation of medium- to high-angle boundaries in their interior, especially close to the grain boundaries, denoting inhomogeneous strain distribution. A fine distribution of intragranular precipitates was found in their interior, playing a key role in strain distribution and the subsequent recrystallization behaviour during hot forging. A continuous dynamic recrystallization (CDRX) mechanism was found to be operating in the large unrecrystallized grains, promoting the formation of intragranular DRX grains and the gradual recrystallization of these grains. Evidence that a hetero-epitaxial recrystallization mechanism operates in the AD730 billet material was also found, in the form of coherent γ-shells around primary γ’ precipitates. However, this mechanism made no significant contribution to the overall recrystallization during hot forging. By contrast, strain has the strongest effect on the microstructural evolution of AD730, increasing the recrystallization fraction and refining the structure. Regions with a low level of deformation (ε ≤ 0.6) translated into large fractions of unrecrystallized structures (strain accumulation). The presence of undissolved secondary γ’ precipitates (pinning effect) prior to hot forging operations could explain these results.

Keywords: AD730 alloy, continuous dynamic recrystallization, hot forging, γ’ precipitates

Procedia PDF Downloads 199
23 Experimental Study on Granulated Steel Slag as an Alternative to River Sand

Authors: K. Raghu, M. N. Vathhsala, Naveen Aradya, Sharth

Abstract:

River sand is the most preferred fine aggregate for mortar and concrete. River sand is a product of the natural weathering of rocks over a period of millions of years and is mined from river beds. Sand mining has disastrous environmental consequences. The excessive mining of river beds is creating an ecological imbalance. This has led to restrictions on sand mining imposed by the Ministry of Environment. Driven by the acute need for sand, stone dust or manufactured sand prepared from the crushing and screening of coarse aggregate has been used as sand in the recent past. However, manufactured sand is also a natural material and has quarrying and quality issues. To reduce the burden on the environment, alternative materials to be used as fine aggregates are being extensively investigated all over the world. Considering the quantum of requirements, quality, and properties, there has been a global consensus on one material: granulated slag. Granulated slag has been proven to be a suitable material for replacing natural sand / crushed fine aggregates. In developed countries, the use of granulated slag as fine aggregate to replace natural sand is well established and is in regular practice. In the present paper, granulated slag has been investigated for use in mortar. Slags are the main by-products generated during iron and steel production in the steel industry. Over the past decades, steel production has increased and, consequently, higher volumes of by-products and residues have been generated, which has driven the reuse of these materials in an increasingly efficient way. In recent years, new technologies have been developed to improve the recovery rates of slags. Increasing slag recovery and use in different fields of application, such as cement making, construction, and fertilizers, helps preserve natural resources. In addition to environmental protection, these practices produce economic benefits by providing sustainable solutions that can allow the steel industry to achieve its ambitious target of “zero waste” in the coming years. Slags are generated at two different stages of steel production, iron making and steel making, known as BF (blast furnace) slag and steel slag, respectively. Slagging agents or fluxes, such as limestone, dolomite, and quartzite, are added into BF or steel-making furnaces in order to remove impurities from ore, scrap, and other ferrous charges during smelting. Slag formation is the result of a complex series of physical and chemical reactions between the non-metallic charge (limestone, dolomite, fluxes), the energy sources (coal, coke, oxygen, etc.), and refractory materials. Because of the high temperatures (about 1500 °C) during their generation, slags do not contain any organic substances. Because slags are lighter than the liquid metal, they float and are easily removed. The slags protect the metal bath from the atmosphere and help maintain its temperature by forming a liquid cover. These slags are in a liquid state and are either solidified in air after dumping in a pit or granulated by impinging water systems. Generally, BF slags are granulated and used in cement making due to their high cementitious properties, while steel slags are mostly dumped due to unfavourable physico-chemical conditions. The increasing dumping of steel slag not only occupies a great deal of land but also wastes resources and can potentially have an impact on the environment through water pollution. Since BF slag contains little Fe, it can be used directly. BF slag has found wide application outside the iron and steel making process, such as in cement production, road construction, civil engineering works, fertilizer production, landfill daily cover, and soil reclamation.

Keywords: steel slag, river sand, granulated slag, environmental

Procedia PDF Downloads 244
22 Successful Optimization of a Shallow Marginal Offshore Field and Its Applications

Authors: Kumar Satyam Das, Murali Raghunathan

Abstract:

This note discusses the feasibility of developing a challenging shallow offshore field in South East Asia and how its learnings can be applied to marginal field development across the world, especially the development of marginal fields in this low-oil-price environment. The field was found to be economically challenging even during periods of high oil prices, and the project was put on hold. Shell started a development study with the aim of significantly reducing cost through competitive scoping and reviving stranded projects. The proposed strategy to achieve this involved improving per-platform recovery and reducing CAPEX. Methodology: Based on benchmarking tools such as Woodmac for similar projects in the region and on economic affordability, a challenging target of a 50% reduction in unit development cost (UDC) was set for the project. The technical scope was kept to the minimum: a wellhead platform with the minimum functionality needed to ensure production. The evaluation of key project decisions, such as well location and count, well design, artificial lift method, and wellhead platform type under different development concepts, was carried out through an integrated multi-discipline approach. The key elements influencing per-platform recovery were wellhead platform (WHP) location, well count, well reach, and well productivity. Major Findings: The shallow reservoir posed challenges in well design (dog-leg severity, casing size, and achievable step-out) and in the choice of artificial lift and sand-control method. An integrated approach amongst the relevant disciplines, with a challenging mindset, made it possible to achieve an optimized set of development decisions. This led to a significant improvement in per-platform recovery. It was concluded that platform recovery largely depended on the reach of the well. The choice of a slim well design enabled the design of high-inclination, higher-productivity wells. However, there is a trade-off between high-inclination gas lift (GL) wells and low-inclination wells in terms of long-term value, operational complexity, well reach, recovery, and uptime. Well design elements such as casing size, well completion, artificial lift, and sand control were added successively over the minimum technical scope design, leading to a value and risk staircase. Logical combinations of options (slim well, GL) were competitively screened to achieve a 25% reduction in well cost. Facility cost reduction was achieved by sourcing a standardized low-cost facilities platform in combination with portfolio execution to maximize execution efficiency; this approach is expected to reduce facilities cost by ~23% with respect to the development costs. Further cost reductions were achieved by maximizing the use of existing facilities nearby, changing reliance on existing water injection wells, and utilizing an existing water injector (W.I.) platform for new injectors. Conclusion: The study provides a spectrum of technically feasible options. It also made clear that different drivers lead to different development concepts, and the cost-value trade-off staircase made this very visible. Scoping the project in a competitive way has proven valuable for decision makers by creating a transparent view of value and the associated risks, uncertainties, and trade-offs for difficult choices: some elements of the project can be competitive, whilst other parts will struggle, even though they contribute significant volumes.
Reducing UDC through proper scoping and benchmarking of current projects serves as a lesson for the development of marginal fields across the world, especially in this low-oil-price scenario. On average, this way of developing a field has reduced costs by 40% for the Shell projects considered.

Keywords: benchmarking, full field development, CAPEX, feasibility

Procedia PDF Downloads 158
21 Northern Nigeria Vaccine Direct Delivery System

Authors: Evelyn Castle, Adam Thompson

Abstract:

Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from a diffused pull model to direct delivery push. It addressed issues around stockouts and reduced the time spent by health facility staff collecting vaccines and reporting on vaccine usage. The health care board sought the help of a third-party logistics provider (3PL) for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA’s Health Delivery Systems group formed a 3PL to serve 326 of these new facilities in partnership with the State. We focused on designing and implementing a technology system throughout. Basic methodologies: GIS Mapping: Planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: Vaccine direct deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), manager and driver control panels for customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. MOVE (Last Mile Logistics Management System): MOVE has made vaccine supply information management timely, accurate, and actionable. It provides stock management workflow support, alert management for cold chain exceptions and stock-outs, and on-device analytics for health and supply chain staff. The software was built to be offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management. Findings: Stock-outs were reduced from 90% to 33%; current health systems were redesigned and vaccine supply is now managed for 68% of Kano’s wards; near real-time reporting and data availability allow stock to be tracked; the paperwork burden on health staff has been dramatically reduced; medicine is available when the community needs it; consistent vaccination dates are kept for children under one to prevent polio, yellow fever, and tetanus; higher immunization rates mean lower infection rates; hundreds of millions of Naira worth of vaccines have been successfully transported; fortnightly service reaches 326 facilities in 326 wards across 30 Local Government Areas; 6,031 cumulative deliveries have been made; over 3.44 million doses have been transported; the minimum travel distance covered in a round of delivery is 2,000 km and the maximum is 6,297 km; 153,409 km have been travelled by 6 drivers; 500 facilities in 326 wards are covered; data have been captured and synchronized for the first time; and data-driven decision making is now possible. Conclusion: eHA’s Vaccine Direct Delivery has met the challenges in Kano and Bauchi States and provided a reliable vaccine delivery service that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and will be rolled out nationally across Nigeria.
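The route-planning step can be illustrated with a toy nearest-neighbour heuristic over great-circle distances; the coordinates, the heuristic, and the helper function below are assumptions for illustration only, not eHA's actual routing tool, which works on mapped road networks rather than straight-line distances.

```python
# Toy illustration of delivery route planning: order facilities by a greedy
# nearest-neighbour heuristic using great-circle (haversine) distances.
# Coordinates are invented; a production tool would use road-network distances.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

cold_store = (12.00, 8.52)                       # hypothetical Kano-area depot
facilities = [(11.96, 8.55), (12.05, 8.60), (11.90, 8.48), (12.10, 8.45)]

route, remaining, current = [], facilities[:], cold_store
while remaining:
    nxt = min(remaining, key=lambda f: haversine_km(current, f))
    route.append(nxt)
    remaining.remove(nxt)
    current = nxt

total_km = sum(haversine_km(a, b) for a, b in zip([cold_store] + route, route))
print("visit order:", route, "| straight-line length ≈", round(total_km, 1), "km")
```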

Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines

Procedia PDF Downloads 373
20 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop

Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen

Abstract:

Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
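For readers unfamiliar with the algorithm being ported to CUDA, the core Lenia update can be sketched in a few lines of NumPy/SciPy. The kernel radius, growth parameters, grid size, and zero-padded boundaries below are illustrative assumptions; the actual implementations discussed above differ in kernel shape, boundary handling, and precision.

```python
# Minimal CPU sketch of one Lenia time step: convolve the state with a smooth
# ring kernel, map the result through a Gaussian growth function, integrate in
# time and clip to [0, 1]. Parameters (R, mu, sigma, dt) are illustrative, and
# zero-padded boundaries are used for brevity (Lenia typically wraps the grid).
import numpy as np
from scipy.signal import fftconvolve

N, R, mu, sigma, dt = 256, 13, 0.15, 0.015, 0.1

# Smooth ring-shaped kernel, normalized to sum to 1.
y, x = np.ogrid[-R:R + 1, -R:R + 1]
r = np.sqrt(x**2 + y**2) / R
kernel = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r <= 1)
kernel /= kernel.sum()

def growth(u, mu=mu, sigma=sigma):
    """Gaussian 'bump' growth mapping with values in [-1, 1]."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(world):
    u = fftconvolve(world, kernel, mode="same")   # neighbourhood potential
    return np.clip(world + dt * growth(u), 0.0, 1.0)

world = np.random.default_rng(1).random((N, N))
for _ in range(100):
    world = step(world)
```

Replacing the per-step convolution and growth mapping with GPU kernels (or cuFFT for the FFT variant) is where the order-of-magnitude speedups reported above come from.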

Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis

Procedia PDF Downloads 41
19 Bio-Electro Chemical Catalysis: Redox Interactions, Storm and Waste Water Treatment

Authors: Michael Radwan Omary

Abstract:

Context: This scientific innovation demonstrates the effective desalination of surface water and groundwater using an engineered organic catalysis media. The author has developed a technology called “Storm-Water Ions Filtration Treatment” (SWIFT™): cold reactor modules designed to retrofit typical urban street storm drains or catch basins. SWIFT triggers biochemical redox reactions with water-stream-embedded toxic total dissolved solids (TDS) and electrical conductivity (EC). SWIFT™ catalyst media unlock sub-molecular bond energy, break down toxic chemical bonds, and neutralize toxic molecules, bacteria, and pathogens. Research Aim: This research aims to develop and design lower-O&M-cost, zero-brine-discharge, energy-input-free, chemical-free water desalination and disinfection systems. The objective is to provide an effective, resilient, and sustainable solution for urban storm-water and groundwater decontamination and disinfection. Methodology: We focused on the development of organic, non-chemical, no-plug, no-pumping, non-polymer, and non-allergenic approaches for water and wastewater desalination and disinfection. SWIFT modules operate by directing the water stream to flow freely through the electrically charged media cold reactor, generating weak interactions with water-dissolved electrically conductive molecules and resulting in the neutralization of toxic molecules. The system is powered by harvesting the energy embedded in sub-molecular bonds. Findings: The SWIFT™ technology case studies at CSU-CI and the CSU-Fresno Water Institute demonstrated consistently high reduction of all 40 detected wastewater pollutants, including pathogens, to levels below the State of California Department of Water Resources “Drinking Water Maximum Contaminant Levels”. The technology has proved effective in reducing pollutants such as arsenic, beryllium, mercury, selenium, glyphosate, benzene, and E. coli bacteria. The technology has also been successfully applied to the decontamination of dissolved chemicals, water pathogens, organic compounds, and radiological agents. Theoretical Importance: SWIFT technology development, design, engineering, and manufacturing offer a cutting-edge advancement toward a clean-energy bio-catalysis media solution for energy-input-free water and wastewater desalination and disinfection. It represents a significant contribution to institutions and municipalities pursuing sustainable, lower-cost, zero-brine, zero-CO2-discharge clean energy water desalination. Data Collection and Analysis Procedures: The researchers collected data on the performance of the SWIFT™ technology in reducing the levels of various pollutants in water. The data were analyzed by comparing the reduction achieved by the SWIFT™ technology to the Drinking Water Maximum Contaminant Levels set by the State of California. The researchers also conducted live oral presentations to showcase the applications of SWIFT™ technology in storm-water capture and decontamination as well as in providing clean drinking water during emergencies. Conclusion: The SWIFT™ technology has demonstrated its capability to effectively reduce pollutants in water and wastewater to levels below regulatory standards. The technology offers a sustainable solution for groundwater and storm-water treatment. Further development and implementation of the SWIFT™ technology have the potential to treat storm water for reuse as a new source of drinking water and as an ambient source of clean and healthy local water for groundwater recharge.

Keywords: catalysis, bio electro interactions, water desalination, weak-interactions

Procedia PDF Downloads 67
18 Metal-Organic Frameworks-Based Materials for Volatile Organic Compounds Sensing Applications: Strategies to Improve Sensing Performances

Authors: Claudio Clemente, Valentina Gargiulo, Alessio Occhicone, Giovanni Piero Pepe, Giovanni Ausanio, Michela Alfè

Abstract:

Volatile organic compound (VOC) emissions represent a serious risk to human health and the integrity of ecosystems, especially at high concentrations. For this reason, it is very important to continuously monitor environmental quality and to develop fast and reliable portable sensors that allow on-site analysis. Chemiresistors have become promising candidates for VOC sensing owing to their ease of fabrication, the variety of suitable sensitive materials, and the simplicity of their sensing data. A chemoresistive gas sensor is a transducer that makes it possible to measure the concentration of an analyte in the gas phase because the changes in resistance are proportional to the amount of analyte present. The selection of the sensitive material, which interacts with the target analyte, is very important for the sensor performance. The most widely used VOC detection materials are metal oxides (MOx), owing to their rapid recovery, high sensitivity to various gas molecules, and easy fabrication. Their sensing performance can still be improved in terms of operating temperature, selectivity, and detection limit. Metal-organic frameworks (MOFs) have also attracted a lot of attention in the field of gas sensing due to their high porosity, high surface area, tunable morphologies, and structural variety. MOFs are generated by the self-assembly of multidentate organic ligands connecting with adjacent multivalent metal nodes via strong coordination interactions, producing stable and highly ordered crystalline porous materials with well-designed structures. However, most MOFs intrinsically exhibit low electrical conductivity. To improve this property, MOFs can be combined with organic and inorganic materials in a hybrid fashion to produce composite materials or can be transformed into more stable structures. MOFs, indeed, can be employed as precursors of metal oxides with well-designed architectures via the calcination method. MOF-derived MOx partially preserve the original structure, with high surface area and intrinsic open pores that act as trapping centers for gas molecules, and show higher electrical conductivity. Core-shell heterostructures, in which the surface of a metal oxide core is completely coated by a MOF shell, forming a junction at the core-shell heterointerface, can also be synthesized. Nanocomposites in which MOF structures are intercalated with graphene-related materials can also be produced; their conductivity increases thanks to the high electron mobility of the carbon materials. As MOF structures, zinc-based MOFs belonging to the ZIF family were selected in this work. Several Zn-based materials based on and/or derived from MOFs were produced, structurally characterized, and arranged in a chemoresistive architecture, also exploring the potential of different approaches to sensing layer deposition based on PLD (pulsed laser deposition) and, in the case of thermally labile materials, MAPLE (Matrix Assisted Pulsed Laser Evaporation) to enhance adhesion to the support. The sensors were tested in a controlled humidity chamber, allowing the concentration of ethanol, a typical analyte chosen among the VOCs for a first survey, to be varied. The effect of heating the chemiresistor to improve sensing performance was also explored. Future research will focus on exploring new manufacturing processes for MOF-based gas sensors with the aim of improving sensitivity and selectivity and reducing operating temperatures.
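As a numerical illustration of the chemoresistive transduction principle described above, the relative resistance change can be converted into a simple sensor response and calibration slope. All resistance values and concentrations below are invented placeholders, not measurements from this work.

```python
# Illustration of chemiresistive readout: response S = (R_gas - R_air) / R_air,
# and a linear calibration (sensitivity) fitted over several analyte levels.
# All numbers are invented placeholders, not data from the sensors described.
import numpy as np

R_air = 1.20e6                                    # baseline resistance, ohms
conc_ppm = np.array([10, 25, 50, 100, 200])       # ethanol concentrations
R_gas = np.array([1.14e6, 1.05e6, 0.93e6, 0.78e6, 0.60e6])

response = (R_gas - R_air) / R_air                # negative: resistance drops
sensitivity, intercept = np.polyfit(conc_ppm, response, 1)

for c, s in zip(conc_ppm, response):
    print(f"{c:>4} ppm -> S = {s:+.3f}")
print(f"sensitivity ≈ {sensitivity:.2e} per ppm")
```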

Keywords: chemiresistors, gas sensors, graphene related materials, laser deposition, MAPLE, metal-organic frameworks, metal oxides, nanocomposites, sensing performance, transduction mechanism, volatile organic compounds

Procedia PDF Downloads 62
17 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases

Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo

Abstract:

The group of β-lactam antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant ESBL bacteria to prevent their spread and to promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, where PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields, as a product, double-stranded DNA of several lengths containing repetitions of the target DNA sequence. Although positive and negative results from LAMP can be discriminated by colorimetry, fluorescence, and turbidity, there is still much room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorer V5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected. The first one was a zig-zag flat structure, while the second one had a wall-like shape. Given the sequence repetitions in the scaffold, both could be assembled with only six different staples each, ranging from 18 to 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
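A small sketch of the kind of check that can be automated when designing staples against a LAMP-derived scaffold: verify that each candidate staple is the reverse complement of some scaffold segment. The sequences below are short invented examples, not the CTX-M-15 target or the actual staple set.

```python
# Toy check that candidate staple strands hybridize to (i.e. are reverse
# complements of segments of) a single-stranded scaffold. Sequences are invented.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMP)[::-1]

scaffold = "ATGCGTACCGTTAGGCTAACCGGATCGATCGTTAGCCTAGGCATCGTAGC"
staples = ["CCTAACGGTACGCAT", "CGATCGATCCGGTTAGCC", "GCTACGATGCCTAGG"]

for st in staples:
    target = reverse_complement(st)      # the scaffold segment the staple binds
    pos = scaffold.find(target)
    print(f"{st}: {'binds at position ' + str(pos) if pos >= 0 else 'no binding site'}")
```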

Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis

Procedia PDF Downloads 222
16 Sustainable Antimicrobial Biopolymeric Food & Biomedical Film Engineering Using Bioactive AMP-Ag+ Formulations

Authors: Eduardo Lanzagorta Garcia, Chaitra Venkatesh, Romina Pezzoli, Laura Gabriela Rodriguez Barroso, Declan Devine, Margaret E. Brennan Fournet

Abstract:

New antimicrobial interventions are urgently required to combat rising global health and medical infection challenges. Here, an innovative antimicrobial technology is presented, providing price-competitive alternatives to antibiotics and readily integrable with current technological systems. Two cutting-edge antimicrobial materials, antimicrobial peptides (AMPs) and uncompromised, sustained Ag+ action from triangular silver nanoplate (TSNP) reservoirs, are merged for versatile, effective antimicrobial action where current approaches fail. Antimicrobial peptides (AMPs) exist widely in nature and have recently been demonstrated to exhibit a broad spectrum of activity against bacteria, viruses, and fungi. TSNPs are highly discrete, homogeneous, and readily functionalisable Ag+ nanoreservoirs with a proven amenability for operation within a wide range of bio-based settings. In a design for advanced antimicrobial sustainable plastics, antimicrobial TSNPs are formulated for processing within biodegradable biopolymers. The histone H5 AMP was selected for its reported strong antimicrobial action and functionalized with the TSNP (AMP-TSNP) in a similar fashion to previously reported TSNP biofunctionalisation methods. A synergy between the propensity of biopolymers for degradation, Ag+ release, and AMP activity provides a novel mechanism for the sustained antimicrobial action of biopolymeric thin films. Nanoplates are transferred from the aqueous phase to an organic solvent in order to facilitate integration within hydrophobic polymers. Extrusion is used in combination with calendering rolls to create thin polymeric films where the nanoplates are embedded onto the surface. The resultant antibacterial functional films are suitable to be adapted for food packaging and biomedical applications. TSNPs were synthesized by adapting a previously reported seed-mediated approach. TSNP synthesis was scaled up for litre-scale batch production, and the product was subsequently concentrated to 43 ppm using thermally controlled H2O removal. Nanoplates were transferred from the aqueous phase to an organic solvent in order to facilitate integration within hydrophobic polymers. This was accomplished by functionalizing the TSNPs with thiol-terminated polyethylene glycol and using centrifugal force to transfer them to chloroform. Polycaprolactone (PCL) and polylactic acid (PLA) were individually processed through extrusion, and TSNP and AMP-TSNP solutions were sprayed onto the polymer immediately after it exited the die. Calendering rolls were used to disperse and incorporate TSNP and TSNP-AMP onto the surface of the extruded films. Observation of the characteristic blue colour confirms the integrity of the TSNP within the films. Antimicrobial tests were performed by incubating Gram-positive and Gram-negative strains with treated and non-treated films to evaluate whether bacterial growth was reduced due to the presence of the TSNP. The resulting films successfully incorporated TSNP and AMP-TSNP. Reduced bacterial growth was observed for both Gram-positive and Gram-negative strains for both TSNP and AMP-TSNP films compared with untreated films, indicating antimicrobial action. The largest growth reduction was observed for AMP-TSNP-treated films, demonstrating the additional antimicrobial activity due to the presence of the AMPs. The potential of this technology to impede bacterial activity on food industry and medical surfaces will forge new confidence in the battle against antibiotic-resistant bacteria, serving to greatly inhibit infections and facilitate patient recovery.

Keywords: antimicrobial, biodegradable, peptide, polymer, nanoparticle

Procedia PDF Downloads 116
15 Machine Learning Based Digitalization of Validated Traditional Cognitive Tests and Their Integration to Multi-User Digital Support System for Alzheimer’s Patients

Authors: Ramazan Bakir, Gizem Kayar

Abstract:

It is known that Alzheimer's disease and dementia are the two most common types of neurodegenerative disease, and their visibility has been increasing over the last couple of years. As populations age all over the world, researchers expect this rate of increase to grow much higher. However, unfortunately, there is no known pharmacological cure for either, although some treatments help to reduce the rate of cognitive decline. This is why non-pharmacological treatment and tracking methods have received more attention over the last five years. Many researchers, including well-known associations and hospitals, lean towards using non-pharmacological methods to support cognitive function and improve the patient's quality of life. As dementia symptoms related to the mind, learning, memory, speaking, problem-solving, social abilities, and daily activities gradually worsen over the years, many researchers know that cognitive support should start from the very beginning of the symptoms in order to slow down the decline. At this point, the lives of patients and caregivers can be improved with some daily activities and applications. These activities include, but are not limited to, basic word puzzles, daily cleaning activities, and taking notes. These activities and their results should then be observed carefully, which is currently only possible during in-person meetings between the patient or caregiver and the M.D. in hospitals. These meetings can be quite time-consuming, exhausting, and financially ineffective for hospitals, medical doctors, caregivers, and especially for patients. On the other hand, digital support systems are showing positive results for all stakeholders of healthcare systems. This can be observed in countries that have started telemedicine systems. The biggest potential of our system lies in setting up inter-user communication in the best possible way. In our project, we propose machine-learning-based digitalization of validated traditional cognitive tests (e.g., MoCA, Afazi, left-right hemisphere), their analysis for high-quality follow-up, and communication systems for all stakeholders. This platform has high potential not only for patient tracking but also for making all stakeholders feel safe through all stages. As registered hospitals assign corresponding medical doctors to the system, these MDs are able to register their own patients and assign special tasks for each patient. With our integrated machine learning support, MDs are able to track the failure and success rates of each patient and also see general averages among similarly progressed patients. In addition, our platform supports multi-player technology, which helps patients play with their caregivers so that they feel much safer at any point they are uncomfortable. By also gamifying daily household activities, patients will be able to repeat their social tasks, and we will provide non-pharmacological reminiscence therapy (RT – life review therapy). All collected data will be mined by our data scientists and analyzed meaningfully. In addition, we will add gamification modules for caregivers based on Naomi Feil's Validation Therapy. Both behaving positively towards the patient and staying mentally healthy themselves are important for caregivers, and we aim to provide a gamification-based therapy system for them, too.
When this project accomplishes all of the tasks described above, patients will be able to carry out many tasks remotely at home, and MDs will be able to follow up on them very effectively. We propose a complete platform, and the whole project is both time- and cost-effective in supporting all stakeholders.
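A minimal sketch of the follow-up analytics mentioned above, namely per-patient success rates on digitized cognitive tasks and averages across similarly progressed patients. The column names, stages, and records are invented placeholders, not the platform's actual data schema.

```python
# Hypothetical tracking of success/failure rates per patient and per disease
# stage, as an MD-facing summary. All records are invented example data.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "stage":      ["early", "early", "early", "moderate", "moderate",
                   "early", "early", "early"],
    "task":       ["word_puzzle", "left_right", "MoCA_item",
                   "word_puzzle", "MoCA_item",
                   "word_puzzle", "left_right", "MoCA_item"],
    "success":    [1, 0, 1, 0, 0, 1, 1, 0],
})

per_patient = records.groupby("patient_id")["success"].mean()
per_stage = records.groupby("stage")["success"].mean()

print("per-patient success rate:\n", per_patient)
print("average among similarly progressed patients:\n", per_stage)
```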

Keywords: alzheimer’s, dementia, cognitive functionality, cognitive tests, serious games, machine learning, artificial intelligence, digitalization, non-pharmacological, data analysis, telemedicine, e-health, health-tech, gamification

Procedia PDF Downloads 137
14 Introducing Global Navigation Satellite System Capabilities into IoT Field-Sensing Infrastructures for Advanced Precision Agriculture Services

Authors: Savvas Rogotis, Nikolaos Kalatzis, Stergios Dimou-Sakellariou, Nikolaos Marianos

Abstract:

As precision holds the key to the introduction of distinct benefits in agriculture (e.g., energy savings, reduced labor costs, optimal application of inputs, improved products and yields), it steadily becomes evident that new initiatives should focus on rendering Precision Agriculture (PA) more accessible to the average farmer. PA leverages technologies such as the Internet of Things (IoT), earth observation, robotics, and positioning systems (e.g., the Global Navigation Satellite System, GNSS, as well as individual positioning systems like GPS, GLONASS, and Galileo) that enable everything from simple data georeferencing to optimal navigation of agricultural machinery and even more complex tasks like variable rate applications. An identified customer pain point is that, on the one hand, typical triangulation-based positioning systems are not accurate enough (with errors of up to several meters), while on the other hand, high-precision positioning systems reaching centimeter-level accuracy are very costly (up to thousands of euros). Within this paper, a Ground-Based Augmentation System (GBAS) is introduced that can be adapted to any existing IoT field-sensing station infrastructure. The latter should cover a minimum set of requirements; in particular, each station should operate as a fixed, energy-supplied unit with an unobstructed view of the sky. Station augmentation will allow them to function in pairs with GNSS rovers following the differential GNSS base-rover paradigm. This constitutes a key innovation element of the proposed solution, which incorporates differential GNSS capabilities into an IoT field-sensing infrastructure. Integrating this kind of information supports the provision of several additional beneficial PA services such as spatial mapping, route planning, and automatic field navigation of unmanned vehicles (UVs). Right at the heart of the designed system, there is a high-end GNSS toolkit with base-rover variants and Real-Time Kinematic (RTK) capabilities. The GNSS toolkit had to tackle all availability, performance, interfacing, and energy-related challenges faced in real-time, low-power, and reliable in-field operation. Specifically, in terms of performance, preliminary findings exhibit a high rover positioning precision that can even reach less than 10 centimeters. As this precision is propagated to the full dataset collection, it enables tractors, UVs, Android-powered devices, and measuring units to deal with challenging real-world scenarios. The system is validated with the help of Gaiatrons, a mature network of agro-climatic telemetry stations with a presence all over Greece and beyond (more than 60,000 ha of agricultural land covered) that constitutes part of the “gaiasense” (www.gaiasense.gr) smart farming (SF) solution. Gaiatrons constantly monitor atmospheric and soil parameters, thus providing an exact fit to the operational requirements of modern SF infrastructures. Gaiatrons are ultra-low-cost, compact, and energy-autonomous stations with a modular design that enables the integration of advanced GNSS base station capabilities on top of them. A set of demanding pilot demonstrations has been initiated in Stimagka, Greece, an area with a diverse geomorphological landscape where grape cultivation is particularly popular. The pilot demonstrations are in the course of validating the preliminary system findings in the intended environment, tackling all technical challenges, and effectively highlighting the added value offered by the system in action.
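The base-rover idea behind the differential GNSS paradigm can be illustrated with a simplified position-domain correction (classic DGNSS): the base station knows its surveyed coordinates, so the error in its own fix is applied to the rover. Real RTK works on carrier-phase observables and ambiguity resolution, so the sketch below, with invented coordinates, only conveys the guiding principle.

```python
# Simplified position-domain differential correction. The base station's known
# (surveyed) position is compared to its GNSS fix, and the resulting common-mode
# error is applied to the rover fix. Coordinates are invented ECEF values in m.
import numpy as np

base_true      = np.array([4595216.0, 2039453.0, 3912628.0])  # surveyed position
base_measured  = np.array([4595216.8, 2039452.4, 3912629.1])  # base GNSS fix
rover_measured = np.array([4595301.5, 2039410.2, 3912575.9])  # rover GNSS fix

correction = base_true - base_measured       # common-mode error seen at the base
rover_corrected = rover_measured + correction

print("correction vector (m):", correction)
print("corrected rover position (m):", rover_corrected)
print("magnitude of applied shift (m):", np.linalg.norm(correction))
```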

Keywords: GNSS, GBAS, precision agriculture, RTK, smart farming

Procedia PDF Downloads 113
13 Industrial Waste to Energy Technology: Engineering Biowaste as High Potential Anode Electrode for Application in Lithium-Ion Batteries

Authors: Pejman Salimi, Sebastiano Tieuli, Somayeh Taghavi, Michela Signoretto, Remo Proietti Zaccaria

Abstract:

The growing volume of industrial waste generated by large-scale production leads to numerous environmental and economic challenges, such as climate change, soil and water contamination, and human disease. Energy recovery from waste can be applied to produce heat or electricity. This strategy allows for a reduction in the energy produced using coal or other fuels and directly reduces greenhouse gas emissions. Among different industries, leather manufacturing plays a very important role worldwide from the socio-economic point of view. Even though the leather industry uses a by-product from the meat industry as its raw material, it is considered an activity demanding integrated prevention and control of pollution. Along the entire process from raw skins/hides to finished leather, a huge amount of solid and water waste is generated. Solid wastes include fleshings, raw trimmings, shavings, buffing dust, etc. One of the most abundant solid wastes generated throughout leather tanning is shaving waste. Leather shaving is a mechanical process that aims at reducing the tanned skin to a specific thickness before tanning and finishing. This product consists mainly of collagen and tanning agent. At present, most of the world's leather processing is based on chrome tanning. Consequently, large amounts of chromium-containing shaving wastes need to be treated. The major concern about the management of this kind of solid waste is its chrome content, which makes conventional disposal methods, such as landfilling and incineration, impracticable. Therefore, many efforts have been made in recent decades to promote eco-friendly/alternative leather production and more effective waste management. Herein, shaving waste resulting from a metal-free tanning technology is proposed as a low-cost precursor for the preparation of carbon materials as anodes for lithium-ion batteries (LIBs). In line with the philosophy of reduced environmental impact, and in order to prepare fully sustainable and environmentally friendly LIB anodes, deionized water and carboxymethyl cellulose (CMC) have been used as alternatives to the toxic/teratogenic N-methyl-2-pyrrolidone (NMP) and the biologically hazardous polyvinylidene fluoride (PVdF), respectively. Furthermore, moving towards reduced cost, we employed a water solvent and a fluoride-free bio-derived CMC binder (as alternatives to NMP and PVdF, respectively) together with LiFePO₄ (LFP) when a full cell was considered. These actions bring the 2030 goal of green LIBs at 100 $ kW h⁻¹ closer. Besides, the preparation of the water-based electrodes does not need a controlled environment, and due to the higher vapour pressure of water in comparison with NMP, drying of the water-based electrodes is much faster. This has an important consequence, namely reduced energy consumption for electrode preparation. The electrode derived from leather waste demonstrated a discharge capacity of 735 mAh g⁻¹ after 1000 charge and discharge cycles at 0.5 A g⁻¹. This promising performance is ascribed to the synergistic effect of defects, interlayer spacing, heteroatom doping (N, O, and S), high specific surface area, and the hierarchical micro/mesopore structure of the biochar. Interestingly, these features of activated biochars derived from the leather industry open the way for possible applications in other electrochemical energy storage devices (EESDs) as well.
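To unpack the capacity figure quoted above, specific capacity follows from the applied current, discharge time, and active-material mass (Q = I · t / m). The mass and discharge time below are invented for illustration; only the specific current (0.5 A g⁻¹) and the resulting 735 mAh g⁻¹ are taken from the abstract.

```python
# Specific capacity from a galvanostatic discharge: Q = I * t / m.
# Mass and time are assumed example values chosen so the arithmetic reproduces
# the reported 735 mAh/g figure; they are not the study's measurements.
active_mass_g = 0.0012          # ~1.2 mg of active material on the electrode (assumed)
specific_current = 0.5          # A per gram of active material (as in the study)
current_A = specific_current * active_mass_g

discharge_time_h = 1.47         # hypothetical time to reach the cut-off voltage
capacity_mAh_per_g = current_A * discharge_time_h * 1000 / active_mass_g
print(f"specific capacity ≈ {capacity_mAh_per_g:.0f} mAh/g")
```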

Keywords: biowaste, lithium-ion batteries, physical activation, waste management, leather industry

Procedia PDF Downloads 170
12 Adaptable Path to Net Zero Carbon: Feasibility Study of Grid-Connected Rooftop Solar PV Systems with Rooftop Rainwater Harvesting to Decrease Urban Flooding in India

Authors: Rajkumar Ghosh, Ananya Mukhopadhyay

Abstract:

India has seen enormous urbanization in recent years, resulting in increased energy consumption and water demand in its metropolitan regions. Adoption of grid-connected solar rooftop systems and rainwater collection has gained significant popularity in urban areas to address these challenges while also boosting sustainability and environmental consciousness. Grid-connected solar rooftop systems offer a long-term solution to India's growing energy needs. Solar panels are erected on the rooftops of residential and commercial buildings to generate power by utilizing the abundant solar energy available across the country. Solar rooftop systems generate clean, renewable electricity, reducing reliance on fossil fuels and lowering greenhouse gas emissions. This is compatible with India's goal of reducing its carbon footprint. Urban residents and companies can save money on electricity by generating their own and possibly selling excess power back to the grid through net metering arrangements. India offers several financial incentives (a 40% subsidy for system capacities of 1 kW to 3 kW) to stimulate the building of solar rooftop systems, making them an economically viable option for city dwellers. India provides subsidies of up to 70% in special states and territories such as Uttarakhand, Sikkim, Himachal Pradesh, Jammu & Kashmir, and Lakshadweep. Incorporating solar rooftops into urban infrastructure contributes to sustainable urban expansion by alleviating pressure on traditional energy sources, improving air quality, and improving the reliability of the power supply. Rainwater harvesting is another key component of India's sustainable urban development. It comprises collecting and storing rainwater for use in non-potable applications such as irrigation, toilet flushing, and groundwater recharge. Rainwater harvesting helps to conserve water resources by lowering the demand on freshwater sources. This technology is crucial in water-stressed areas to ensure a sustainable water supply. Excessive rainwater runoff in metropolitan areas can lead to urban flooding. Solar PV systems combined with rooftop rainwater harvesting systems absorb and channel excess rainwater, which helps to reduce flooding and waterlogging in smart cities. Rainwater harvesting systems are inexpensive and quick to set up, making them an attractive option for city dwellers and businesses looking to save money on water. Rainwater harvesting systems are now compulsory in several Indian states for specified types of buildings (by by-law, for rooftop areas ≥ 300 sq. m), ensuring widespread adoption. Finally, grid-connected solar rooftop systems and rainwater collection are important to India's long-term urban development. They not only reduce the environmental impact of urbanization but also empower individuals and businesses to control their energy and water requirements. The G20 agenda focuses on green financing, fossil fuel phase-out, and the renewable energy transition, and the G20 Summit in New Delhi reaffirmed India's commitment to battling climate change by doubling renewable energy capacity. To address climate change and mitigate global warming, India intends to attain 280 GW of solar renewable energy by 2030 and net-zero carbon emissions by 2070. With continued government support and increased awareness, these strategies will help India develop a more resilient and sustainable urban future.
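The two sizing rules of thumb behind the feasibility argument above (rooftop rainwater harvesting potential and rooftop PV yield) are simple arithmetic. The roof area, rainfall, runoff coefficient, and specific yield below are assumed example values for illustration, not official figures; only the 300 sq. m by-law threshold and the 1-3 kW subsidy band are taken from the abstract.

```python
# Back-of-envelope sizing for one urban rooftop (all inputs are assumptions).
roof_area_m2 = 300.0            # threshold area cited for mandatory harvesting
annual_rainfall_mm = 1100.0     # assumed local annual rainfall
runoff_coefficient = 0.8        # fraction of rainfall actually captured

# 1 mm of rain on 1 m2 equals 1 litre, so the product is already in litres.
harvest_litres = roof_area_m2 * annual_rainfall_mm * runoff_coefficient

pv_capacity_kw = 3.0                  # upper end of the 1-3 kW subsidised band
specific_yield_kwh_per_kwp = 1500.0   # assumed annual yield for Indian conditions
annual_generation_kwh = pv_capacity_kw * specific_yield_kwh_per_kwp

print(f"rainwater harvesting potential ≈ {harvest_litres / 1000:.0f} m3/year")
print(f"rooftop PV generation ≈ {annual_generation_kwh:.0f} kWh/year")
```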

Keywords: grid-connected solar PV system, rooftop rainwater harvesting, urban flooding, groundwater, net zero carbon emission

Procedia PDF Downloads 90
11 Observations on Cultural Alternative and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)

Authors: Marcela Davalos Lopez

Abstract:

The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Among these cities, Mexico City was inserted into international politics, strengthened its institutions and medical knowledge, applied the parameters of modernity, and built sanitary engineering works. Despite the power that this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. From a comparative and contextual analysis, it will be shown that conclusions derived from modern urban historiography present, from our contemporary observations, fractures. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted international and federal public health regulations in their own way. This will be shown for neighborhoods located around Mexico City and for Cuernavaca, a medium-sized city about 80 km from the capital. While the inhabitants of the neighborhoods closely followed the evolving forms that public hygiene policies were taking (because they witnessed them and were affected in their territories), in Cuernavaca the dictates arrived as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, etc., were built, Cuernavaca remained sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that it took politicians and the group of "scientists" to carry out these enormous works in the Mexican capital kept them from addressing the issue in remote villages; it was not until the 20th century that federal hygiene policy began to be strengthened. Despite this, there are other factors that emphasize the particularities of each site. I would like to draw attention here to the different receptions that each town gave to public hygiene. We will see that Cuernavaca responded to its own semi-rural culture, history, orography, and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and address the deficiencies with their own resources (they resorted to what remained of the dried lake of Mexico to continue their lacustrine practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (in this work referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by hygienists of the nineteenth century and the first decades of the twentieth), as well as landscapes such as the neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to re-establish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta; and so on. In sum, what the nineteenth and first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking the routes of environmental conservation. When we return to the observations of the scientists, politicians, and lawyers of that period, we find a historically rejected cultural alterity.
Populations such as Cuernavaca's that, due to their history, orography, and/or the insufficiency of federal policies, kept different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, the reuse of raw and organic materials, and the consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to rise to the surface.

Keywords: sanitary hygiene, Mexico City, cultural alterity, environmental conservation, environmental history

Procedia PDF Downloads 164
10 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, provides computer-aided functionality that serves as an energy audit for residential DSM. Implementing fault detection and classification for the appliances being monitored, controlled, and optimized is one of the most important steps toward preventive maintenance, such as preventative maintenance of residential air conditioning and heating in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack is developed for residential DSM. In the EMS, Arduino MEGA Ethernet-based smart sockets, which incorporate a Real-Time Clock chip synchronized via the Network Time Protocol to timestamp readings, are designed and implemented to capture load phenomena reflected in the sensed voltage and current signals. A Network-Attached Storage device, providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods, is configured as the data store for the parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, the MySQL relational database management system, and the PHP programming language) serves as the data science analytics engine behind a dynamic web application and RESTful web services for the residential DSM, powered by Artificial Intelligence (AI)/Computational Intelligence. Here, the Java Virtual Machine, an abstract computing machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is configured for the AI workloads in this study. To send real-time push notifications to IoT clients, the desktop computer uses Google's Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In order to realize edge intelligence, whereby edge devices avoid network latency and the need for continuous Internet connectivity while still supporting secure access to data stores and providing immediate, real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino devices (called embedded AIduino). To realize edge analytics on the proposed embedded AIduino, an Arduino Ethernet shield based on the WIZnet W5100, which includes a micro SD card connector, is used; the SD library is included for reading parsed data from, and writing parsed data to, the SD card, and ArduinoANN, an Artificial Neural Network library for the Arduino MEGA, is used for the locally embedded AI implementation. The embedded AIduino developed in this study can be extended to further applications in manufacturing-industry energy management and sustainable energy management, where, for example, rotating machinery diagnostics can identify energy loss from gross misalignment and unbalance of rotating machines in power plants.
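
To make the data flow concrete, the sketch below mimics, in Python, what a smart socket does when it reports a timestamped reading to the network data store over HTTP; the endpoint URL and payload field names are assumptions for illustration only, since the actual sockets in the study are Arduino MEGA devices posting to a Network-Attached Storage server.

```python
# Minimal Python stand-in for the smart-socket data flow described above:
# timestamp a voltage/current reading and POST it to a network data store over HTTP.
# The endpoint URL and payload fields are illustrative assumptions.
import json
from datetime import datetime, timezone

import requests

NAS_ENDPOINT = "http://192.168.1.50/readings"   # assumed NAS HTTP endpoint

def post_reading(voltage_v: float, current_a: float) -> int:
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # NTP-synced clock assumed
        "voltage_v": voltage_v,
        "current_a": current_a,
        "power_w": voltage_v * current_a,                     # simple load estimate
    }
    resp = requests.post(NAS_ENDPOINT, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=5)
    return resp.status_code

if __name__ == "__main__":
    print(post_reading(230.0, 1.8))
```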

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 250
9 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although a vast literature exists on time-series data processing using machine learning, the challenges faced by industry that lead to unplanned downtimes are: (1) current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; (2) while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and (3) machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities (e.g., water, electricity) reduce unplanned downtimes and the consequent financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and again when persisting in the Data Historian to optimise storage and query performance. This sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model that combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal context, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 s to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was substantially (about 20%) higher than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials of this model is planned in other manufacturing industries.
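
The sketch below illustrates, in Python, the kind of window-level entropy and spectral-change features the method description refers to; the window length, feature definitions, and synthetic flow signal are assumptions for illustration, not the authors' actual feature pipeline.

```python
# Illustrative sketch: compute an entropy estimate and a spectral-change measure
# over sliding windows of a sensor series, the kind of auxiliary features described above.
import numpy as np

def shannon_entropy(window: np.ndarray, bins: int = 16) -> float:
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum() if hist.sum() > 0 else hist
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spectral_change(prev: np.ndarray, curr: np.ndarray) -> float:
    """L2 distance between normalized magnitude spectra of two windows."""
    def spectrum(x):
        mag = np.abs(np.fft.rfft(x - x.mean()))
        return mag / (np.linalg.norm(mag) + 1e-12)
    return float(np.linalg.norm(spectrum(curr) - spectrum(prev)))

def window_features(series: np.ndarray, win: int = 128):
    feats = []
    for start in range(win, len(series) - win + 1, win):
        prev = series[start - win:start]
        curr = series[start:start + win]
        feats.append((shannon_entropy(curr), spectral_change(prev, curr)))
    return np.array(feats)

if __name__ == "__main__":
    t = np.arange(4096)
    flow = np.sin(0.05 * t) + 0.1 * np.random.randn(t.size)
    flow[3000:] += np.sin(0.4 * t[3000:])       # injected behavioural change
    print(window_features(flow)[-5:])            # entropy/spectral shift rise near the end
```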

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 150
8 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines

Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri

Abstract:

This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-DOF, lumped-parameter structural dynamics coupled with a quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and successively validate, such a model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by a radius R = 61.5 m, a mean chord c = 3 m, and a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88◦ is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation, and low dependency of performance on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s), usually added to an existing feedback control system, which contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). It should be noted that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shift for the controller, and the delay system has been modified to e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures: using the phase shift, the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s) in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with that of an industrial PI controller.
In particular, starting from a wind speed of Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
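
A minimal discrete-time sketch of the control structure described above is given below: a PD controller augmented with a repetitive (periodic-memory) term whose internal delay is shortened by a phase shift of a few samples. The first-order plant, gains, disturbance, and sample period are illustrative assumptions only, not the aeroelastic HAWT model of the paper.

```python
# Sketch of a PD + repetitive controller with a phase-shifted internal delay,
# tracking a constant angular-velocity reference against a periodic disturbance.
import numpy as np

dt, T_sim = 0.01, 20.0          # time step and simulation length [s]
T_dist = 1.0                    # period of the disturbance [s]
N = int(T_dist / dt)            # samples per period
shift = 2                       # phase shift gamma_k, in samples
Kp, Kd, kr, q = 2.0, 0.1, 0.5, 0.95
tau = 0.5                       # plant time constant (assumed first-order plant)

omega_ref = 1.266               # target angular velocity [rad/s]
omega, e_prev = 0.0, 0.0
rc_buffer = np.zeros(N)         # one period of past repetitive-control commands

errors = []
for k in range(int(T_sim / dt)):
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * t / T_dist)               # periodic disturbance
    e = omega_ref - omega
    u_pd = Kp * e + Kd * (e - e_prev) / dt                  # PD part
    u_rc = q * rc_buffer[(k - (N - shift)) % N] + kr * e    # delayed, phase-shifted memory
    rc_buffer[k % N] = u_rc
    u = u_pd + u_rc
    omega += dt * (-omega + u + d) / tau                    # first-order plant update
    e_prev = e
    errors.append(abs(e))

print("mean |error|, first period :", np.mean(errors[:N]))
print("mean |error|, last period  :", np.mean(errors[-N:]))
```

Over successive periods the repetitive term learns the periodic component of the required command, so the tracking error in the last period is much smaller than in the first, which is the qualitative behaviour the abstract attributes to the repetitive-PD scheme.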

Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems

Procedia PDF Downloads 249
7 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis

Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García

Abstract:

Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. 
In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. Simultaneously, these can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
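
As a small illustration of the kind of (semi-)automated annotation step described above, the Python sketch below uses spaCy for named entity recognition and dependency parsing; the English "en_core_web_sm" model and the sample sentence are assumptions for illustration, since the project's own corpora are multimodal and not necessarily English.

```python
# Minimal illustrative pipeline: extract candidate actors (entities) and
# grammatical relations as raw material for coding proposals.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

text = ("The health ministry announced a new electronic patient record, "
        "raising data protection concerns among physicians and insurers.")

doc = nlp(text)

# Entity-centered view: candidate actors and objects of the discourse
for ent in doc.ents:
    print(ent.text, ent.label_)

# Dependency view: who does what to whom
for token in doc:
    if token.dep_ in ("nsubj", "dobj", "pobj"):
        print(f"{token.text:<12} --{token.dep_}--> {token.head.text}")
```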

Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis

Procedia PDF Downloads 226
6 Open Science Philosophy, Research and Innovation

Authors: C. Ardil

Abstract:

Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of a collective scholarly and social collaboration involving all stakeholders and knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. Open Science represents a novel systematic approach to the scientific process, shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and diffusing scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research results (publications and the research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering, sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research is done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative and closer to society. Open Science involves various movements aiming to remove the barriers for sharing any kind of output, resources, methods or tools, at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, or research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing in the open science philosophy. 
Revolutionary open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access the digital research literature for open source research or science data accumulation, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices. Researchers also use open science applications to their own advantage, to gain more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations, and funding. In the open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in scientific research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.

Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data

Procedia PDF Downloads 131
5 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods

Authors: Murat Arıbaş, Uğur Özcan

Abstract:

Due to the increasing number of universities and academics, university funds for research activities and the grants/supports provided by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better, since the projects serve different purposes. In addition, the evaluation process has become complicated since there is more than one evaluator and there are multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) formulated the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilités Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realité), MAUT (Multiattribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of the problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in our country, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and the extensive impact of the project. Moreover, for each main criterion, 2-4 sub-criteria were defined; hence, it was decided to evaluate projects over 13 sub-criteria in total. Due to the superiority of the AHP method in determining criteria weights and the capacity of the TOPSIS method to rank a large number of alternatives, the two methods are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution points is calculated using the Euclidean distance approach.
In the study, the main criteria and sub-criteria were compared pairwise using questionnaires developed on an importance scale by four relevant groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated criteria weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked on its merits. This new system provided the opportunity to incorporate the views of the people who take part in the project process, including preparation, evaluation, and implementation, into the evaluation of academic research projects. Moreover, instead of evaluating projects using four main criteria with equal weights, a systematic decision-making process was developed that uses the 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.
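
For concreteness, the sketch below shows, in Python, the two-stage procedure described above: AHP turns a pairwise comparison matrix into criteria weights, and TOPSIS ranks alternatives by their Euclidean distances to the ideal and negative-ideal solutions. The comparison matrix, project scores, and benefit-type assumption are illustrative, not TÜBİTAK data.

```python
# AHP (pairwise comparisons -> weights) feeding TOPSIS (closeness to ideal -> ranking).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Principal-eigenvector weights of a reciprocal pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

def topsis_rank(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Closeness coefficients (higher is better); all criteria assumed benefit-type."""
    norm = scores / np.sqrt((scores ** 2).sum(axis=0))      # vector normalization
    v = norm * weights                                       # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))          # Euclidean distances
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Example: 4 main criteria (originality, methodology, team, impact) - made-up judgments
pairwise = np.array([[1, 2, 3, 2],
                     [1/2, 1, 2, 1],
                     [1/3, 1/2, 1, 1/2],
                     [1/2, 1, 2, 1]])
weights = ahp_weights(pairwise)

# 5 candidate projects scored (1-10) on the 4 criteria - made-up scores
projects = np.array([[8, 7, 6, 9],
                     [6, 8, 7, 7],
                     [9, 5, 8, 6],
                     [7, 9, 6, 8],
                     [5, 6, 9, 7]], dtype=float)

closeness = topsis_rank(projects, weights)
print("weights:", np.round(weights, 3))
print("ranking (best first):", np.argsort(-closeness))
```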

Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method

Procedia PDF Downloads 589
4 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often have difficulty with locality, making it hard to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
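
The sketch below gives a minimal Python illustration of the two parallel 2D views just described: a spectrogram capturing periodicity in the frequency domain and a derivative heatmap emphasizing sharp fluctuations in the time domain. The segment length, synthetic signal, and the way the derivatives are folded into a 2D array are assumptions for illustration, not the paper's actual configuration.

```python
# Two parallel 2D views of a 1D series: spectrogram and derivative heatmap.
import numpy as np
from scipy.signal import spectrogram

def times2d_views(series: np.ndarray, fs: float = 1.0, nperseg: int = 64):
    # Frequency-domain view: periodicity
    freqs, times, sxx = spectrogram(series, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)

    # Time-domain view: first and second derivatives folded into fixed-length segments
    d1 = np.gradient(series)
    d2 = np.gradient(d1)
    n_seg = len(series) // nperseg
    deriv_heatmap = np.stack([
        d1[:n_seg * nperseg].reshape(n_seg, nperseg),
        d2[:n_seg * nperseg].reshape(n_seg, nperseg),
    ])                                    # shape: (2, n_segments, segment_length)
    return sxx, deriv_heatmap

if __name__ == "__main__":
    t = np.arange(2048)
    x = (np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 168)
         + 0.1 * np.random.randn(t.size))
    spec, deriv = times2d_views(x)
    print("spectrogram:", spec.shape, "derivative heatmap:", deriv.shape)
```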

Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation

Procedia PDF Downloads 42
3 Numerical Simulation of Von Karman Swirling Bioconvection Nanofluid Flow from a Deformable Rotating Disk

Authors: Ali Kadir, S. R. Mishra, M. Shamshuddin, O. Anwar Beg

Abstract:

Motivation: Rotating disk bioreactors are fundamental to numerous medical/biochemical engineering processes, including oxygen transfer, chromatography, purification, and swirl-assisted pumping. The modern upsurge in biologically-enhanced engineering devices has embraced new phenomena, including bioconvection of micro-organisms (phototactic, oxytactic, gyrotactic, etc.). The proven superior thermal performance of nanofluids, i.e., base fluids doped with engineered nanoparticles, has also stimulated widespread implementation in biomedical designs. Motivated by these emerging applications, we present a numerical thermofluid dynamic simulation of the transport phenomena in bioconvection nanofluid rotating disk bioreactor flow. Methodology: We study analytically and computationally the time-dependent three-dimensional viscous gyrotactic bioconvection in swirling nanofluid flow from a rotating disk configuration. The disk is also deformable, i.e., able to extend (stretch) in the radial direction. Stefan blowing is included. The Buongiorno dilute nanofluid model is adopted, wherein Brownian motion and thermophoresis are the dominant nanoscale effects. The primitive conservation equations for mass, radial, tangential and axial momentum, heat (energy), nanoparticle concentration, and micro-organism density function are formulated in a cylindrical polar coordinate system with appropriate wall and free stream boundary conditions. A mass convective condition is also incorporated at the disk surface. Forced convection is considered, i.e., buoyancy forces are neglected. This highly nonlinear, strongly coupled system of unsteady partial differential equations is normalized with the classical Von Karman and other transformations to render the boundary value problem (BVP) into an ordinary differential system, which is solved with the efficient Adomian decomposition method (ADM). Validation with earlier Runge-Kutta shooting computations in the literature is also conducted. Extensive computations are presented (with the aid of MATLAB symbolic software) for the radial and circumferential velocity components, temperature, nanoparticle concentration, micro-organism density number, and the gradients of these functions at the disk surface (radial local skin friction, local circumferential skin friction, local Nusselt number, local Sherwood number, and motile micro-organism mass transfer rate). Main findings: Increasing the radial stretching parameter decreases the radial velocity and radial skin friction, reduces the azimuthal velocity and skin friction, and decreases the local Nusselt number and motile micro-organism wall mass flux, whereas it increases the nano-particle local Sherwood number. Disk deceleration accelerates the radial flow, damps the azimuthal flow, decreases temperatures and thermal boundary layer thickness, depletes the nano-particle concentration magnitudes (and the associated nano-particle species boundary layer thickness), and furthermore decreases the micro-organism density number and gyrotactic micro-organism species boundary layer thickness. Increasing Stefan blowing accelerates the radial and azimuthal (circumferential) flow, elevates the temperature of the nanofluid, and boosts nano-particle concentration (volume fraction) and gyrotactic micro-organism density number magnitudes, whereas suction generates the reverse effects. Increasing suction reduces the radial and azimuthal skin friction, the local Nusselt number, and the motile micro-organism wall mass flux, whereas it enhances the nano-particle species local Sherwood number.
Conclusions: Important transport characteristics of relevance to real nanotechnological bioreactor systems, not discussed in previous works, are identified. ADM is shown to achieve very rapid convergence and highly accurate solutions and shows excellent promise in simulating swirling multi-physical nano-bioconvection fluid dynamics problems. Furthermore, it provides an excellent complement to more general commercial computational fluid dynamics simulations.
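
As a minimal illustration of the Runge-Kutta shooting approach used for validation, the Python sketch below solves the classical steady von Karman similarity equations for a rotating disk in a Newtonian fluid; it deliberately omits the nanofluid, bioconvection, radial stretching, and Stefan blowing effects of the full model, and the far-field truncation value is an assumption.

```python
# Shooting solution of the classical von Karman rotating-disk similarity equations:
# F (radial), G (azimuthal), H (axial) velocity functions of the similarity variable eta.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

ETA_INF = 12.0   # "infinity" for the similarity coordinate (assumed large enough)

def rhs(eta, y):
    F, Fp, G, Gp, H = y
    return [Fp,
            F**2 - G**2 + H * Fp,   # radial momentum
            Gp,
            2 * F * G + H * Gp,     # azimuthal momentum
            -2 * F]                 # continuity

def boundary_residual(guess):
    Fp0, Gp0 = guess
    sol = solve_ivp(rhs, [0, ETA_INF], [0.0, Fp0, 1.0, Gp0, 0.0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return [sol.y[0, -1], sol.y[2, -1]]   # require F, G -> 0 far from the disk

Fp0, Gp0 = fsolve(boundary_residual, x0=[0.5, -0.6])
print(f"F'(0) = {Fp0:.4f}  (literature value ~ 0.5102)")
print(f"G'(0) = {Gp0:.4f}  (literature value ~ -0.6159)")
```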

Keywords: bio-nanofluids, rotating disk bioreactors, Von Karman swirling flow, numerical solutions

Procedia PDF Downloads 156
2 Tackling the Decontamination Challenge: Nanorecycling of Plastic Waste

Authors: Jocelyn Doucet, Jean-Philippe Laviolette, Ali Eslami

Abstract:

The end-of-life management and recycling of polymer wastes remains a key environmental issue in ongoing efforts to increase resource efficiency and attain GHG emission reduction targets. Half of all the plastics ever produced were made in the last 13 years, and only about 16% of that plastic waste is collected for recycling, while 25% is incinerated, 40% is landfilled, and 19% is unmanaged and leaks into the environment and waterways. In addition to the plastic collection issue, the UN recently published a report on chemicals in plastics, which adds another layer of challenge when integrating recycled content containing toxic products into new products. To tackle these important issues, innovative solutions are required. Chemical recycling of plastics provides new complementary alternatives to the current recycled plastic market by converting waste material into a high-value chemical commodity that can be reintegrated in a variety of applications, making the total market size of the output (virgin-like, high-value products) larger than the market size of the input (plastic waste). Access to high-quality feedstock also remains a major obstacle, primarily due to material contamination issues. Pyrowave approaches this challenge with its innovative nano-recycling technology, which purifies polymers at the molecular level, removing undesirable contaminants and restoring the resin to its virgin state without having to depolymerise it. This breakthrough approach expands the range of plastics that can be effectively recycled, including mixed plastics with various contaminants such as lead, inorganic pigments, and flame retardants. The technology allows residual contaminant levels below 100 ppm, and purity can be tuned finely to the customer's specifications. The separation of the polymer and contaminants in Pyrowave's nano-recycling process offers the unique ability to customize the solution for the targeted additives and contaminants to be removed, based on differences in molecular size. This precise control enables the attainment of a final polymer purity equivalent to virgin resin. The patented process involves dissolving the contaminated material using a specially formulated solvent, purifying the mixture at the molecular level, and subsequently extracting the solvent to yield a purified polymer resin that can be directly reintegrated into new products without further treatment. Notably, this technology offers simplicity, effectiveness, and flexibility while minimizing environmental impact and preserving valuable resources in the manufacturing circuit. Pyrowave has successfully applied this nano-recycling technology to decontaminate polymers and supply purified, high-quality recycled plastics to critical industries, including applications requiring food-contact compliance. The technology is low-carbon, electrified, and provides 100% traceable resins with properties identical to those of virgin resins. Additionally, the issue of low recycling rates and the limited market for traditionally hard-to-recycle plastic waste has fueled the need for new complementary alternatives. Chemical recycling, such as Pyrowave's microwave depolymerization, presents a sustainable and efficient solution by converting plastic waste into high-value commodities. By employing microwave catalytic depolymerization, Pyrowave enables a truly circular economy of plastics, particularly in treating polystyrene waste to produce virgin-like styrene monomers. 
This revolutionary approach boasts low energy consumption, high yields, and a reduced carbon footprint. Pyrowave offers a portfolio of sustainable, low-carbon, electric solutions to give plastic waste a second life and paves the way to the new circular economy of plastics. Here, particularly for polystyrene, we show that styrene monomer yields from Pyrowave's polystyrene microwave depolymerization reactor are 1.5 to 2.2 times higher than those of conventional thermal pyrolysis. In addition, we provide a detailed understanding of the microwave-assisted depolymerization by analyzing the effects of microwave power, pyrolysis time, microwave receptor, and temperature on styrene product yields. Furthermore, we investigate the life-cycle environmental impact of microwave-assisted pyrolysis of polystyrene at commercial-scale production. Finally, it is worth pointing out that Pyrowave is able to treat several tons of polystyrene to produce virgin styrene monomers and to manage waste/contaminated polymeric materials in a truly circular economy.

Keywords: nanorecycling, nanomaterials, plastic recycling, depolymerization

Procedia PDF Downloads 66
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance

Authors: Mina Naeini, Thomas A. Adams II

Abstract:

Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat that make them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of the communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages of SOFCs over competing power generation units, this technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, a proper knowledge of degradation mechanisms and of the effects of operating conditions on SOFCs' long-term performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results since they often underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although these models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of yttria-stabilized zirconia (YSZ) electrolytes, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for a stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs as they only account for the electrochemical activity reduction. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources. The model is developed as a function of current density and the H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent with multiple studies by different independent groups. Simulations using the degradation-based model illustrated that the SOFC voltage drops significantly in the first 1500 hours of operation, after which the cells exhibit a slower degradation rate. 
The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs: the literature is inconsistent in how the degradation rate is defined and calculated. In the literature, the degradation rate has been calculated as the slope of the voltage-versus-time plot, expressed as the percentage of voltage drop per 1000 hours of operation. Due to the nonlinear profile of voltage over time, the degradation rate magnitude then depends on the size of the time window selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, the current density has the highest impact on the degradation rate compared to other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model.
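
The sketch below illustrates, in Python, the definitional point made above: for a nonlinear voltage-time curve, a "percent per 1000 hours" slope depends on the window over which it is computed, whereas an instantaneous rate does not. The synthetic voltage profile and parameter values are assumptions chosen only to mimic fast early degradation followed by a slower decline.

```python
# Windowed vs. instantaneous degradation rate for a synthetic, nonlinear voltage profile.
import numpy as np

hours = np.linspace(0, 10_000, 2001)
v0 = 0.85                                              # initial cell voltage [V] (assumed)
voltage = v0 * (1 - 0.04 * (1 - np.exp(-hours / 1500)) - 4e-6 * hours)

def windowed_rate(t0: float, t1: float) -> float:
    """Average degradation rate in % of initial voltage per 1000 h over [t0, t1]."""
    vi = np.interp([t0, t1], hours, voltage)
    return (vi[0] - vi[1]) / v0 * 100 / ((t1 - t0) / 1000)

def instantaneous_rate(t: float) -> float:
    """Local degradation rate in % of initial voltage per 1000 h at time t."""
    dv_dt = np.interp(t, hours[:-1], np.diff(voltage) / np.diff(hours))
    return -dv_dt / v0 * 100 * 1000

print("slope over 0-1500 h    :", round(windowed_rate(0, 1500), 3), "%/1000 h")
print("slope over 0-10000 h   :", round(windowed_rate(0, 10_000), 3), "%/1000 h")
print("instantaneous at 5000 h:", round(instantaneous_rate(5000), 3), "%/1000 h")
```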

Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs

Procedia PDF Downloads 130