Search results for: STEP fault
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3479


119 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. This model provides the user with theoretical support for designing lithium-ion battery parameters, such as the adjustment direction for the material particle or diffusion coefficient parameters. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), including Fick’s law of diffusion and the MacInnes and Ohm equations, among others. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods are available in the literature for this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. First, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, its accuracy and stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods and has the advantages of being second-order accurate in time and unconditionally stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, making it feasible for both short- and long-term tests. 
This last remark is particularly important, as this discretization technique allows the user to apply parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, to this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, spectral methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the explicit Euler method for long-term tests is presented. The Crank-Nicolson and Chebyshev discretization methods are then compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
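The stability contrast between the two time-stepping schemes can be illustrated with a minimal sketch. The code below applies both methods to a dimensionless 1D diffusion equation with Dirichlet boundaries, a stand-in for the electrolyte diffusion PDE; the grid size, time step, and diffusion number are illustrative choices, not DFN parameters:

```python
import numpy as np

def lap1d(n):
    # Second-difference matrix for the interior nodes (Dirichlet boundaries).
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def explicit_euler(u0, r, steps):
    # u^{k+1} = u^k + r A u^k; stable only for r = D*dt/dx^2 <= 1/2.
    A = lap1d(len(u0))
    u = u0.copy()
    for _ in range(steps):
        u = u + r * (A @ u)
    return u

def crank_nicolson(u0, r, steps):
    # (I - r/2 A) u^{k+1} = (I + r/2 A) u^k; stable for any r > 0.
    n = len(u0)
    A = lap1d(n)
    lhs = np.eye(n) - 0.5 * r * A
    rhs = np.eye(n) + 0.5 * r * A
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u

# r = 1.0 deliberately violates the explicit stability limit of 1/2.
u0 = np.sin(np.pi * np.linspace(0.0, 1.0, 41))[1:-1]
u_euler = explicit_euler(u0, 1.0, 200)   # high-frequency modes blow up
u_cn = crank_nicolson(u0, 1.0, 200)      # solution decays smoothly
```

The Crank-Nicolson amplification factor has magnitude below one for every spatial mode, which is why its run stays bounded at the same diffusion number that destroys the explicit run.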

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 23
118 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. Type IV pressure vessels are currently the most popular and widely developed technology for on-board storage, owing to their high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a large amount of composite material, a major cost driver for hydrogen tanks. Evidently, optimization of the composite layup design shows great potential for reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional, and each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation of different tank designs over various parameters is automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically via the Abaqus-Python scripting interface. 
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce computational complexity and calculation time. Finally, the results are evaluated and compared with respect to ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of the tank structure. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Automating the simulation process therefore provides a rapid way to compare the various designs and indicate the optimal one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis; machine learning could then be applied to this data pool to obtain the optimum directly, without running new simulations.
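The sweep-and-compare pattern described above can be sketched as follows. This is not the Abaqus-Python API itself: the design variables, the `build_model`/`evaluate` stand-ins, and the toy scoring rule are all hypothetical placeholders for the scripted model generation, job submission, and postprocessing steps:

```python
import itertools

# Hypothetical design space (illustrative values, not the study's parameters).
ANGLES = [15, 30, 55, 75]      # helical winding angles, degrees
LAYER_COUNTS = [8, 12, 16]     # number of composite layers

def build_model(angle, n_layers):
    # Placeholder for the scripted model generation step
    # (done through the Abaqus-Python scripting interface in the study).
    return {"angle": angle, "layers": n_layers}

def evaluate(model):
    # Placeholder for job submission + postprocessing; here a toy score that
    # rewards angles near the netting-analysis optimum (~54.7 deg) and more layers.
    return model["layers"] * (1.0 - abs(model["angle"] - 54.7) / 90.0)

def sweep():
    # Enumerate every layup variant, evaluate it, and keep the strongest design.
    results = {}
    for angle, n in itertools.product(ANGLES, LAYER_COUNTS):
        results[(angle, n)] = evaluate(build_model(angle, n))
    return max(results, key=results.get)

best_angle, best_layers = sweep()
```

The same driver loop, with the placeholders swapped for real model generation and solver calls, is also what would populate the data bank of layups mentioned above.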

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
117 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study

Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin

Abstract:

Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games has revealed that approximately three quarters of injuries occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors, including financial burden and time lost from competition. Research has shown it is possible to record injury data consistently within the GAA through a closed online recording system known as the GAA injury surveillance database, and it has been established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme addressing five key components/goals: avoid positions associated with a high risk of injury, enhance flexibility, enhance strength, optimize plyometrics, and address sport-specific agilities. These key components are internationally recognized through the Prevent Injury, Enhance Performance (PEP) programme, which has been shown to reduce ACL injuries by 74%. In national Gaelic games, the programme is known as the GAA15 and is devised from the principles of the PEP. No such injury prevention strategies have been published for this cohort in Gaelic games to date. This study investigates the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15. 
The programme was performed before every training session and game, and the college sports rehabilitation athletic therapist accessed the existing GAA injury surveillance database to monitor players’ injuries. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group’s Y Balance and Three Hop Test scores to those of an age- and gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p=.001 and p=.001). A decrease in performance was observed for the Three Hop Test (p=.039). Overall, twenty-five injuries were reported during the season, giving an injury rate of 3.00 injuries/1000 hrs of participation: 1.25 injuries/1000 hrs in training and 4.25 injuries/1000 hrs in match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
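The injury rates quoted above follow the standard exposure-normalized definition; a quick sketch, where the exposure hours are back-calculated from the reported rate and are therefore approximate, not a figure from the study:

```python
def injuries_per_1000_hours(n_injuries, exposure_hours):
    # Injury incidence rate normalized to 1000 hours of exposure.
    return 1000.0 * n_injuries / exposure_hours

# 25 injuries at the reported 3.00 injuries/1000 hrs implies roughly 8333
# hours of total participation (an inferred, approximate figure).
overall = injuries_per_1000_hours(25, 8333.3)
non_contact_share = 10 / 25   # 10 of the 25 injuries were non-contact (40%)
```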

Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training

Procedia PDF Downloads 339
116 Food Consumption and Adaptation to Climate Change: Evidence from Ghana

Authors: Frank Adusah-Poku, John Bosco Dramani, Prince Boakye Frimpong

Abstract:

Climate change is considered a principal threat to human existence and livelihoods. The persistence and intensity of droughts and floods in recent years have adversely affected food production systems and value chains, making it impossible to end global hunger by 2030. Thus, this study aims to examine the effect of climate change on food consumption for both farm and non-farm households in Ghana. An important focus of the analysis is to investigate how climate change affects alternative dimensions of food security, examine the extent to which these effects vary across heterogeneous groups, and explore the channels through which climate change affects food consumption. Finally, we conducted a pilot study to understand the significance of farm and non-farm diversification measures in reducing the harmful impact of climate change on farm households. The article uses two secondary datasets and one primary dataset. The first secondary dataset is the Ghana Socioeconomic Panel Survey (GSPS), a household panel collected over the period 2009 to 2019. The second is monthly district-level gridded rainfall and temperature data from the Ghana Meteorological Agency, matched to the GSPS at the district level. Finally, the primary data were obtained from a survey of farm and non-farm adaptation practices used by farmers in three regions of Northern Ghana. The study employed a household fixed effects model to estimate the effect of climate change (measured by temperature and rainfall) on food consumption in Ghana, exploiting the spatial and temporal variation in temperature and rainfall across districts to estimate the household-level model. Evidence of potential mechanisms through which climate change affects food consumption was explored in two steps. First, the potential mechanism variables were regressed on temperature, rainfall, and the control variables. 
In the second and final step, the potential mechanism variables were included as additional covariates in the first model. The results revealed that extreme average temperatures and drought decreased food consumption and reduced the intake of important nutrients such as carbohydrates, protein, and vitamins. The results further indicated that low rainfall increased food insecurity among households with no education compared with those with primary or secondary education. Non-farm activity and silos were revealed as the transmission pathways through which the effect of climate change on farm households can be moderated. Finally, the results indicated that over 90% of the smallholder farmers interviewed had no farm diversification adaptation strategies for climate change, and a little over 50% of the farmers operated unskilled or manual non-farm economic ventures. This makes it very difficult for the majority of farmers to withstand climate-related shocks. These findings suggest that achieving the Sustainable Development Goal of Zero Hunger by 2030 requires an integrated approach, such as reducing over-reliance on rainfed agriculture, educating farmers, and implementing non-farm interventions to improve food consumption in Ghana.
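A household fixed-effects estimate of this kind can be sketched with the within (demeaning) transformation. The panel below is synthetic with known coefficients, purely to illustrate the estimator; it is not the GSPS data or the study's specification:

```python
import numpy as np

def fixed_effects_ols(y, X, groups):
    # Within transformation: subtract each household's mean from y and X,
    # which removes the time-invariant fixed effect, then run pooled OLS.
    yd = y.astype(float).copy()
    Xd = X.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Synthetic panel: 50 households x 10 periods; true effects are
# +2.0 for "temperature" and -1.5 for "rainfall" (illustrative only).
rng = np.random.default_rng(1)
n_hh, n_t = 50, 10
groups = np.repeat(np.arange(n_hh), n_t)
X = rng.normal(size=(n_hh * n_t, 2))
alpha = np.repeat(rng.normal(0, 5, n_hh), n_t)   # household fixed effects
y = alpha + X @ np.array([2.0, -1.5]) + rng.normal(0, 0.05, n_hh * n_t)
beta = fixed_effects_ols(y, X, groups)
```

Because the fixed effects are differenced out, the estimate recovers the true coefficients even though the household intercepts are large relative to the climate effects.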

Keywords: climate change, food consumption, Ghana, non-farm activity

Procedia PDF Downloads 6
115 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand

Authors: Mathuravech Thanaphon, Thephasit Nat

Abstract:

The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in the grid with increasing levels of RE penetration, as well as rising peak demand, EGAT has been studying the potential for additional PSH capacity for several years to enable an increased share of RE and replace existing fossil-fuel-fired generation, as well as the role that pumped-storage hydropower would play in fulfilling multiple grid functions and integrating renewables. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most electricity generation will come from RE, chiefly wind and photovoltaics, so significant additional energy storage capacity will be needed. In this paper, the impact of integrating the PSH system on the adequacy of renewable-rich power generating systems, with the aim of reducing thermal generating units, is investigated. The variations of the system adequacy indices are analyzed for different PSH-renewables capacities and storage levels. The challenging test case is the Power Development Plan 2018 rev.1 (PDP2018 rev.1), modified by integrating six new PSH systems and the RE planning and development expected after 2030. The system adequacy indices for power generation are obtained using Multi-Objective Genetic Algorithm (MOGA) optimization. MOGA is a probabilistic, stochastic heuristic able to find global minima, with the advantage that the fitness function does not require gradient information. In this sense, the method is flexible in solving reliability optimization problems for a composite power system. 
The optimization uses an hourly time step over a planning horizon of years, much longer than the weekly horizon usual in scheduling studies. The objective function is optimized in MATLAB to maximize RE energy generation, minimize energy imbalances, and minimize thermal power generation. The PDP2018 rev.1 is simulated based on its planned capacity through 2030 and 2050. Four main scenario analyses are conducted according to the target share of renewables: 1) Business-As-Usual (BAU), 2) National Targets (30% RE in 2030), 3) Carbon Neutrality Targets (50% RE in 2050), and 4) 100% RE (full decarbonization). According to the results, generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system and better allocate renewable energy generation, reducing thermal generation and improving system reliability. These results show that a significant level of reliability improvement can be obtained with PSH, especially in renewable-rich power systems.
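The optimization step can be illustrated with a toy genetic algorithm. The hourly demand and renewable profiles, the capacity figure, the weights, and the weighted-sum scalarization of the two objectives below are all illustrative stand-ins for the study's MOGA formulation in MATLAB, not its actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 24                                    # hourly time steps (one toy day)
demand = 1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, H))
renewables = np.clip(0.8 * np.sin(np.linspace(0, np.pi, H)), 0.0, None)
CAP = 1.5                                 # thermal capacity (illustrative)

def fitness(x, w=(0.3, 0.7)):
    # Weighted-sum scalarization of two objectives to minimize:
    # total thermal generation and total energy imbalance.
    thermal = CAP * x
    f_thermal = thermal.sum()
    f_imbalance = np.abs(demand - renewables - thermal).sum()
    return w[0] * f_thermal + w[1] * f_imbalance

def moga(pop_size=40, gens=60, p_mut=0.1):
    pop = rng.random((pop_size, H))       # thermal dispatch fractions in [0, 1]
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        new = [pop[scores.argmin()].copy()]           # elitism
        while len(new) < pop_size:
            parents = []
            for _ in range(2):                        # binary tournaments
                i, j = rng.integers(pop_size, size=2)
                parents.append(pop[i] if scores[i] < scores[j] else pop[j])
            mask = rng.random(H) < 0.5                # uniform crossover
            child = np.where(mask, parents[0], parents[1])
            mut = rng.random(H) < p_mut               # Gaussian mutation
            child = np.clip(child + mut * rng.normal(0.0, 0.1, H), 0.0, 1.0)
            new.append(child)
        pop = np.array(new)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmin()], float(scores.min())

best, best_score = moga()
```

As in the study's setup, no gradient of the fitness function is needed; selection, crossover, and mutation alone drive the population toward dispatch schedules that trade off thermal output against imbalance.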

Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm

Procedia PDF Downloads 57
114 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) spectroscopy are that large areas of a few square centimeters can be measured in minutes and that the intense QCL source makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the Laser Direct Infrared (LDIR) 8700 imaging system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample: total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid, highly automated techniques to measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage of sample preparation is highly desirable, as removing a sample-handling step can both improve laboratory efficiency and reduce opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, which has, to date, limited the ability to perform direct “on-filter” analysis. 
This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and innovations that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, an improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, its degree of automation, and its excellent S/N, the LDIR could also be applied to various other sample types, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of these and focus on the benefits of the LDIR versus classical techniques.
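Automated polymer identification of this kind ultimately reduces to matching each particle's IR spectrum against a reference library. A toy sketch with synthetic Gaussian bands and a cosine-similarity match follows; the band positions, library entries, and matching rule are simplified illustrations, not the instrument's actual spectra or algorithm:

```python
import numpy as np

wn = np.linspace(1000.0, 1800.0, 400)      # wavenumber axis, cm^-1

def synth(bands):
    # Synthetic absorbance spectrum: sum of Gaussian bands, unit-normalized.
    s = sum(a * np.exp(-(((wn - c) / w) ** 2)) for c, a, w in bands)
    return s / np.linalg.norm(s)

# Illustrative reference library (hypothetical, simplified band positions).
library = {
    "polyethylene-like": synth([(1465, 1.0, 15)]),
    "polypropylene-like": synth([(1455, 1.0, 15), (1375, 0.8, 12)]),
    "PET-like": synth([(1715, 1.0, 12), (1240, 0.9, 18)]),
}

def identify(spectrum):
    # The library entry with the highest cosine similarity wins.
    return max(library, key=lambda name: float(spectrum @ library[name]))

# A noisy "measured" particle spectrum resembling the PET-like reference.
rng = np.random.default_rng(2)
measured = synth([(1713, 0.9, 13), (1242, 1.0, 17)]) + rng.normal(0, 0.005, wn.size)
match = identify(measured / np.linalg.norm(measured))
```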

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
113 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unknown Persons

Authors: Saule Mussabekova

Abstract:

A number of chemical elements that are widespread in nature are seldom found in the human body, and vice versa. This reflects the selective accumulation of elements in the body, largely independent of wide variations in the external environment. Trace-element analysis of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for identifying a person. Given the long-standing technogenic pressure of large industrial cities and the region-specific, multi-factor toxic burden from numerous industrial enterprises, it is important to assess the relevance of hair analysis for evaluating the degree of exposure to specific pollutants. Hair is a highly sensitive biological indicator that makes it possible to assess the ecological situation and to zone large territories using geochemical methods. Moreover, monitoring the concentrations of chemical elements in the regions of Kazakhstan provides data that can be used in the forensic medical identification of the dead bodies of unknown persons. Methods based on determining the chemical composition of hair, followed by computer processing, made it possible to compare the data obtained with average values for a given sex and age and to reveal significant deviations. This makes it possible to tentatively infer the person’s region of residence, allowing police searches for missing persons to be focused, and enables targeted legal actions for further identification based on a more optimal, strictly individual identification scheme. Hair is a highly suitable material for forensic research, as it can be stored long-term without time limitations or special equipment. 
Moreover, quantitative trace-element analysis correlates well with environmental pollution levels, reflects occupational diseases, and helps not only to determine a person’s region of temporary residence but also to establish his or her migration routes. Characteristic features of the elemental composition of hair were established, independent of age and sex, for persons residing in particular territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of the population in different regions of Kazakhstan were systematized. Concentration coefficients of the studied elements in hair, relative to the regional average values, were calculated for each region. Groups of regions with a specific spectrum of elements were identified, in which these elements accumulate in hair in quantities exceeding the average values. The results showed significant differences in element concentrations between the studied groups and indicated that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominant in each region. Overall, the elemental composition of hair from residents of different regions of Kazakhstan reflects the technogenic spectrum of elements in each region.
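The concentration coefficients described above are simple ratios of a sample's element content to the regional average. A sketch with hypothetical values follows; the element list, concentrations, and flagging threshold are illustrative, not the study's measured data:

```python
def concentration_coefficients(sample, regional_mean):
    # Kc = element concentration in the hair sample / regional average value.
    return {el: sample[el] / regional_mean[el] for el in sample}

def anomalous_elements(kc, threshold=1.5):
    # Elements accumulated well above the regional average.
    return sorted(el for el, v in kc.items() if v > threshold)

# Hypothetical concentrations in micrograms per gram of hair.
sample = {"Pb": 12.0, "Zn": 180.0, "Cu": 11.0}
regional = {"Pb": 4.0, "Zn": 170.0, "Cu": 10.0}
kc = concentration_coefficients(sample, regional)
flagged = anomalous_elements(kc)
```

A sample whose flagged-element spectrum matches the signature of a particular region would, under this scheme, point to that region as the likely place of residence.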

Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements

Procedia PDF Downloads 142
112 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education

Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke

Abstract:

In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor which interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study has targeted 50 university students with the aim of teaching them to cultivate their ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, as well as their love of learning and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discourse on the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale, a 3-item agency scale and furthermore disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning. 
Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and towards improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage and attitudes towards learning among undergraduate and graduate university students. The findings of this study aim to promote profound contemplation, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.

Keywords: deep work, flow, higher education, lifelong learning, love of learning

Procedia PDF Downloads 68
111 Rainwater Management: A Case Study of Residential Reconstruction of Cultural Heritage Buildings in Russia

Authors: V. Vsevolozhskaia

Abstract:

Since 1990, energy-efficient development concepts have constituted both a turning point in civil engineering and a challenge for an environmentally friendly future. Energy and water currently play an essential role in the sustainable economic growth of the world in general and Russia in particular: the efficiency of the water supply system is the second most important parameter for energy consumption according to the British assessment method, while the water-energy nexus has been identified as a focus for accelerating sustainable growth and developing effective, innovative solutions. The activities considered in this study were aimed at organizing and executing the renovation of the property in residential buildings located in St. Petersburg, specifically buildings with local or federal historical heritage status under the control of the St. Petersburg Committee for the State Inspection and Protection of Historic and Cultural Monuments (KGIOP) and UNESCO. Even after reconstruction, these buildings still fall into energy efficiency class D. Russian Government Resolution No. 87 on the structure and required content of project documentation contains a section entitled ‘Measures to ensure compliance with energy efficiency and equipment requirements for buildings, structures, and constructions with energy metering devices’. Mention is made of the need to install collectors and meters, which only calculate energy, neglecting the main purpose: to make buildings more energy-efficient, potentially even energy efficiency class A. The least-explored aspects of energy-efficient technology in the Russian Federation remain the water balance and the possibility of implementing rain and meltwater collection systems. These modern technologies are used exclusively for new buildings due to a lack of government directive to create project documentation during the planning of major renovations and reconstruction that would include the collection and reuse of rainwater. 
Energy-efficient technology for rain and meltwater collection is currently applied only to new buildings, even though research has shown that using rainwater is safe and offers a major step forward in terms of eco-efficiency and water innovation. Where conservation is mandatory, making changes to protected sites is prohibited. In most cases, the protected site is the cultural heritage building itself, including the main walls and roof; however, the installation of a second water supply system and the collection of rainwater would not affect the protected building itself. Water efficiency in St. Petersburg is currently addressed only through the installation of shutoff valves that regulate pipeline flow. The development of technical guidelines for the use of grey- and/or rainwater to meet the needs of residential buildings during reconstruction or renovation is not yet complete. The ideas for water treatment, collection and distribution systems presented in this study should be taken into consideration during the reconstruction or renovation of residential cultural heritage buildings under the protection of KGIOP and UNESCO. The methodology also has the potential to be extended to other cultural heritage sites in northern countries and regions with an average annual rainfall of over 600 mm, enough to cover average toilet-flush needs.
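The sizing logic behind the 600 mm rainfall threshold can be sketched as a simple annual balance. The roof area, occupancy, runoff coefficient, and per-flush figures below are illustrative assumptions, not values from the study:

```python
def annual_harvest_m3(roof_area_m2, rainfall_mm, runoff_coeff=0.8):
    # Collectable rainwater per year: area x rainfall depth x runoff losses.
    return roof_area_m2 * (rainfall_mm / 1000.0) * runoff_coeff

def annual_flush_demand_m3(residents, flushes_per_day=4, litres_per_flush=6.0):
    # Toilet-flush demand per year, in cubic meters.
    return residents * flushes_per_day * litres_per_flush * 365 / 1000.0

# Hypothetical building: 500 m^2 roof, 600 mm/year rainfall, 30 residents.
harvest = annual_harvest_m3(500.0, 600.0)   # collectable volume per year
demand = annual_flush_demand_m3(30)         # flush demand per year
coverage = harvest / demand                 # fraction of flush demand covered
```

Under these assumed numbers the harvest covers most of the annual flush demand, which is the kind of balance the 600 mm threshold is meant to capture.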

Keywords: cultural heritage, energy efficiency, renovation, rainwater collection, reconstruction, water management, water supply

Procedia PDF Downloads 92
110 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new generation of sodium-cooled fast reactors (SFRs), require a risk-informed approach to model severe accidents (SAs) and their consequences in case of an outside release. In SFRs, aerosols are produced during a core disruptive accident when primary-system sodium is ejected into the containment and burns in contact with the air, producing sodium aerosols. One of the key aspects of safety evaluation is the in-containment behavior of sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as the fire can both thermally damage the containment vessel and create an overpressurization risk. Besides, during the fire, fission products initially dissolved in the primary sodium can be aerosolized or, as can be the case for some fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, toxic and volatile, being the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), these aerosols would also become radioactive; if leaked into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms for fission products, the radiological consequences of an SA leading to a loss of containment leak-tightness will also be affected. This work is split into two phases. First, a method is proposed to theoretically understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission product species I2 and HI. 
Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity for iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, which provided adsorption energies and a description of the optimized configuration of the adsorbate on the stable surface. Secondly, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing the gas through a reaction chamber containing a Na2CO3 aerosol deposit. The iodine concentration was then measured at the exit of the reaction chamber. Preliminary observations indicate an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is the first step in addressing the gaps in knowledge of the in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in integrated computer simulation tools to analyze and evaluate SFR safety designs.
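The adsorption energies obtained from such DFT calculations are conventionally defined as the energy of the adsorbed system relative to its separated parts; as a sketch of the standard convention (this formula is assumed, not quoted from the abstract):

```latex
E_{\mathrm{ads}} = E_{\mathrm{surface+adsorbate}} - E_{\mathrm{surface}} - E_{\mathrm{adsorbate}}
```

so a negative value of $E_{\mathrm{ads}}$ indicates thermodynamically favorable adsorption of I2 or HI on the Na2CO3 surface.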

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 215
109 Hydrodynamics in Wetlands of Brazilian Savanna: Electrical Tomography and Geoprocessing

Authors: Lucas M. Furlan, Cesar A. Moreira, Jepherson F. Sales, Guilherme T. Bueno, Manuel E. Ferreira, Carla V. S. Coelho, Vania Rosolen

Abstract:

Located in the western part of the State of Minas Gerais, Brazil, the study area is a savanna environment, represented by a sedimentary plateau with a soil cover composed of lateritic and hydromorphic soils; in the latter, deferruginization occurs and high-alumina clays, exploited as refractory material, become concentrated. In the hydromorphic topographic depressions (wetlands), the hydropedological relationships are little known, but it is observed that in times of rainfall the depressed region behaves like a natural seasonal reservoir, which suggests that the wetlands on the surface of the plateau are places of aquifer recharge. Aquifer recharge areas are extremely important for the sustainable social, economic, and environmental development of societies. The hydrodynamics of the ferruginous and hydromorphic lateritic soil system in the savanna environment is a subject rarely explored in the literature, especially through the joint application of geoprocessing by UAV (unmanned aerial vehicle) and electrical tomography. The objective of this work is to understand the hydrogeological dynamics of a wetland (with an area of 426,064 m²) in the Brazilian savanna, as well as the subsurface architecture of the hydromorphic depressions in relation to aquifer recharge. The wetland was divided into three different regions based on the geoprocessing, and hydraulic conductivity studies were performed in each of these three portions. Electrical tomography was performed on 9 lines of 80 meters in length spaced 10 meters apart (direction N45), plus one 80-meter line perpendicular to all the others. With these data, it was possible to generate a 3D cube.
The integrated analysis showed that the area behaves like a natural seasonal reservoir in the months of greatest precipitation (December, 289 mm; January, 277.9 mm; February, 213.2 mm), because the hydraulic conductivity is very low throughout the area. For the aerial images, a geotag correction was performed, i.e., the image coordinates were corrected using coordinates from the Precise Point Positioning service of the Brazilian Institute of Geography and Statistics (IBGE-PPP). The orthomosaic and the digital surface model (DSM) were then generated, which, with specific geoprocessing, yielded the volume of water that the wetland can contain: 780,922 m³ in total, 265,205 m³ in the region with intermediate flooding, and 49,140 m³ in the central region, where the greatest accumulation of water was observed. Through electrical tomography it was possible to identify that, down to a depth of 6 meters, water infiltrates vertically in the central region. From 8 meters depth, the water encounters a more resistive layer and infiltration begins to occur horizontally, tending to concentrate aquifer recharge to the northeast and southwest of the wetland. The hydrodynamics of the area is complex, and its understanding poses many challenges. The next step is to relate the hydrodynamics to the evolution of the landscape, with the enrichment of high-alumina clays, and to propose a management model for the seasonal reservoir.
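The DSM-based storage-volume estimate described above amounts to a grid summation; the following is a hypothetical minimal sketch with a toy elevation grid and an assumed spill level, not the study's data:

```python
import numpy as np

# Illustrative sketch: estimating storable water volume from a digital
# surface model (DSM). Grid values, cell size, and spill level are
# made-up numbers, not the study's data.
cell_size = 2.0      # cell edge length in meters -> cell area = 4 m^2
spill_level = 101.0  # elevation (m) at which the depression overflows

# Toy 3x3 DSM elevation grid (m); lower cells can hold more water
dsm = np.array([
    [101.5, 100.8, 101.2],
    [100.6, 100.2, 100.7],
    [101.1, 100.9, 101.4],
])

# Water depth per cell: spill level minus ground elevation, clipped at zero
depth = np.clip(spill_level - dsm, 0.0, None)

# Volume = sum over cells of depth times cell area
volume_m3 = float(depth.sum() * cell_size**2)
```

In practice the same summation would run over the full UAV-derived DSM raster rather than a toy grid.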

Keywords: electrical tomography, hydropedology, unmanned aerial vehicle, water resources management

Procedia PDF Downloads 146
108 The Measurement of City Brand Effectiveness as Methodological and Strategic Challenge: Insights from Individual Interviews with International Experts

Authors: A. Augustyn, M. Florek, M. Herezniak

Abstract:

Since public authorities are constantly pressured by public opinion to showcase tangible and measurable results of their efforts, the evaluation of place brand-related activities becomes a necessity. Given the political and social character of the place branding process, the legitimization of branding efforts requires that the objectives set out in the city brand strategy comply with the actual needs, expectations, and aspirations of various internal stakeholders. To deliver on these diverse promises, city authorities and brand managers need to translate them into measurable indicators against which the effectiveness of the brand strategy can be evaluated. In concert with these observations are findings from the branding and marketing literature, where there is widespread consensus that places should adopt a more systematic and holistic approach in order to ensure the performance of their brands. However, the measurement of place branding effectiveness remains insufficiently explored in theory, even though it is considered a significant step in the process of place brand management. Therefore, the aim of the research presented in the current paper was to collect insights on the nature of effectiveness measurement of city brand strategies and to juxtapose these findings with theoretical assumptions formed on the basis of a state-of-the-art literature review. To this end, 15 international academic experts (out of 18 initially selected), with affiliations in ten countries on five continents, were individually interviewed. A standardized set of 19 open-ended questions was used for all interviewees, who had been selected on the basis of their expertise and reputation in the fields of place branding and marketing.
Findings were categorized into four modules: (i) conceptualizations of city brand effectiveness, (ii) methodological issues of city brand effectiveness measurement, (iii) the nature of the measurement process, and (iv) the articulation of key performance indicators (KPIs). Within each module, the interviewees offered diverse insights into the subject based on their academic expertise and professional activity as consultants. They proposed a twofold understanding of effectiveness: a narrow one, in which it is conceived as the aptitude to achieve specific goals, and a broad one, in which city brand effectiveness is seen as an improvement in the social and economic reality of a place, which in turn poses diverse challenges for measurement concepts and processes. Moreover, the respondents offered a variety of insights into methodological issues, particularly the need for customization and flexibility of measurement systems, for an interdisciplinary approach to measurement, and the implications resulting therefrom. Considerable emphasis was put on an inward approach to measurement, namely the necessity of monitoring residents' evaluations of brand-related activities instead of benchmarking cities against a competitive set. Other findings encompass the issues of developing appropriate KPIs for the city brand, managing the measurement process, and including diverse stakeholders to produce a sound measurement system. Furthermore, the interviewees enumerated the mistakes most frequently made in measurement, mainly resulting from misunderstanding the nature of city brands. This research was financed by the National Science Centre, Poland, research project no. 2015/19/B/HS4/00380, "Towards the categorization of place brand strategy effectiveness indicators – findings from strategic documents of Polish district cities – theoretical and empirical approach."

Keywords: city branding, effectiveness, experts’ insights, measurement

Procedia PDF Downloads 144
107 Implementing Urban Rainwater Harvesting Systems: Between Policy and Practice

Authors: Natàlia Garcia Soler, Timothy Moss

Abstract:

Despite the multiple benefits of sustainable urban drainage, as demonstrated in numerous case studies across the world, urban rainwater harvesting techniques are generally restricted to isolated model projects. The leap from niche to mainstream has, in most cities, proved an elusive goal. Why policies promoting rainwater harvesting are limited in their widespread implementation has seldom been subjected to systematic analysis. Much of the literature on the policy, planning, and institutional contexts of these techniques focuses either on their potential benefits or on project design, but very rarely on a critical-constructive analysis of past experiences of implementation. Moreover, the vast majority of these contributions are restricted to single-case studies. There is a dearth of knowledge with respect to, firstly, policy implementation processes and, secondly, multi-case analysis. Insights from both, the authors argue, are essential to inform more effective rainwater harvesting in cities in the future. This paper presents preliminary findings from a social science research project on rainwater harvesting in cities funded by the Swedish Research Foundation (Formas). This project, UrbanRain, is examining the challenges and opportunities of mainstreaming rainwater harvesting in three European cities. The paper addresses two research questions: firstly, what lessons can be learned on suitable policy incentives and planning instruments for rainwater harvesting from a meta-analysis of the relevant international literature and, secondly, how far these lessons are reflected in a study of past and ongoing rainwater harvesting projects in a European forerunner city. This two-tier approach frames the structure of the paper. We present, first, the results of the literature analysis on policy and planning issues of urban rainwater harvesting.
Here, we analyze quantitatively and qualitatively the literature of the past 15 years on this topic in terms of thematic focus, issues addressed, and key findings, and we draw conclusions on research gaps, highlighting the need for more studies on implementation factors, actor interests, institutional adaptation, and multi-level governance. In a second step, we focus on the experience of rainwater harvesting in Berlin and present the results of a mapping exercise covering a wide variety of projects implemented there over the last 30 years. Here, we develop a typology to characterize the rainwater harvesting projects in terms of policy issues (what problems and goals are targeted), project design (what kinds of solutions are envisaged), project implementation (how and when they were implemented), location (whether they are in new or existing urban developments), and actors (which stakeholders are involved and how), paying particular attention to the shifting institutional framework in Berlin. Mapping and categorizing these projects is based on a combination of document analysis and expert interviews. The paper concludes by synthesizing the findings, identifying how far the goals, governance structures, and instruments applied in the Berlin projects studied reflect the findings emerging from the meta-analysis of the international literature on policy and planning issues of rainwater harvesting, and discussing what implications these findings have for mainstreaming such techniques in future practice.

Keywords: institutional framework, planning, policy, project implementation, urban rainwater management

Procedia PDF Downloads 287
106 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies, forced personal initiative to step out of unemployment, and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, roughly 16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite this large number of persons living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty, apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them struggle in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project "RESCuE", resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers, representatives of welfare state agencies, NGOs, and charity organizations, as well as 25 household interviews, were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews with subsequent participatory photo interviews, in which the household members were asked to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families' and households' everyday living circumstances, including their poverty and employment situation, whether formal or informal.
Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially in the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as the photos picture typical practices such as cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices: they help ease the financial burden through reduced consumption, and they strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to contribute to easing financial hardship, e.g., through the use of commons, barter trade, or simple mutual (gift) exchange. These practices can substitute for external purchases and reduce expenses, or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty in the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstressing themselves. Thus, both the potentials and the risks of resilient household practices are reflected upon in the presentation.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 221
105 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, especially those caused by soil sealing and the grey-energy consumption related to the use of construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing of empty-nest households seems worth investigating for its potential to free living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs if no space adjustment takes place in the household lifecycle stage when children move out from home, so that the space formerly created for the offspring is from then on under-utilized. Although in some cases this housing space consumption might actually meet households' equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed in a way that would allow freed-up space to be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty-nest households and a comparison group of households that never had children.
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable, living space, on a categorical variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that do move after the children have left home, despite the market frictions impairing relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that, in a perfect market with empty-nest households consuming their equilibrium demand for housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalent units related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
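The estimation strategy described above can be sketched as follows; this is a minimal illustration with synthetic data and a hypothetical control variable, not the study's survey variables or estimates:

```python
import numpy as np

# Illustrative sketch of the described approach: regress living space on a
# group indicator (empty nest vs. never had children) plus one control.
# All numbers are synthetic; the coefficient on the indicator estimates the
# average living-space difference, i.e. the assumed under-utilized space.
rng = np.random.default_rng(0)
n = 200
empty_nest = np.repeat([1, 0], n // 2)    # group indicator (1 = empty nest)
income = rng.normal(3000, 500, n)         # a hypothetical control variable
space = 70 + 25 * empty_nest + 0.005 * income + rng.normal(0, 5, n)

# Ordinary least squares via the normal equations (lstsq)
X = np.column_stack([np.ones(n), empty_nest, income])
beta, *_ = np.linalg.lstsq(X, space, rcond=None)
gap_m2 = beta[1]  # estimated extra living space of empty-nest households
```

The true group difference here is 25 m², so the estimate should land close to that; extrapolating such a coefficient to the population count of empty nests mirrors the paper's second step.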

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 171
104 Genetic Diversity of Norovirus Strains in Outpatient Children from Rural Communities of Vhembe District, South Africa, 2014-2015

Authors: Jean Pierre Kabue, Emma Meader, Afsatou Ndama Traore, Paul R. Hunter, Natasha Potgieter

Abstract:

Norovirus is now considered the most common cause of outbreaks of non-bacterial gastroenteritis. Limited data are available for norovirus strains in Africa, especially in rural and peri-urban areas: despite the excessive burden of diarrheal disease in developing countries, norovirus infections have to date mostly been reported in developed countries. There is a need to investigate intensively the role of viral agents associated with diarrhea in different settings on the African continent. To determine the prevalence and genetic diversity of norovirus strains circulating in rural communities in the Limpopo Province, South Africa, and to investigate the genetic relationships between norovirus strains, a cross-sectional study was performed on human stools collected from rural communities. Between July 2014 and April 2015, outpatient children under 5 years of age from rural communities of the Vhembe District, South Africa, were enrolled in the study. A total of 303 stool specimens were collected from children with diarrhea (n=253) and without diarrhea (n=50). NoVs were identified using real-time one-step RT-PCR, partial sequence analyses were performed to genotype the strains, and phylogenetic analyses were performed to compare the identified NoV genotypes to strains circulating worldwide. The norovirus detection rate was 41.1% (104/253) in children with diarrhea. There was no significant difference (OR=1.24; 95% CI 0.66-2.33) in norovirus detection between symptomatic and asymptomatic children. Comparison of the median Ct values for NoV in children with and without diarrhea revealed a statistically significant difference in estimated GII viral load between the two groups, with a much higher viral burden in children with diarrhea. To our knowledge, this is the first study reporting on differences in the estimated viral load of GII and GI NoV between positive cases and controls.
GII.Pe (n=9) was the predominant genotype, followed by the suspected recombinant GII.Pe/GII.4 Sydney 2012 (n=8) and GII.4 Sydney 2012 variants (n=7). Two unassigned GII.4 variants and an unusual RdRp genotype, GII.P15, were found. Of note, the rare GII.P15 identified in this study has a common ancestor with a GII.P15 strain from Japan previously reported as a GII/untypeable recombinant strain implicated in a gastroenteritis outbreak. To our knowledge, this is the first report of this unusual genotype on the African continent. Though not confirmed as predictive of diarrheal disease in this study, the high detection rate of NoV is an indication of ongoing exposure of children from rural communities to enteric pathogens due to poor sanitation and hygiene practices. The results reveal that the difference between asymptomatic and symptomatic children with NoV may be related to the NoV genogroups involved. The findings emphasize NoV genetic diversity and the predominance of GII.Pe/GII.4 Sydney 2012, indicative of increased NoV activity. An uncommon GII.P15 and two unassigned GII.4 variants were also identified in rural settings of the Vhembe District, South Africa. NoV surveillance is required to help inform investigations into NoV evolution and to support vaccine development programmes in Africa.
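The reported case-control comparison can be reproduced from a 2x2 table; a minimal sketch, noting that the abstract gives the positives among cases (104/253) but not among controls, so the control count of 18/50 is an illustrative assumption chosen to be consistent with the reported OR of 1.24 (95% CI 0.66-2.33):

```python
import math

# 2x2 table for NoV detection; the asymptomatic positive count (18) is an
# assumed, illustrative value, NOT reported in the abstract.
a, b = 104, 253 - 104   # NoV-positive / negative among children with diarrhea
c, d = 18, 50 - 18      # assumed positive / negative among controls

# Odds ratio: odds of detection in cases over odds in controls
odds_ratio = (a * d) / (b * c)

# Woolf's logit-based 95% confidence interval
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

With these counts the computation returns OR = 1.24 with a 95% CI of 0.66 to 2.33, matching the abstract; a CI spanning 1 is why the difference is reported as non-significant.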

Keywords: asymptomatic, common, outpatients, norovirus genetic diversity, sporadic gastroenteritis, South African rural communities, symptomatic

Procedia PDF Downloads 195
103 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology

Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab

Abstract:

Background: Bio-fabrication is a multidisciplinary research field that combines principles, fabrication techniques, and protocols from several fields. The open-source movement supports the use of open-source licenses for some or all software (and hardware) as part of the broader notion of open collaboration, with the aim of providing tools that are cheaper, more reliable, and of better quality. Additive manufacturing (AM), the concept behind 3D printing, is a manufacturing method in which objects are built layer by layer from computer-aided designs (CAD). The various AM systems in use can be categorized by the type of process employed. One of these AM technologies is digital light processing (DLP), a 3D printing technology that rapidly cures a photopolymer resin to create hard scaffolds; DLP uses a projected light source to cure (harden or crosslink) an entire layer at once. Current applications of DLP are focused on dental and medical uses, and further developments in this field have led to the revolutionary field of 3D bioprinting. Objective: To modify a desktop 3D printer into a 3D bioprinter and to integrate DLP technology with bio-fabrication to produce an antibacterial dental model. Method: A desktop 3D printer was modified into a 3D bioprinter. Gelatin and sodium alginate hydrogels were prepared at different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and extracts were prepared at different levels (powder, aqueous extracts, total oils, and essential oils) for antibacterial assessment. The agar well diffusion method with E. coli was used to perform the sensitivity test for the antibacterial activity of the extracts obtained from Zingiber officinale, Syzygium aromaticum, and Allium sativum.
Lastly, DLP printing was performed to produce several dental models in which the natural extracts were combined with hydrogel to represent and simulate hard and soft tissues. Results: The desktop 3D printer was modified into a 3D bioprinter using the open-source Marlin firmware and custom-made 3D-printed parts. Sodium alginate and gelatin hydrogels were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). The resin was combined with the natural extracts of Zingiber officinale rhizomes, Syzygium aromaticum flower buds, and Allium sativum bulbs at 1-3% for each extract. Finally, the antimicrobial dental model, exhibiting antimicrobial activity, was printed and then merged with the sodium alginate hydrogel. Conclusion: The open-source movement was successful in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this area. Additionally, integrating DLP technology with bioprinting is a promising step toward harnessing the antimicrobial activity of natural products.
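As a small aside on the hydrogel concentrations above, a %(w/v) figure translates directly into grams of solute per final volume; a trivial helper, with illustrative volumes that are not taken from the study:

```python
# Minimal helper for w/v hydrogel preparation: an X %(w/v) solution
# contains X grams of solute per 100 mL of final volume. Volumes below
# are illustrative examples only.
def grams_for_wv(percent_wv: float, volume_ml: float) -> float:
    """Grams of powder needed for a given %(w/v) and final volume."""
    return percent_wv / 100.0 * volume_ml

# e.g., a 10 %(w/v) sodium alginate gel in 250 mL needs 25 g of alginate
alginate_g = grams_for_wv(10, 250)
```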

Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, zingiber officinale, syzygium aromaticum, allium sativum, panax ginseng, dental applications

Procedia PDF Downloads 93
102 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste

Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton

Abstract:

Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This led to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the ongoing pharmaceutical stock-outs? The study examined factors that influence the pharmaceutical supply chain system. Methodology: The Privett and Goncalvez dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted in order to collect information on inventory management, the logistics management information system (LMIS), and the provision of pharmaceuticals. Andersen's behavioural model of healthcare utilization also informed the interview schedule, specifically factors linked to the environment (the healthcare system and external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency, and a local non-governmental organization were interviewed on their perceptions of the factors (healthcare system/supply chain and wider environment) impacting stock-outs. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon's critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties in inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and lack of capacity in inventory management, monitoring, and supervision are additional organizational factors that contributed to the issue.
Various methods of pharmaceutical quantification were applied in the government sector and by non-governmental organizations, and a lack of reliable data is one of the major problems in pharmaceutical provision. The Global Fund has the best quantification methods, fed by consumption data and malaria case numbers. Other issues worsen stock-outs: political intervention, work ethic, and basic infrastructure such as unreliable internet connectivity. Major issues impacting pharmaceutical quantification have thus been identified. However, the data collected so far revealed limitations within the Andersen model, specifically a failure to take account of predictors in the healthcare system and the environment (culture, politics, social factors). The next steps are to (a) compare the models used by three non-governmental agencies with the government model; (b) run the Andersen explanatory model of pharmaceutical expenditure for 2 to 5 drug items used by these three development partners in order to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw conclusions about the strengths of each approach.
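For illustration, the consumption-based quantification that data-driven programs such as the Global Fund's rely on can be sketched as below. The formula is the standard consumption method (need = average monthly consumption over the procurement period, plus safety stock, minus usable stock on hand), assumed here as a generic reference rather than quoted from this study, and all figures are made up:

```python
# Hedged sketch of the standard consumption-based quantification method
# for one pharmaceutical item. AMC = average monthly consumption.
# All figures are illustrative, not Timor-Leste data.
def order_quantity(amc: float, months: int, safety_stock: float,
                   stock_on_hand: float) -> float:
    """Consumption-method forecast: projected need minus usable stock."""
    need = amc * months + safety_stock - stock_on_hand
    return max(need, 0.0)  # never order a negative quantity

# e.g., 1200 units/month over a 6-month procurement period,
# with a 2-month safety stock and 1500 units already on hand
qty = order_quantity(amc=1200, months=6, safety_stock=2400, stock_on_hand=1500)
```

The method's dependence on reliable consumption records is exactly why the weak LMIS and data gaps described above undermine quantification.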

Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management

Procedia PDF Downloads 244
101 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals' method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. The critical parameters for maximizing UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest possible speed without regard to the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while keeping the energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force-generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely to maximize speed; and (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis.
For the kinematic acquisition, the position of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100m pace, 200m pace and 400m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and applied to each segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
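The abstract does not state which efficiency definition is used; a common choice consistent with "resistive forces" and "power input of the modeled swimmer" is a Froude-type propulsive efficiency (useful thrust power over total mechanical power). A minimal sketch under that assumption, with entirely hypothetical force and power values:

```python
def froude_efficiency(mean_thrust_n, swim_speed_ms, mean_power_w):
    """Froude-type propulsive efficiency: useful power (thrust x speed)
    divided by the total mechanical power input by the swimmer."""
    return (mean_thrust_n * swim_speed_ms) / mean_power_w

# Hypothetical values: 40 N mean thrust at 2.0 m/s for 200 W of input power.
eta = froude_efficiency(40.0, 2.0, 200.0)  # -> 0.4
```

Comparing this ratio across the maximum-effort, 100m-, 200m- and 400m-pace datasets is one way the speed/cost trade-off described above could be quantified.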

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 219
100 Environmental Impacts Assessment of Power Generation via Biomass Gasification Systems: Life Cycle Analysis (LCA) Approach for Tars Release

Authors: Grâce Chidikofan, François Pinta, A. Benoist, G. Volle, J. Valette

Abstract:

Statement of the Problem: biomass gasification systems may be relevant for decentralized power generation from recoverable agricultural and wood residues available in rural areas. In recent years, many systems have been implemented all over the world, especially in Cambodia and India. Although they have many positive effects, these systems can also affect the environment and human health. Indeed, during the process of biomass gasification, black wastewater containing tars is produced and generally discharged into the local environment, either into rivers or onto soil. However, in most environmental assessment studies of biomass gasification systems, the impact of these releases is underestimated, due to the difficulty of identifying their chemical substances. This work deals with the analysis of the environmental impacts of tars from wood gasification in terms of human toxicity cancer effect, human toxicity non-cancer effect, and freshwater ecotoxicity. Methodology: A Life Cycle Assessment (LCA) approach was adopted. The inventory of tar chemical substances was based on experimental data from a downdraft gasification system. Six samples were analyzed from two batches of raw materials: one batch made of three wood species (oak + plane tree + pine) at 25% moisture content and the second batch made of oak at 11% moisture content. The tests were carried out at different gasifier load rates, respectively in the ranges 50-75% and 50-100%. To choose the environmental impact assessment method, we compared the methods available in the SIMAPRO tool (8.2.0) that take into account most of the chemical substances. The environmental impacts for 1 kg of tars discharged were characterized by the ILCD 2011+ method (V.1.08). Findings: Experimental results revealed 38 important chemical substances in varying proportions from one test to another. Only 30 are characterized by the ILCD 2011+ method, which is one of the best performing methods.
The results show that wood species and moisture content have no significant impact on human toxicity non-cancer effect (HTNCE) and freshwater ecotoxicity (FWE) for release into water. For human toxicity cancer effect (HTCE), a small gap is observed between the impact factors of the two batches, i.e., 3.08E-7 CTUh/kg against 6.58E-7 CTUh/kg. On the other hand, it was found that the risk of negative effects is higher in the case of tar release into water than onto soil for all impact categories. Indeed, considering the set of samples, the average impact factor obtained for HTNCE varies from 1.64E-7 to 1.60E-8 CTUh/kg. For HTCE, the impact factor varies between 4.83E-07 CTUh/kg and 2.43E-08 CTUh/kg. The variability of these impact factors is relatively low for these two impact categories. Concerning FWE, the variability of the impact factor is very high: 1.3E+03 CTUe/kg for tar release into water against 2.01E+01 CTUe/kg for tar release onto soil. Concluding statement: The results of this study show that the environmental impacts of tar emissions from biomass gasification systems can be considerable, and it is important to investigate ways to reduce them. For environmental research, these results represent an important step in a global environmental assessment of the studied systems. They could be used to better manage wastewater containing tars and to reduce, as far as possible, the impacts of the numerous systems still running all over the world.
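The characterization step described above amounts to multiplying the inventoried mass of each substance per kg of tar by its characterization factor (e.g., in CTUh/kg) and summing over the substances the method covers. A minimal sketch; the substance names and factor values below are illustrative assumptions, not the study's data:

```python
def characterized_impact(composition_kg_per_kg, char_factors):
    """Sum of (mass of each substance per kg of tar) x (characterization
    factor, e.g. CTUh/kg). Substances without a factor (the 8 of 38 not
    covered by the method) contribute nothing."""
    return sum(mass * char_factors.get(sub, 0.0)
               for sub, mass in composition_kg_per_kg.items())

# Hypothetical tar composition and ILCD-style factors (not the study's data):
composition = {"phenol": 0.02, "naphthalene": 0.05, "uncharacterized": 0.01}
factors = {"phenol": 1.0e-6, "naphthalene": 5.0e-6}  # CTUh per kg of substance
impact = characterized_impact(composition, factors)  # CTUh per kg of tar
```

This also makes the stated limitation concrete: the 8 substances without characterization factors silently drop out of the total.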

Keywords: biomass gasification, life cycle analysis, LCA, environmental impact, tars

Procedia PDF Downloads 280
99 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size

Authors: Natalie Hoyer

Abstract:

Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings, according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993, increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national appendix of DIN EN 1993-2. Furthermore, compliance with DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin. Since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards that enables an adequate choice of steel materials for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at the previously defined hotspot using spark erosion. Then, a dynamic load is applied until crack initiation occurs, achieving realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack length reaches a predetermined size. Afterward, the actual test begins, which requires cooling the specimen with liquid nitrogen until a temperature is reached at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until it fails by brittle fracture. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.
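The abstract does not give the fracture-mechanics model behind the initial crack size; the standard linear-elastic relation K_Ic = Y·σ·√(πa) is the usual starting point for such an estimate. A minimal sketch under that assumption, with hypothetical toughness and stress values:

```python
import math

def critical_crack_size_m(k_ic_mpa_sqrt_m, stress_mpa, geometry_factor=1.0):
    """Critical crack depth from linear-elastic fracture mechanics:
    K_Ic = Y * sigma * sqrt(pi * a)  ->  a = (K_Ic / (Y * sigma))^2 / pi."""
    return (k_ic_mpa_sqrt_m / (geometry_factor * stress_mpa)) ** 2 / math.pi

# Hypothetical values: K_Ic = 50 MPa*sqrt(m), applied stress 200 MPa, Y = 1.
a_crit = critical_crack_size_m(50.0, 200.0)  # critical depth in metres
```

The component tests described above serve precisely to check such analytical estimates against the size of the spark-eroded and fatigue-sharpened starter crack.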

Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests

Procedia PDF Downloads 44
98 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators

Authors: N. Naz, A. D. Domenico, M. N. Huda

Abstract:

Soft robotics is a rapidly growing multidisciplinary field where robots are fabricated using highly deformable materials, motivated by bioinspired designs. The high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the soft pneumatic actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, where the mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in direct 3D printing of different soft materials can significantly reduce the repetitive manual tasks, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the soft pneumatic actuator (SPA) utilizing both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic wire. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated using a lab oven. The bending behaviour is observed experimentally at different compressor air pressures to ensure uniform bending without any failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM model includes large deformations, contact between surfaces, and gravity influences. For mesh generation, quadratic tetrahedron, hybrid, and constant pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary condition is applied on the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A comparison study based on performance analysis is done between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
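The Yeoh 2nd-order model mentioned above has the strain energy density W = C10·(I1−3) + C20·(I1−3)², with I1 the first strain invariant. A minimal sketch for the incompressible uniaxial case; the material coefficients below are hypothetical, not fitted Ecoflex values:

```python
def yeoh2_strain_energy(stretch, c10, c20):
    """Yeoh 2nd-order strain energy density W = C10*(I1-3) + C20*(I1-3)^2
    for an incompressible material under uniaxial stretch, where the first
    invariant is I1 = stretch^2 + 2/stretch."""
    i1 = stretch ** 2 + 2.0 / stretch
    return c10 * (i1 - 3.0) + c20 * (i1 - 3.0) ** 2

# Hypothetical coefficients (MPa); the undeformed state stores no energy.
w0 = yeoh2_strain_energy(1.0, 0.02, 0.001)  # -> 0.0
w2 = yeoh2_strain_energy(2.0, 0.02, 0.001)  # energy at 100% stretch
```

The quadratic (I1−3)² term is what lets the model capture the stiffening of elastomers at large strains, which matters for the large bending deformations simulated here.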

Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator

Procedia PDF Downloads 90
97 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label’s binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement. However, its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turn-around time of 30 minutes per sample limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify chemical components label-free and specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds.
With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore has the potential to evaluate a chemical’s concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and nicotinamide adenine dinucleotide oxidized form (NAD+) to acetaldehyde and nicotinamide adenine dinucleotide reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with that of a conventional method, UV-Vis spectroscopy. CARS is therefore expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
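Reading a concentration off the 1660 cm⁻¹ signal implies a calibration against NADH standards; the abstract does not describe this step, so the sketch below assumes a simple linear calibration with made-up standard data, not the authors' values:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for calibration standards."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def concentration(signal, slope, intercept):
    """Invert the calibration line to read a concentration off a CARS signal."""
    return (signal - intercept) / slope

# Hypothetical calibration: NADH standards (mM) vs. background-corrected signal.
slope, intercept = fit_line([0.0, 5.0, 10.0], [1.0, 11.0, 21.0])
c = concentration(16.0, slope, intercept)  # -> 7.5 (mM)
```

With a 0.33 s readout per point, repeating this inversion over a time series yields the reaction time course described above.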

Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 141
96 InAs/GaSb Superlattice Photodiode Array ns-Response

Authors: Utpal Das, Sona Das

Abstract:

InAs/GaSb type-II superlattice (T2SL) mid-wave infrared (MWIR) focal plane arrays (FPAs) have recently seen rapid development. However, in small-pixel, large-format FPAs, the occurrence of high mesa sidewall surface leakage current is a major constraint necessitating proper surface passivation. A simple pixel isolation technique in InAs/GaSb T2SL detector arrays without the conventional mesa etching has been proposed, isolating the pixels by forming a more resistive, higher band gap material from the SL in the inter-pixel region. Here, a single-step femtosecond (fs) laser anneal of the inter-pixel T2SL regions has been used to increase the band gap between the pixels by quantum well (QW) intermixing and hence increase the isolation between pixels. The p-i-n photodiode structure used here consists of a 506nm, (10 monolayer {ML}) InAs:Si (1x10¹⁸cm⁻³)/(10ML) GaSb SL as the bottom n-contact layer grown on an n-type GaSb substrate. The undoped absorber layer consists of 1.3µm, (10ML)InAs/(10ML)GaSb SL. The top p-contact layer is a 63nm, (10ML)InAs:Be(1x10¹⁸cm⁻³)/(10ML)GaSb T2SL. In order to improve the carrier transport, a 126nm graded doped (10ML)InAs/(10ML)GaSb SL layer was added between the absorber and each contact layer. A 775nm 150fs laser at a fluence of ~6mJ/cm² is used to expose the array, where the pixel regions are masked by a Ti(200nm)-Au(300nm) cap. Here, in the inter-pixel regions, the p+ layer has been reactive ion etched (RIE) using CH₄+H₂ chemistry and removed before fs-laser exposure. The fs-laser anneal isolation improvement in 200-400μm pixels due to spatially selective quantum well intermixing, giving a blue shift of ~70meV in the inter-pixel regions, is confirmed by FTIR measurements. Dark currents are measured between two adjacent pixels with the Ti(200nm)-Au(300nm) caps used as contacts. The T2SL quality in the active photodiode regions masked by the Ti-Au cap is hardly affected and retains the original quality of the detector.
Although fs-laser annealing of p+-only-etched p-i-n T2SL diodes shows a reduction in the reverse dark current, no significant improvement in the fully RIE-etched mesa structures is noticeable. Hence, for the fabrication of a 128x128 array of 8μm square pixels at 10µm pitch, SU8 polymer isolation after RIE pixel delineation has been used. X-n+ row contacts and Y-p+ column contacts have been used to measure the optical response of the individual pixels. The photo-response of these 8μm and other 200μm pixels under a 2ns optical pulse excitation from an Optical Parametric Oscillator (OPO) shows a peak responsivity of ~0.03A/W and 0.2mA/W, respectively, at λ~3.7μm. The temporal response of this detector array shows a fast component of ~10ns followed by a typical slow decay with ringing, attributed to the impedance mismatch of the connecting coaxial cables. In conclusion, response times of a few ns have been measured in the 8µm pixels of a 128x128 array. Although fs-laser annealing has been found to be useful in increasing the inter-pixel isolation in InAs/GaSb T2SL arrays by QW intermixing, it has not been found suitable for passivation of fully RIE-etched mesa structures with vertical walls on InAs/GaSb T2SL.
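The ~70 meV blue shift confirmed by FTIR maps directly onto a shorter cutoff wavelength via E = hc/λ. A minimal sketch of that conversion; the 4.0 µm starting cutoff below is a hypothetical MWIR value, not a figure from the abstract:

```python
HC_EV_UM = 1.23984  # h*c in eV*micrometers

def blue_shifted_cutoff_um(cutoff_um, shift_mev):
    """Cutoff wavelength after a band-gap blue shift:
    E' = hc/lambda + dE, then lambda' = hc / E'."""
    return HC_EV_UM / (HC_EV_UM / cutoff_um + shift_mev / 1000.0)

# Hypothetical MWIR cutoff of 4.0 um shifted by the measured ~70 meV:
new_cutoff = blue_shifted_cutoff_um(4.0, 70.0)  # ~3.26 um
```

The widened inter-pixel gap means those regions no longer absorb at the pixels' operating wavelength, which is the mechanism behind the increased isolation.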

Keywords: band-gap blue-shift, fs-laser anneal, InAs/GaSb T2SL, inter-pixel isolation, ns-response, photodiode array

Procedia PDF Downloads 151
95 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)

Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram

Abstract:

Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly symbolized as CH₃NH₃PbI₃₋ₓClₓ. Some research groups are using lead-free (Sn replaces Pb) and mixed halide perovskites for the fabrication of devices. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates into a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier for device fabrication. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer that can travel through the hole and electron transport layers and the interfaces in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/Perovskite/PCBM/Al via a two-step spin coating method, also called the sequential deposition method. A small amount of the cheap additive H₂O was added into PbI₂/DMF to make a homogeneous solution. We prepared four different solutions (without H₂O, 1% H₂O, 2% H₂O, 3% H₂O). After preparation, overnight stirring at 60℃ is essential for homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated at once at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methyl ammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film.
After obtaining the desired films, the substrates were rinsed with isopropanol to remove the excess methyl ammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhancement of the photovoltaic performance. There are two important parameters for the selection of additives: (a) a higher boiling point than the host material and (b) good interaction with the precursor materials. We observed that the morphology of the films was improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only with the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We concluded that with the addition of additives to the precursor solutions, one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the thickness of the perovskite film is in the µm range, whereas the charge diffusion length in PbI₂ is in the nm range. Therefore, by controlling the thickness using other deposition methods for the fabrication of solar cells, better efficiency can be achieved.

Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition

Procedia PDF Downloads 246
94 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found with Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters, using the 52’000 digital posters as a training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the closest proximity to the previous poster is selected, where the mean square distance between poster features is used to compute the proximity.
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. The manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing than random sampling, and our algorithm is sometimes able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.
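The final selection step described above can be sketched in a few lines: within the cluster predicted by the RNN, pick the poster whose autoencoder features are closest (by mean square distance) to the previously displayed poster. The feature vectors and poster identifiers below are hypothetical:

```python
def mean_square_distance(a, b):
    """Mean squared difference between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pick_next_poster(prev_features, cluster):
    """Within the RNN-predicted cluster (a mapping of poster id to feature
    vector), pick the poster closest to the previously displayed one."""
    return min(cluster, key=lambda pid: mean_square_distance(prev_features, cluster[pid]))

# Hypothetical 3-D feature vectors for posters in the predicted cluster:
cluster = {"poster_a": [0.9, 0.1, 0.4], "poster_b": [0.2, 0.1, 0.5]}
choice = pick_next_poster([0.1, 0.1, 0.5], cluster)  # -> "poster_b"
```

Splitting the decision this way (RNN chooses the cluster, nearest-neighbour search chooses the poster) keeps the sequence thematically coherent while still drawing from the full 52’000-poster collection.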

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 184
93 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform measurements and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing; most alarming is the number of young people appearing in the statistics. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems, and as a result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field left uncovered by thorough research: leisure time. Public utility facilities such as clubs, shopping malls, sport facilities or concert halls all generate high-level noise while remaining outside proper legal control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people with little knowledge of hearing protection, unaware of the risk that noise pollution poses. Exposure to high sound levels in clubs, cinemas, at concerts and sports events may result in progressive hearing loss, especially among young people, the main target group of such facilities and events. The first step to change this situation and to raise general awareness is to perform reliable measurements whose results will emphasize the significance of the problem. This project presents the results of more than a hundred measurements, performed in most types of public utility facilities in Poland.
As the most suitable measuring instruments for such research, personal noise dosimeters were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description covering the type of sound source, the size and furnishing of the room, and a subjective sound level evaluation. In the absence of a direct reference point for the interpretation of the data, the limits specified in EU Directive 2003/10/EC were used for comparison; they set the maximum sound level values for workers in relation to the length of their working time. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people underplay them, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on society as a whole.
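Comparing a dosimeter reading against the Directive 2003/10/EC limits means normalizing the measured equivalent level to a nominal 8-hour day, L_EX,8h = L_Aeq,T + 10·log10(T/8). A minimal sketch of that normalization; the 98 dB(A) / 4 h example is a hypothetical club visit, not one of the study's measurements:

```python
import math

def daily_exposure_level_db(leq_db, exposure_hours):
    """Daily noise exposure level normalized to an 8-hour working day,
    L_EX,8h = L_Aeq,T + 10*log10(T / 8), as used in Directive 2003/10/EC."""
    return leq_db + 10.0 * math.log10(exposure_hours / 8.0)

# Hypothetical club visit: 98 dB(A) equivalent level sustained over 4 hours.
lex = daily_exposure_level_db(98.0, 4.0)  # ~95 dB(A)
```

Even after the halved duration trims 3 dB off the reading, such an evening lands well above the Directive's 87 dB(A) exposure limit value for workers, which is the kind of comparison the study draws.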

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 223
92 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies, and the business of rating organisations on their environmental, social and governance (ESG) performance has grown tremendously in public interest. ESG data is now commonly integrated into traditional investment analysis and is an important factor in investment decisions. Creating these metrics, however, is inherently challenging, as environmental and social impacts are hard to measure and uniform requirements on ESG reporting are lacking. ESG metrics are often incomplete and inconsistent, as they lack fully accepted reporting standards and are often of a qualitative nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, the study adopts the Y02 patent classification, a fully cross-sectional tagging scheme incorporated in the Cooperative Patent Classification (CPC), to identify climate change mitigation and adaptation technologies. The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides a means to examine technologies in the field of climate change mitigation and adaptation across the relevant technology areas. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents.
Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, as well as its backward citations and the size of the focal patent family. Third, we conduct our analysis on a firm level, by sector, for a sample of companies from different industries and compare the derived sustainability performance metrics with the firms’ environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is the development of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and relating it to conventional financial performance metrics. We further provide insights into the environmental performance of companies at a sector level. This study has both academic and practical implications. Academically, it contributes to research on eco-innovation and to the literature on innovation and intellectual property (IP). Practically, the study has implications for policymakers by deriving meaningful insights into environmental performance from an innovation and IP perspective. Such metrics are further relevant for investors and potentially complement existing ESG data.
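Steps two and three above amount to scoring each Y02-tagged patent on analytics indicators and aggregating to the firm level. The abstract does not specify the aggregation, so the sketch below is purely illustrative: the field names, the linear weighting, and the simple average are all assumptions, not the authors' metric:

```python
def patent_score(forward_citations, family_size, w_cit=1.0, w_fam=0.5):
    """Illustrative patent-level strength score from forward citations and
    patent family size; the weights are arbitrary assumptions."""
    return w_cit * forward_citations + w_fam * family_size

def portfolio_score(patents):
    """Average patent-level score over a firm's CCMAT patent portfolio."""
    return sum(patent_score(p["fwd_cites"], p["family"]) for p in patents) / len(patents)

# Hypothetical firm portfolio of two Y02-tagged patents:
portfolio = [{"fwd_cites": 10, "family": 4}, {"fwd_cites": 2, "family": 2}]
score = portfolio_score(portfolio)  # -> 7.5
```

Such a firm-level number is what would then be set against carbon emissions and revenue data in the cross-sector comparison.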

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 83
91 Prospects of Low Immune Response Transplants Based on Acellular Organ Scaffolds

Authors: Inna Kornienko, Svetlana Guryeva, Anatoly Shekhter, Elena Petersen

Abstract:

Transplantation is an effective treatment option for patients suffering from different end-stage diseases. However, it is plagued by a constant shortage of donor organs and by the patient's subsequent need for lifelong immunosuppressive therapy. Currently, some researchers look towards using pig organs to replace human organs for transplantation, since the matrix derived from porcine organs is a convenient substitute for the human matrix. As an initial step towards a new ex vivo tissue-engineered model, optimized protocols were developed to obtain organ-specific acellular matrices, and their potential as tissue-engineered scaffolds for the culture of normal cells and tumor cell lines was evaluated. These protocols include decellularization by perfusion in a bioreactor system and by immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) and freezing. Complete decellularization, in terms of the residual DNA amount, is an important predictor of the probability of immune rejection of materials of natural origin. However, traces of cellular material may still remain within the matrix even after harsh decellularization protocols. In this regard, matrices obtained from tissues of low-immunogenic pigs with the α-1,3-galactosyltransferase gene knocked out (GalT-KO) may be a promising alternative to native animal sources. The research included a study of the effect of frozen and fresh fragments of GalT-KO skin on the healing of full-thickness plane wounds in 80 rats. Commercially available wound dressings (Ksenoderm, Hyamatrix, and Alloderm) as well as allogenic skin were used as positive controls, and untreated wounds were analyzed as a negative control. The results were evaluated on the 4th day after grafting, which corresponds to the start of normal wound epithelization. It has been shown that the non-specific immune response in models treated with GalT-KO pig skin was milder than in all the control groups. 
Measurements were also performed to characterize the technical skin properties: stiffness and elasticity, corneometry, tewametry, and cutometry. These metrics enabled the evaluation of the hydration level and the degree of corneous-layer desquamation, as well as skin elasticity and the micro- and macro-landscape. These preliminary data may contribute to the development of personalized transplantable organs from GalT-KO pigs with a significantly limited potential for immune rejection. By applying growth factors to a decellularized skin sample, it is possible to achieve various regenerative effects tailored to the particular situation. In this research, BMP-2 and heparin-binding EGF-like growth factor were used. Ideally, a bioengineered organ must be biocompatible, non-immunogenic, and able to support cell growth. Porcine organs are attractive for xenotransplantation if the severe immunologic concerns can be bypassed. The results indicate that genetically modified pig tissues with the α-1,3-galactosyltransferase gene knocked out may be used for the production of a low-immunogenic matrix suitable for transplantation.

Keywords: decellularization, low-immunogenic, matrix, scaffolds, transplants

Procedia PDF Downloads 275
90 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass losses. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these losses by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each pair of control volumes. 
These parameters are linked to the system evolution via empirical relations derived from the different operating regimes of the tank. The derivation of these relations is carried out using an inverse method that finds the optimal relations allowing the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to this data-driven assimilation of the closure relations, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of the different transport phenomena between the laboratory model and the full-size prototype across the different operating regimes.
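The calibration idea (a lumped model whose closure coefficient is identified by fitting the model to measured data) can be illustrated with a toy one-volume example. The energy balance, all numerical values, and the parameter names below are illustrative assumptions for the sketch, not the paper's actual three-volume model; the real inverse step would use a continuous nonlinear optimizer rather than a candidate sweep.

```python
def simulate(h, T0=20.0, T_wall=25.0, mcv=500.0, area=0.5, dt=1.0, steps=100):
    """Explicit-Euler integration of a single lumped energy balance
    m*cv*dT/dt = h*A*(T_wall - T), with h the unknown closure coefficient."""
    T, history = T0, []
    for _ in range(steps):
        T += dt * h * area * (T_wall - T) / mcv
        history.append(T)
    return history

def calibrate(measured, candidates):
    """Inverse step: pick the closure coefficient that minimizes the
    squared error between model prediction and measured temperatures."""
    def cost(h):
        return sum((m - y) ** 2 for m, y in zip(measured, simulate(h)))
    return min(candidates, key=cost)

# Synthetic "measurements" generated with a known coefficient h = 8.0;
# the inverse method should recover it from the candidate set.
measured = simulate(8.0)
best_h = calibrate(measured, [2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
print(best_h)  # -> 8.0
```

With real data, the same structure generalizes: one cost function per operating regime, several closure coefficients optimized jointly, and the resulting empirical relations plugged back into the coupled ODE system.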

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 92