Search results for: simulation code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6147

57 The Role of Virtual Reality in Mediating the Vulnerability of Distant Suffering: Distance, Agency, and the Hierarchies of Human Life

Authors: Z. Xu

Abstract:

Immersive virtual reality (VR) has gained momentum in humanitarian communication due to its utopian promises of co-presence, immediacy, and transcendence. These potential benefits have led the United Nations (UN) to tirelessly produce and distribute VR series to evoke global empathy and encourage policymakers, philanthropic business tycoons, and citizens around the world to actually do something (i.e., give a donation). However, it is unclear whether VR can cultivate cosmopolitans with a sense of social responsibility towards the geographically, socially/culturally, and morally mediated misfortune of faraway others. Drawing upon existing works on the mediation of distant suffering, this article constructs an analytical framework to articulate the issue. Applying this framework to a case study of five of the UN’s VR pieces, the article identifies three paradoxes that exist between cyber-utopian and cyber-dystopian narratives. In the “paradox of distance”, VR relies on the notions of “presence” and “storyliving” to implicitly link audiences spatially and temporally to distant suffering, creating global connectivity and reducing perceived distances between audiences and others; yet it also enables audiences to fully occupy the point of view of distant sufferers (creating too close/absolute proximity), which may lead them into naive self-righteousness or narcissism over their own pleasures and desires, thereby destroying the “proper distance”. In the “paradox of agency”, VR simulates a superficially “real” encounter for visual intimacy, thereby establishing an “audience–beneficiary” relationship in humanitarian communication; yet the mediated hyperreality is not an authentic reality, and its simulation does not fill the gap between reality and the virtual world. In the “paradox of the hierarchies of human life”, VR enables an audience to virtually experience fundamental “freedom”, epitomizing an attitude of cultural relativism that informs a great deal of contemporary multiculturalism and providing vast possibilities for a more egalitarian representation of distant sufferers; yet it also takes the spectator’s personal empathic feelings as the focus of intervention, rather than structural inequality and political exclusion (the economic and political power relations of viewing). Thus, the audience can potentially remain trapped within the minefield of hegemonic humanitarianism. This study is significant in two respects. First, it advances the digital turn in studies of media and morality in the polymedia milieu; it is motivated by the call to move beyond traditional technological environments towards a more nuanced understanding of the asymmetry of power between the safety of spectators and the vulnerability of mediated sufferers. Second, it not only reminds humanitarian journalists and NGOs that they should not rely entirely on the richer news experience or powerful response-ability enabled by VR to gain a “moral bond” with distant sufferers, but also argues that when fully-fledged VR technology is developed, it can serve as a kind of alchemy and should not be dismissed merely as a “bugaboo” of alarmist philosophical and fictional dystopias.

Keywords: audience, cosmopolitan, distant suffering, virtual reality, humanitarian communication

Procedia PDF Downloads 142
56 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor

Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez

Abstract:

Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and thereby increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent software. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds according to microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. From the rheological behavior curves, the substrate is found to be a pseudoplastic non-Newtonian fluid with a solids content of 12%. The simulation incorporates these rheological results and models the full-scale CSTR biodigester, coupling the continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° tilted-blade impeller. The simulation runs for three minutes, since the aim is to study intermittent mixing and its benefit in energy savings. The results show that the absolute errors of the power number for the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from Nagata's analytical-experimental correlation. The generalized Reynolds number indicates that the fluid dynamics lie in a transition-turbulent flow regime. The Froude number indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5. Design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° inclined PBT impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If a 24/7 complete mix under stirred agitation is considered, with a plant factor of 80%, an energy consumption of 51,840 kWh/year is estimated. In contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. The model is thus a feasible tool to predict the energy expenditure of an anaerobic CSTR biodigester. High mixing intensities are recommended at the beginning and end of the joint acetogenesis/methanogenesis phase: high-intensity mixing at the beginning activates the bacteria, and a further increase in agitation towards the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the biodigester bottom.
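
To make the two headline numbers above concrete, the following minimal Python sketch estimates a Metzner-Otto generalized Reynolds number for a power-law (pseudoplastic) fluid and reproduces the intermittent-mixing energy arithmetic; the rheological constants and impeller parameters are illustrative assumptions, not the study's measured values.

```python
rho = 1000.0   # sludge density, kg/m^3 (assumed)
N = 1.0        # impeller speed, rev/s (assumed)
D = 2.0        # impeller diameter, m (assumed)
K = 0.8        # consistency index, Pa.s^n (assumed)
n = 0.5        # flow-behaviour index; n < 1 means pseudoplastic
ks = 11.0      # Metzner-Otto constant, typical for many impellers

gamma_eff = ks * N                    # effective shear rate, 1/s
mu_app = K * gamma_eff ** (n - 1.0)   # apparent viscosity, Pa.s
re_gen = rho * N * D ** 2 / mu_app    # generalized (power-law) Reynolds number
print(f"Re_gen ~ {re_gen:.0f}")

# Intermittent mixing: 3 min on in every 15 min => 20% duty cycle
continuous_kwh = 51840.0              # 24/7 mixing at 80% plant factor (abstract)
duty = 3.0 / 15.0
print(f"intermittent: {continuous_kwh * duty:.0f} kWh/year "
      f"({1 - duty:.0%} saving)")
```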

Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste

Procedia PDF Downloads 113
55 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards of PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data on raw materials and the transportation of construction equipment throughout the construction period, based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed in terms of raw materials but rather of building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database was compiled from existing carbon footprint databases for raw materials and from architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions, using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in a building's carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
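
The dynamic EUI method described above amounts to simple multiplications and additions over end uses; the sketch below illustrates the idea with hypothetical EUI values, efficiency levels and carbon factor (none of them LCBA data).

```python
floor_area_m2 = 10000.0  # gross floor area (assumed)

# end use -> (baseline EUI in kWh/m^2/year, designed efficiency vs baseline)
end_uses = {
    "air_conditioning": (60.0, 0.85),  # hypothetical values
    "lighting":         (25.0, 0.70),
    "equipment":        (30.0, 1.00),
}

annual_kwh = sum(eui * eff * floor_area_m2 for eui, eff in end_uses.values())
carbon_factor = 0.5  # kgCO2e per kWh, grid-dependent placeholder
print(f"use-phase energy: {annual_kwh:,.0f} kWh/year, "
      f"~{annual_kwh * carbon_factor / 1000:,.0f} tCO2e/year")
```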

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 138
54 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km² and 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority (PWA) strategy is to find a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is set to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins - IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme - RRS - to capture the spreading plume). Currently, only phase A is functioning, and nearly 23 Mm³ of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, which forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the SAT (soil aquifer treatment) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the RRS. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations and their functioning schedules (including well groupings). The proposed scenarios were examined using Visual MODFLOW V4.2 to simulate the results. The results of the monitored wells were assessed according to their location relative to the proposed recovery wells (200 m, 500 m and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating an expansion of the plume to this distance. At this rate, given the time required to construct the recovery scheme, the RRS under its original design would fail to capture the plume. Accordingly, many simulations were conducted, leading to three main scenarios that varied the starting dates, the pumping rates and the locations of the recovery wells. Plume expansion and path-lines were extracted from the model to examine how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates; it proposed adding two additional recovery wells beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
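
As a rough companion to the MODFLOW scenarios described above, the sketch below applies the standard analytic capture-zone formulas for a single pumping well in uniform regional flow; all aquifer and well parameters are illustrative assumptions, not values from the study.

```python
import math

Q = 500.0   # recovery well pumping rate, m^3/day (assumed)
b = 30.0    # saturated aquifer thickness, m (assumed)
K = 20.0    # hydraulic conductivity, m/day (assumed)
i = 0.002   # regional hydraulic gradient (assumed)

q = K * i                            # Darcy flux, m/day
width_max = Q / (b * q)              # full capture width far upgradient, m
x_stag = Q / (2 * math.pi * b * q)   # downgradient stagnation distance, m

print(f"capture width ~ {width_max:.0f} m, stagnation point ~ {x_stag:.0f} m")
# If the plume front is wider than the combined capture widths, more wells
# or higher rates are needed -- the trade-off the three scenarios explore.
```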

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 312
53 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to attempt UUS at the greatest possible speed without regard to the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while considering an energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; and (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per body segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the computed data with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
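
A common way to parameterize UUS midline kinematics before handing them to a 2D solver is a travelling wave with an amplitude envelope growing towards the feet; the sketch below illustrates this with assumed coefficients rather than the digitized athlete data used in the study.

```python
import numpy as np

L = 1.9         # body length, m (assumed)
f = 2.0         # undulation frequency, Hz (assumed)
lam = 1.2 * L   # body wavelength, m (assumed)

def midline(t, n=50):
    """Lateral displacement y(x, t) of n points along the body at time t."""
    x = np.linspace(0.0, L, n)
    envelope = 0.02 + 0.10 * (x / L) ** 2   # small at the head, large at the feet
    return x, envelope * np.sin(2 * np.pi * (x / lam - f * t))

# One kick cycle sampled at 20 frames, e.g. as solver input frames
frames = [midline(k / (20 * f)) for k in range(20)]
```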

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 219
52 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has been the subject of few in-depth studies. This work presents a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks parameters describing the transition between models. Our model uses queuing theory parameters to relate these transitions, associating the MAPE-K related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration in the service's ability to handle more requests. The model is then used to horizontally auto-scale time-sensitive services composed of microservices, reevaluating its parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog, where no more requests are assigned to them. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required again. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
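
The core reactive decision loop described above can be sketched as follows; the capacity, headroom and cooldown values are illustrative assumptions, not the paper's calibrated model.

```python
import math

CAPACITY = 40.0    # requests/s one instance handles (assumed)
HEADROOM = 0.7     # keep instances below 70% saturation to bound latency
COOLDOWN = 60.0    # seconds between scaling actions (assumed)

class ReactiveScaler:
    """Monitor-analyze-plan-execute over one metric: the incoming rate."""

    def __init__(self, instances=1):
        self.instances = instances
        self.last_action = float("-inf")

    def step(self, incoming_rate, now):
        target = max(1, math.ceil(incoming_rate / (CAPACITY * HEADROOM)))
        if target != self.instances and now - self.last_action >= COOLDOWN:
            self.instances = target      # execute: allocate/deallocate
            self.last_action = now
        return self.instances

scaler = ReactiveScaler()
for t, rate in enumerate([50, 120, 400, 380, 90]):   # synthetic load samples
    print(t, scaler.step(rate, now=t * 61.0))
```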

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 93
51 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher

Abstract:

The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus give the global behavior a degree of anisotropy according to the initial microstructure. For these reasons, the prediction of the global behavior of materials in relation to their microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity framework, together with the finite element method, is used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is neglected, as its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an ⟨a⟩ Burgers vector and the pyramidal family with a ⟨c+a⟩ Burgers vector. The twinning mechanism of plastic strain is not observed in Ti-6Al-4V and is therefore not considered in the present modeling. Nine representative elementary volumes (REVs) are generated with Voronoi tessellations. For each individual equiaxed grain, its own crystallographic orientation relative to the loading is taken into account. The meshing strategy is optimized so as to eliminate meshing effects while allowing the individual grain size to be calculated. The stress and strain fields are determined at each Gauss point of the mesh elements. A post-treatment is used to calculate the local behavior (in each grain), and then, by appropriate homogenization, the macroscopic behavior is calculated. The developed model is validated by comparing the numerical simulation results with experimental data reported in the literature. The present model is able to predict the global mechanical behavior of the Ti-6Al-4V alloy and to investigate the effects of the microstructural parameters. According to the simulations performed on the generated volumes (REVs), the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and thus an anisotropic macroscopic mechanical behavior. The average grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the average grain size decreases, the yield strength increases according to the Hall-Petch relationship. The grain size distribution gives the strain fields considerable heterogeneity: with increasing grain size, greater scatter in the localization of plastic strain is observed, so that in certain areas the stress concentrations are stronger than in other regions.
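
The Hall-Petch relationship invoked above, σ_y = σ₀ + k_y/√d, is easy to illustrate numerically; the constants below are assumed, literature-order values for titanium alloys, not fitted results from this study.

```python
import math

sigma_0 = 800.0   # friction stress, MPa (assumed)
k_y = 0.4         # Hall-Petch coefficient, MPa*m^0.5 (assumed)

for d_um in (2.0, 5.0, 10.0, 20.0):   # average grain size, micrometres
    sigma_y = sigma_0 + k_y / math.sqrt(d_um * 1e-6)
    print(f"d = {d_um:4.1f} um -> sigma_y ~ {sigma_y:.0f} MPa")
```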

Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy

Procedia PDF Downloads 125
50 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface

Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis

Abstract:

In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, the students designed it through a parametric model in the Grasshopper3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as the snake's ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement to a polymorphic space divider, the use of a coil-type joint was studied. In Grasshopper, the desired motion of the polymorphic surface was simulated by applying a coil to a sinusoidal curve and to a spiral curve. It was important throughout the process that the points corresponding to the nodes of the real object remain constant in number, as well as the distances between them; the elasticity of the construction had to be achieved through a modular movement of the coil and not through some elastic element (material) at the nodes. Using a mesh (repeating coil), the whole construction is transformed into a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network, where each element contributes to its coherence and stability. Depending on the positions of the elements relative to the support level, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at scale (1:3), (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. The filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were picked to function as filling elements, and a different engraved facet was made on each to support their assembly. The nodes have holes through which the rods can pass, and their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved using a laser cutter and attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
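
In place of the Grasshopper definition (which is not reproduced in the abstract), the sketch below shows the same idea in plain Python: a fixed count of nodes sampled along a sinusoidal guide curve with a coil wrapped around it; all dimensions are illustrative, not the prototype's.

```python
import math

N_NODES = 40               # node count stays constant while the curve deforms
HEIGHT = 2.0               # guide curve length, m (illustrative)
AMP, WAVES = 0.10, 2       # sinusoid amplitude (m) and wave count
COIL_R, TURNS = 0.03, 60   # coil radius (m) and total turns

def guide(t):
    """Point on the sinusoidal guide curve, t in [0, 1]."""
    return (AMP * math.sin(2 * math.pi * WAVES * t), 0.0, HEIGHT * t)

def coil(t):
    """Point on the coil wrapped around the guide (offset in x/y only)."""
    gx, gy, gz = guide(t)
    a = 2 * math.pi * TURNS * t
    return (gx + COIL_R * math.cos(a), gy + COIL_R * math.sin(a), gz)

nodes = [guide(k / (N_NODES - 1)) for k in range(N_NODES)]
coil_pts = [coil(k / 599) for k in range(600)]
```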

Keywords: polymorphic, locomotion, sinusoidal curves, parametric

Procedia PDF Downloads 104
49 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of age groups of learners. Serious games engage learners in a real-world type of simulation and can potentially enrich the learning experience. The institutional background of a Bachelor's-level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected upon in a gamified framework. Conveying a consistent view of the vast literature on business management, strategy, marketing and finance in very limited time forces a selection of topics relevant to the industry. Health Technology is a novel and growing industry, with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure using gamification as a tool to learn what matters most in a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its rules of demand and supply balance. New products can be developed with an R&D investment and targeted to market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity, and investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand customer value creation as the source of cash flow. The benefit of gamification is to enrich the learning experience regarding the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted. Improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.
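
One decision cycle of the kind of market game described above might look like the following sketch; the demand-sharing rule and every number in it are invented for illustration and are not the course's actual game engine.

```python
market_demand = 10000   # units this decision cycle (assumed)

firms = {               # price and cost structure set by each steering group
    "TeamA":  {"price": 90.0,  "unit_cost": 55.0, "fixed": 120000.0},
    "TeamB":  {"price": 80.0,  "unit_cost": 60.0, "fixed": 110000.0},
    "Legacy": {"price": 100.0, "unit_cost": 50.0, "fixed": 150000.0},
}

# Cheaper offers win a larger demand share (inverse-price weighting)
weights = {name: 1.0 / f["price"] for name, f in firms.items()}
total_w = sum(weights.values())

for name, f in firms.items():
    units = market_demand * weights[name] / total_w
    revenue = units * f["price"]
    profit = revenue - units * f["unit_cost"] - f["fixed"]  # income statement line
    print(f"{name}: {units:5.0f} units, revenue {revenue:9.0f}, profit {profit:9.0f}")
```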

Keywords: engineering education, integrated curriculum, learning experience, learning outcomes

Procedia PDF Downloads 239
48 The Assessment of Infiltrated Wastewater on the Efficiency of Recovery Reuse and Irrigation Scheme: North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km² and 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority (PWA) strategy is to find a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is set to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins - IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme - RRS - to capture the spreading plume). Currently, only phase A is functioning, and nearly 23 Mm³ of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, which forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the SAT (soil aquifer treatment) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the RRS. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model to simulate the plume and propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations and their functioning schedules (including well groupings). The proposed scenarios were examined using Visual MODFLOW V4.2 to simulate the results. The results of the monitored wells were assessed according to their location relative to the proposed recovery wells (200 m, 500 m, and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating an expansion of the plume to this distance. At this rate, given the time required to construct the recovery scheme, the RRS under its original design would fail to capture the plume. Accordingly, many simulations were conducted, leading to three main scenarios that varied the starting dates, the pumping rates and the locations of the recovery wells. Plume expansion and path-lines were extracted from the model to examine how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates; it proposed adding two additional recovery wells beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.

Keywords: soil aquifer treatment, recovery reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 203
47 Solar Photovoltaic Driven Air-Conditioning for Commercial Buildings: A Case of Botswana

Authors: Taboka Motlhabane, Pradeep Sahoo

Abstract:

The global demand for cooling has grown exponentially over the past century to meet economic development and social needs, accounting for approximately 10% of global electricity consumption. As global temperatures continue to rise, the demand for cooling and for heating, ventilation and air-conditioning (HVAC) equipment is set to rise with it. The increased use of HVAC equipment has significantly contributed to the growth of greenhouse gas (GHG) emissions, which fuel the climate crisis, one of the biggest challenges faced by the current generation. The need to address emissions caused directly by HVAC equipment, and by the electricity generated to meet cooling or heating demand, is ever more pressing. Currently, developed countries account for the largest cooling and heating demand; however, developing countries are anticipated to experience huge population growth over the next 10 years, resulting in a shift in energy demand. Developing countries, which are projected to account for nearly 60% of the world's GDP by 2030, are rapidly building infrastructure and economies to meet their growing needs. Cooling is a very energy-intensive process that can account for 20% to 75% of a building's energy, depending on the building's use. Solar photovoltaic (PV) driven air-conditioning offers a cost-effective alternative for adoption in both residential and non-residential buildings to offset grid electricity, particularly in countries with high irradiation, such as Botswana. This research paper explores the potential of a grid-connected solar photovoltaic vapor-compression air-conditioning system for the Peter Smith Herbarium at the Okavango Research Institute (ORI), University of Botswana campus in Maun, Botswana. The herbarium plays a critical role in the collection and preservation of botanical data dating back over 100 years, with a pristine collection from the Okavango Delta, a UNESCO World Heritage Site, and serves as a reference and research site. Due to its specific needs, the herbarium operates throughout the day and year in an attempt to maintain a constant temperature of 16°C. The herbarium model studied simulates a variable-air-volume HVAC system with a system rating of 30 kW. Simulation results show that the HVAC system accounts for 68.9% of the building's total electricity, at 296,509.60 kWh annually. To offset the grid electricity, a PV system with a nominal power rating of 175.1 kWp, requiring 416 modules and covering an area of 928 m², is used to meet the HVAC system's annual needs. An economic assessment using PVsyst found that an installation priced at average solar PV prices in Botswana totals 787,090.00 BWP, with annual operating costs of 30,500 BWP/year. With self-financing, the project is estimated to recoup its initial investment within 6.7 years. Over an estimated project lifetime of 20 years, the Net Present Value is projected at 1,565,687.00 BWP with an ROI of 198.9%, and 74,070.67 tons of CO₂ saved by the end of the project lifetime. This study investigates the performance of the HVAC system in meeting indoor air comfort requirements and the annual performance of the PV system; the building model was simulated using DesignBuilder software.
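
The payback, NPV and ROI figures above are reproducible with simple cash-flow arithmetic; in the sketch below the grid tariff (~0.50 BWP/kWh) and the 0% discount rate are assumptions chosen because they make the arithmetic land near the reported figures.

```python
capex = 787_090.00        # BWP, installed cost (from abstract)
opex = 30_500.00          # BWP/year operating cost (from abstract)
offset_kwh = 296_509.60   # kWh/year of grid electricity offset (from abstract)
tariff = 0.50             # BWP/kWh, assumed grid tariff
years = 20                # project lifetime (from abstract)

net_saving = offset_kwh * tariff - opex   # BWP/year
payback = capex / net_saving              # simple payback, years
npv = net_saving * years - capex          # undiscounted NPV over the lifetime
roi = npv / capex

print(f"payback {payback:.1f} y, NPV {npv:,.0f} BWP, ROI {roi:.1%}")
# -> roughly 6.7 y, 1,568,006 BWP and 199%, close to the reported figures
```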

Keywords: vapor compression refrigeration, solar cooling, renewable energy, herbarium

Procedia PDF Downloads 123
46 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

The comprehension of features in the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from source to observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for magnetic fields in the universe is very large, because the field strength and especially the orientation carry large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle-tracking method conserve the particle's total energy, so that energy changes result only from interactions with background photons. Hence, special attention must be paid to computational effects. Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle-tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities; the smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. Energy losses are treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with the typical energy of interest, the integration method's performance in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification relative to rectilinear propagation as a function of the traveled distance and of the particle's magnetic rigidity (without energy losses) or energy (with energy losses), to study the influence of the particle species on these calculations. The results clearly show when a full three-dimensional simulation is necessary.
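
The energy-conservation requirement stated above is usually met with a rotation-only (Boris-style) momentum update, since a pure magnetic field can only rotate the momentum; the sketch below is a generic illustration of that property, not the authors' code, and the step size is chosen arbitrarily.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def rotate_momentum(p, q, B, gamma_m, dt):
    """Boris-style rotation of momentum p about B; preserves |p| exactly."""
    t = q * B * dt / (2.0 * gamma_m)      # half-angle rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p + np.cross(p, t)
    return p + np.cross(p_prime, s)

# Example: a 1 EeV proton in a uniform 1 nG field
q = 1.602176634e-19                 # elementary charge, C
E = 1e18 * q                        # 1 EeV in joules
p = np.array([E / C, 0.0, 0.0])     # ultra-relativistic: E ~ pc
gamma_m = E / C**2                  # gamma * m, from E = gamma m c^2
B = np.array([0.0, 0.0, 1e-13])     # 1 nG in tesla

for _ in range(1000):
    p = rotate_momentum(p, q, B, gamma_m, dt=3.15e7)   # ~1-year steps
print("relative |p| drift:", abs(np.linalg.norm(p) * C / E - 1.0))
```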

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 126
45 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy-metal nitrate complexes on the other. The decomposition of red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions involving TBP and all the other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not yet of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate these thermodynamic properties with quantum chemical methods. Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP) and di-(HDBP) butyl phosphates and TBP in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets on all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are thereby predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of the project, we investigated the fundamental properties of the three species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives, in both the gas and the liquid phase. We discuss the structures and thermodynamic properties of all these species.
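
Turning a computed reaction enthalpy into a standard formation enthalpy is a Hess-cycle bookkeeping step; the sketch below illustrates it for the TBP hydrolysis reaction, with placeholder values where marked (only the H₂O and 1-butanol formation enthalpies are established experimental values).

```python
# Suppose the hydrolysis reaction TBP + H2O -> HDBP + BuOH has been computed.
dH_rxn = -40.0  # kJ/mol, computed reaction enthalpy (placeholder)

dHf = {  # standard formation enthalpies, kJ/mol
    "TBP": -1230.0,   # placeholder, not a measured value
    "H2O": -285.8,    # established experimental value (liquid water)
    "BuOH": -327.3,   # established experimental value (liquid 1-butanol)
}

# dH_rxn = dHf(HDBP) + dHf(BuOH) - dHf(TBP) - dHf(H2O)  =>  solve for HDBP
dHf_HDBP = dH_rxn + dHf["TBP"] + dHf["H2O"] - dHf["BuOH"]
print(f"dHf(HDBP) ~ {dHf_HDBP:.1f} kJ/mol (method uncertainty ~15 kJ/mol)")
```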

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 187
44 Investigation of Pu-238 Heat Source Modifications to Increase Power Output through (α,N) Reaction-Induced Fission

Authors: Alex B. Cusick

Abstract:

The objective of this study is to improve upon current ²³⁸PuO₂ fuel technology for space and defense applications. Modern RTGs (radioisotope thermoelectric generators) utilize the heat generated by the radioactive decay of ²³⁸Pu to provide heat and electricity for long-term and remote missions. Application of RTG technology is limited by the scarcity and expense of producing the isotope, as well as by the power output, which is limited to only a few hundred watts. The scarcity and expense make the efficient use of ²³⁸Pu absolutely necessary. By utilizing the decay of ²³⁸Pu not only to produce heat directly but also to indirectly induce fission in the ²³⁹Pu already present within currently used fuel, it is possible to obtain large increases in temperature, allowing a more efficient conversion to electricity and a higher power-to-weight ratio. This concept can reduce the quantity of ²³⁸Pu necessary for these missions, potentially saving millions in investment while yielding higher power output. Current work on radioisotope power systems has focused on improving the efficiency of the thermoelectric components and on replacing systems that produce heat by natural decay with fission reactors. The technical feasibility of utilizing (α,n) reactions to induce fission within current radioisotopic fuels has not been investigated in any appreciable detail; our study aims to thoroughly investigate the performance of many such designs, develop those with the highest capabilities, and facilitate experimental testing of these designs. In order to determine the specific design parameters that maximize the power output and the efficient use of ²³⁸Pu for future RTG units, MCNP6 simulations have been used to characterize the effects of modifying fuel composition, geometry, and porosity, as well as of introducing neutron moderating, reflecting, and shielding materials into the system. Although this project is currently in the preliminary stages, the final deliverables will include sophisticated designs and simulation models that define all characteristics of multiple novel RTG fuels, detailed enough to allow immediate fabrication and testing. Preliminary work has consisted of developing a benchmark model to accurately represent the ²³⁸PuO₂ pellets currently in use by NASA; this model utilizes the alpha transport capabilities of MCNP6 and agrees well with experimental data. In addition, several models have been developed by varying specific parameters to investigate their effect on (α,n) and (n,fission) reaction rates. Current practice in fuel processing is to exchange out the small portion of naturally occurring ¹⁸O and ¹⁷O to limit (α,n) reactions and avoid unnecessary neutron production. However, we have shown that enriching the oxide in ¹⁸O introduces an (α,n) reaction rate sufficient to support significant fission rates. For example, subcritical fission rates above 10⁸ f/cm³-s are easily achievable in cylindrical ²³⁸PuO₂ fuel pellets with an ¹⁸O enrichment of 100%, given an increase in size and a ⁹Be clad. Many viable designs exist, and our intent is to discuss current results and future endeavors on this project.
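
The subcritical amplification underlying the quoted fission rates follows from standard source-multiplication bookkeeping; the sketch below illustrates it with an assumed (α,n) source strength and k_eff, not MCNP6 results from the study.

```python
S = 3.0e7     # (alpha,n) neutrons per cm^3 per second (assumed)
k_eff = 0.6   # subcritical multiplication factor (assumed)
nu = 2.9      # neutrons released per Pu-239 fission
E_FISSION_J = 200e6 * 1.602e-19   # ~200 MeV per fission, in joules

# Each source neutron spawns k + k^2 + ... = k/(1-k) fission-born neutrons
fission_neutrons = S * k_eff / (1.0 - k_eff)
fission_rate = fission_neutrons / nu   # fissions per cm^3 per second
print(f"fission rate ~ {fission_rate:.2e} f/cm^3-s, "
      f"extra heat ~ {fission_rate * E_FISSION_J * 1e3:.2f} mW/cm^3")
```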

Keywords: radioisotope thermoelectric generators (RTG), Pu-238, subcritical reactors, (alpha, n) reactions

Procedia PDF Downloads 170
43 Finite Element Modelling and Optimization of Post-Machining Distortion for Large Aerospace Monolithic Components

Authors: Bin Shi, Mouhab Meshreki, Grégoire Bazin, Helmi Attia

Abstract:

Large monolithic components are widely used in the aerospace industry in order to reduce airplane weight. Milling is an important operation in the manufacturing of monolithic parts: more than 90% of the material can be removed in milling to obtain the final shape. This results in low rigidity and post-machining distortion. Post-machining distortion is the deviation of the final shape from the original design after releasing the clamps. It is a major challenge in the machining of monolithic parts, causing billions in economic losses every year. Three sources are directly related to part distortion: initial residual stresses (RS) generated by previous manufacturing processes, machining-induced RS, and the thermal load generated during machining. A finite element model was developed to simulate a milling process and predict the post-machining distortion. In this study, a rolled aluminum plate, AA7175 with a thickness of 60 mm, was used for the raw block. The initial residual stress distribution in the block was measured using a layer-removal method. A stress-mapping technique was developed to implement the initial stress distribution into the part; it is demonstrated that this technique significantly accelerates the simulation. Machining-induced residual stresses on the machined surface were measured using an MTS3000 hole-drilling strain-gauge system. The measured RS was applied to the machined surface of a plate to predict the distortion, and the predicted distortion was compared with experimental results. It is found that the effect of machining-induced residual stress on the distortion of a thick plate is very limited; the distortion can be ignored if the wall thickness is larger than a certain value. The RS generated by the thermal load during machining is another important factor causing part distortion, on which very little research has been reported in the literature. A coupled thermo-mechanical FE model was therefore developed to evaluate the thermal effect on the plastic deformation of a plate. A moving heat source with a feed rate was used to simulate the dynamic cutting heat in a milling process; when the heat source passed over the part surface, a small layer was removed to simulate the cutting operation. The results show that, for different feed rates and plate thicknesses, plastic deformation/distortion occurs only if the temperature exceeds a critical level. It was found that the initial residual stress is the major contributor to part distortion, that the machining-induced stress has limited influence on the distortion of thin-wall structures when the wall thickness is larger than a certain value, and that the thermal load can also generate part distortion when the cutting temperature is above a critical level. The developed numerical model was employed to predict the distortion of a frame part with complex structures; the predictions were compared with experimental measurements, showing good agreement. Through optimization of the position of the part inside the raw plate using the developed numerical models, the part distortion can be reduced by 50%.
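
Before an initial residual stress profile can be mapped onto a mesh it should be self-equilibrated (zero net force and moment through the thickness); the sketch below performs that standard check on a synthetic profile and samples it at element centroids, as one plausible reading of the stress-mapping step (the study's actual implementation is not given).

```python
import numpy as np

t = 0.060                                  # plate thickness, m (from abstract)
z = np.linspace(-t / 2, t / 2, 61)         # through-thickness coordinate
sigma = 40e6 * np.cos(2 * np.pi * z / t)   # synthetic RS profile, Pa (placeholder)

force = np.trapz(sigma, z)        # should be ~0 N/m for equilibrium
moment = np.trapz(sigma * z, z)   # should be ~0 N
print(f"net force {force:.2e} N/m, net moment {moment:.2e} N")

# Mapping step: sample the profile at element centroids (60 layers here)
elem_z = np.linspace(-t / 2 + t / 120, t / 2 - t / 120, 60)
elem_sigma = np.interp(elem_z, z, sigma)   # initial stress per element layer
```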

Keywords: modelling, monolithic parts, optimization, post-machining distortion, residual stresses

Procedia PDF Downloads 51
42 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a recent and promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter, with a sensitive scintillating volume of 2×10⁻⁶ mm³, was developed. A prototype of the dose verification system is introduced, based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA™ water tank phantoms, following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters: PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: The study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides a fully linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 x 5 cm² down to 0.5 x 0.5 cm². Excellent repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has linear behavior with dose rate (R² = 1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Small 0.5 x 0.5 cm² field profiles were characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and performs efficiently across several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can be put forward for validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
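
The reported dose linearity implies a simple linear calibration from count rate to dose; the sketch below illustrates such a fit on synthetic counts (all numbers are placeholders, not measured data).

```python
import numpy as np

dose_cgy = np.array([4, 50, 100, 200, 400, 800], dtype=float)
counts = 1250.0 * dose_cgy + np.random.normal(0.0, 300.0, dose_cgy.size)

slope, intercept = np.polyfit(dose_cgy, counts, 1)   # counts per cGy
r2 = np.corrcoef(dose_cgy, counts)[0, 1] ** 2
print(f"calibration: {slope:.1f} counts/cGy, R^2 = {r2:.5f}")

unknown = 93_000.0   # a measured count rate to convert back to dose
print(f"estimated dose ~ {(unknown - intercept) / slope:.1f} cGy")
```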

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 181
41 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and ragging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed filaments < 5 mm long at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred both for a cotton wool standard and for samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin – lint from laundering operations forming zero rags – and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux in classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS both with and without a commensurate increase in the COD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging does not relate to fouling.
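The sludging quantification described above, photographing the channel and computing the clogged-to-unclogged ratio, can be sketched as a simple image-thresholding step; the threshold value and the synthetic image below are illustrative assumptions, not the study's actual processing pipeline.

```python
import numpy as np

def sludging_ratio(channel_img: np.ndarray, threshold: int = 100) -> float:
    """Fraction of the membrane channel covered by sludge solids.

    channel_img: 2-D grayscale image of the test-cell channel
    (dark pixels = sludge). The threshold is illustrative.
    """
    clogged = channel_img < threshold        # boolean mask of dark (clogged) pixels
    return clogged.sum() / channel_img.size

# Hypothetical 8-bit image: a channel that is roughly one-third clogged.
rng = np.random.default_rng(0)
img = rng.integers(150, 255, size=(480, 640)).astype(np.uint8)
img[:160, :] = rng.integers(0, 80, size=(160, 640))   # dark clogged band
print(f"clogged fraction ~ {sludging_ratio(img):.2f}")
```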

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 177
40 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India is a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools/methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue in the construction of national-level or basin-level flash flood warning systems with high resolution for local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for flood forecasting in near-real time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.
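The reported runtime reduction implies roughly 89% strong-scaling efficiency (a 2.67× speedup on 3× the nodes); the short sketch below works this out from the figures in the abstract.

```python
def parallel_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Strong-scaling efficiency: achieved speedup over ideal speedup."""
    speedup = t_base / t_scaled
    ideal = n_scaled / n_base
    return speedup / ideal

# Figures reported in the abstract: 8 h on 45 nodes -> 3 h on 135 nodes.
eff = parallel_efficiency(8.0, 45, 3.0, 135)
print(f"speedup = {8.0 / 3.0:.2f}x, strong-scaling efficiency ~ {eff:.0%}")  # ~89%
```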

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 88
39 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating the significant movement of the knee joint. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability of normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up (macro level) in the hierarchies of the soft tissue may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model was then implemented into a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The ‘best’ prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation of the patellofemoral ligaments was located at the femoral insertions, while damage to the patellar tendon happened in the middle of the structure. These predicted findings showed a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damage process or repair attempt compared with certain published studies.
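A minimal sketch of the rule-of-mixtures idea described above, with the fibril yield stress tied to cross-link density; all constants are illustrative assumptions, not the calibrated parameters of the study.

```python
import numpy as np

def fibril_yield_stress(beta, sigma0=25.0, k=4.0):
    """Illustrative fibril yield stress (MPa) as a function of tropocollagen
    cross-link density `beta`. sigma0 and k are hypothetical fit constants."""
    return sigma0 + k * beta

def tissue_stress(strain, phi_fibril, E_fibril, E_matrix, beta):
    """Rule-of-mixtures axial stress with elastic-perfectly-plastic fibrils:
    the fibril stress is capped at its cross-link-dependent yield value."""
    sigma_f = np.minimum(E_fibril * strain, fibril_yield_stress(beta))
    sigma_m = E_matrix * strain
    return phi_fibril * sigma_f + (1 - phi_fibril) * sigma_m

strain = np.linspace(0, 0.08, 5)
print(tissue_stress(strain, phi_fibril=0.7, E_fibril=900.0, E_matrix=20.0, beta=5.5))
```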

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 127
38 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may cause great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, but it can be difficult due to the complexity of their fracture networks, leading to geological uncertainties that matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves hybrid fracture modeling, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the suitability of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method treats fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, the method has a significant disadvantage, since the number of fractures in the field can be very high: due to limitations in the amount of main memory, it is very difficult to represent all these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a DFN model can be constructed to represent fracture characteristics for the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the Eastern Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach treats fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the Eastern Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs through the use of enhanced and improved oil recovery methods.
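A minimal sketch of the stochastic fracture-generation step at the core of a DFN workflow; the uniform and log-normal distributions below are illustrative assumptions, whereas a real model conditions them on image logs, core data, and fracture-density logs as described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_dfn(n_fractures, domain=1000.0, mean_len=60.0):
    """Minimal 2-D DFN sketch: fracture centres uniform in the domain,
    orientations uniform in [0, 180) degrees, lengths log-normal."""
    cx = rng.uniform(0, domain, n_fractures)
    cy = rng.uniform(0, domain, n_fractures)
    theta = np.deg2rad(rng.uniform(0, 180, n_fractures))
    length = rng.lognormal(mean=np.log(mean_len), sigma=0.5, size=n_fractures)
    dx, dy = 0.5 * length * np.cos(theta), 0.5 * length * np.sin(theta)
    # Each row holds (x1, y1, x2, y2): the endpoints of one fracture segment.
    return np.column_stack([cx - dx, cy - dy, cx + dx, cy + dy])

segments = make_dfn(500)
print(segments.shape)  # (500, 4)
```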

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 75
37 Wind Direction and Its Linkage with Vibrio cholerae Dissemination

Authors: Shlomit Paz, Meir Broza

Abstract:

Cholera is an acute intestinal infection caused by ingestion of food or water contaminated with the bacterium Vibrio cholerae. It has a short incubation period and produces an enterotoxin that causes copious, painless, watery diarrhoea that can quickly lead to severe dehydration and death if treatment is not promptly given. In an epidemic, the source of the contamination is usually the feces of an infected person. The disease can spread rapidly in areas with poor treatment of sewage and drinking water. Cholera remains a global threat and is one of the key indicators of social development. An estimated 3-5 million cases and over 100,000 deaths occur each year around the world. The relevance of climatic events as causative factors for cholera epidemics is well known. However, the examination of the involvement of winds in intra-continental disease distribution is new. The study explores the hypothesis that the spreading of cholera epidemics may be related to the dominant wind direction over land by presenting the influence of the wind direction on windborne dissemination by flying insects, which may serve as vectors. Chironomids ("non-biting midges") exist in the majority of freshwater aquatic habitats, especially in estuarine and organic-rich water bodies typical of Vibrio cholerae. Chironomid adults emerge into the air for mating and dispersion. They are highly mobile, huge in number and found frequently in the air at various elevations. The huge number of chironomid egg masses attached to hard substrate on the water surface serves as a reservoir for the free-living Vibrio bacteria. Both males and females, while emerging from the water, may carry the cholera bacteria. In an experimental simulation, it was demonstrated that cholera-bearing adult midges are carried by the wind and transmit the bacteria from one body of water to another. In our previous study, the geographic diffusion of three cholera outbreaks was examined through its linkage with the wind direction: a) the progress of Vibrio cholerae O1 biotype El Tor in Africa during 1970–1971 and b) again in 2005–2006; and c) the rapid spread of Vibrio cholerae O139 over India during 1992–1993. Using data and maps of cholera dissemination (WHO database) and mean monthly sea-level pressure (SLP) and geopotential data (NOAA NCEP-NCAR database), analysis of air pressure data at sea level and at several altitudes over Africa, India and Bangladesh shows a correspondence between the dominant wind direction and the intra-continental spread of cholera. The results support the hypothesis that aeroplankton (the tiny life forms that float in the air and that may be caught and carried upward by the wind, landing far from their origin) carry the cholera bacteria from one body of water to an adjacent one. In addition to these findings, the current follow-up study will present new results regarding the possible involvement of winds in the spreading of cholera in recent outbreaks (2010-2013). The findings may improve the understanding of how climatic factors are involved in the rapid distribution of new strains throughout a vast continental area. Awareness of the aerial transfer of Vibrio cholerae may assist health authorities by improving the prediction of the disease’s geographic dissemination.
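Reducing gridded wind fields to a dominant direction, as in the analysis described above, typically uses the vector mean of the eastward (u) and northward (v) components; a minimal sketch with hypothetical data follows.

```python
import numpy as np

def mean_wind_direction(u, v):
    """Vector-mean meteorological wind direction (degrees the wind blows FROM),
    the standard reduction when condensing gridded u/v fields (e.g. NCEP-NCAR
    reanalysis) into a dominant monthly direction."""
    u_bar, v_bar = np.mean(u), np.mean(v)
    return (np.degrees(np.arctan2(u_bar, v_bar)) + 180.0) % 360.0

# Hypothetical month of 6-hourly winds, mostly from the south-west (~225 deg).
rng = np.random.default_rng(1)
u = 3.0 + rng.normal(0, 1, 120)   # eastward component, m/s
v = 3.0 + rng.normal(0, 1, 120)   # northward component, m/s
print(f"dominant direction ~ {mean_wind_direction(u, v):.0f} deg")
```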

Keywords: cholera, Vibrio cholerae, wind direction, Vibrio cholerae dissemination

Procedia PDF Downloads 366
36 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education

Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant

Abstract:

In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focusses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology). Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focussing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and the end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on the given research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities for students can be created to support the learning development of students, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.

Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs

Procedia PDF Downloads 138
35 Crack Size and Moisture Issues in Thermally Modified vs. Native Norway Spruce Window Frames: A Hygrothermal Simulation Study

Authors: Gregor Vidmar, Rožle Repič, Boštjan Lesar, Miha Humar

Abstract:

The study investigates the impact of cracks in surface coatings on moisture content (MC) and related fungal growth in window frames made of thermally modified (TM) and native Norway spruce, using hygrothermal simulations for Ljubljana, Slovenia. Comprehensive validation against field test data confirmed the numerical model’s predictions, demonstrating similar trends in MC changes over the four years investigated. Various established mould growth models (isopleth, VTT, bio-hygrothermal) did not appropriately reflect the differences between the spruce types because they do not consider material moisture content; the simulated moisture behaviour nevertheless leads to the main conclusion that TM spruce is more resistant to moisture-related issues. Wood’s MC influences fungal decomposition, which typically occurs above 25%-30% MC, with some fungi growing at lower MC under conducive conditions. Surface coatings cannot wholly prevent water penetration, which becomes significant when the coating is damaged. This study investigates the detrimental effects of surface coating cracks on wood moisture absorption, comparing TM spruce and native spruce window frames. Simulations were conducted for undamaged and damaged coatings (with cracks from 1 mm to 9 mm wide) on window profiles, as well as for uncoated profiles. Sorption curves were also measured up to 95% relative humidity. MC was measured in frames exposed to actual climatic conditions and compared to simulated data for model validation. The study utilizes a simplified model of the bottom frame part due to convergence issues with simulations of the whole frame. TM spruce showed about 4% lower MC compared to native spruce. Simulations showed that a 3 mm wide crack in native spruce coatings poses significant moisture risks for the north orientation, while a 9 mm wide crack in TM spruce coatings remains acceptable; moreover, even uncoated TM spruce could be acceptable. In addition, it seems that large enough cracks may cause even worse moisture dynamics than uncoated native spruce profiles. The sorption curve turns out to be by far the most influential parameter, followed by density. Existing mould growth models need to be upgraded to reflect wood material differences accurately. Due to the lower sorption curve of TM spruce, higher RH values are in reality obtained under the same boundary conditions, which implies a more critical situation according to these mould growth models; still, this does not reflect the difference in materials, especially under external exposure conditions. Even if different substrate categories in the isopleth and bio-hygrothermal models, or different material sensitivity classes for standard and TM wood, are used, the expected trends do not necessarily change; thus, models with MC as an inherent part should be introduced. Orientation plays a crucial role in moisture dynamics. Results show that, for similar moisture dynamics, the crack in Norway spruce could be about 2 mm wider on the south than on the north side. In contrast, for TM spruce, orientation is not as important compared to other material properties. The study confirms the enhanced suitability of TM spruce for window frames in terms of moisture resistance and crack tolerance in surface coatings.
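The role of the sorption curve flagged above can be illustrated with a small sketch interpolating equilibrium MC from RH for the two materials; the isotherm points are hypothetical, chosen only to reflect the lower sorption of TM spruce and the 25%-30% decay threshold mentioned in the abstract.

```python
import numpy as np

# Hypothetical sorption isotherms (RH % -> equilibrium MC %); not measured data.
rh_grid = np.array([0, 20, 40, 60, 80, 90, 95])
mc_native = np.array([0, 4, 7, 10, 16, 21, 26])
mc_tm = np.array([0, 2, 4, 6, 11, 15, 19])

def moisture_content(rh, rh_pts, mc_pts):
    """Linear interpolation on the sorption curve."""
    return np.interp(rh, rh_pts, mc_pts)

for rh in (80, 95):
    mc_n = moisture_content(rh, rh_grid, mc_native)
    mc_t = moisture_content(rh, rh_grid, mc_tm)
    risky = "above" if mc_n >= 25 else "below"
    print(f"RH {rh}%: native {mc_n:.0f}% MC ({risky} 25% decay threshold), "
          f"TM {mc_t:.0f}% MC")
```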

Keywords: hygrothermal simulations, mould growth, surface coating, thermally modified wood, window frame

Procedia PDF Downloads 30
34 Urban Dynamics Modelling of Mixed Land Use for Sustainable Urban Development in Indian Context

Authors: Rewati Raman, Uttam K. Roy

Abstract:

One of the main adversaries of city planning in present times is the ever-expanding problem of urbanization and the antagonistic issues accompanying it. The prevalent challenges of urbanization, such as population growth, urban sprawl, poverty, inequality, pollution and congestion, call for reforms in the urban fabric as well as in planning theory and practice. Among the various paradigms of city planning, land use planning has been the major instrument for spatial planning of cities and regions in India. Zoning-regulation-based land use planning, in the form of land use and development control plans (LUDCP) and development control regulations (DCR), has been the mainstream guiding principle in land use planning for decades. In spite of the many advantages of such zoning-based regulations, over time they have been critiqued by scholars for their limitations: isolation and lack of vitality, inconvenience for business in terms of proximity to residences and low operating cost, an unsuitable environment for small investments, longer travel distances to facilities and amenities and thereby higher expenditure, safety issues, etc. Mixed land use has been advocated by researchers as a tool to avoid such limitations in city planning. In addition, mixed land use can offer many advantages such as housing variety and density, the creation of an economic blend of compatible land uses, compact development, stronger neighborhood character, walkability, and the generation of jobs. Alternatively, mixed land use beyond a suitable balance of uses can also bring disadvantages such as traffic congestion, encroachments, very high-density housing leading to slum-like conditions, parking spill-over, non-residential uses operating on residential premises while paying less tax, chaos hampering residential privacy, and pressure on existing infrastructure facilities. This research aims at studying and outlining, through modeling tools, the various challenges and potentials of mixed land use zoning as a competent instrument for city planning in light of the present urban scenario. The methodology adopted in this paper involves the study of a mixed land use neighborhood in India, identification of indicators and parameters related to its extent and spatial pattern, and the subsequent use of system dynamics as a modeling tool for simulation. The findings from this analysis helped in identifying the various advantages and challenges associated with the dynamic nature of a mixed-use urban settlement. The results also confirmed the hypothesis that mixed-use neighborhoods are catalysts for employment generation and socioeconomic gains, while improving vibrancy, health, safety, and security. It is also seen that certain challenges related to chaos, lack of privacy and pollution prevail in mixed-use neighborhoods, which can be mitigated by varying the percentage of mixing as per need, ensuring compatibility of adjoining uses, institutional interventions in the form of policies, neighborhood micro-climatic interventions, etc. Therefore, this paper gives a consolidated and holistic framework, and a quantified outcome, pertaining to the extent and spatial pattern of mixed land use that should be adopted to ensure sustainable urban planning.
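A toy system-dynamics sketch of the reinforcing loop (jobs and vibrancy grow with mixing) and the balancing loop (congestion and privacy loss grow faster at high mixing) discussed above; the coefficients are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

def simulate_mixed_use(mix_fraction, years=20, dt=0.25):
    """One stock (neighbourhood attractiveness) driven by a reinforcing
    vibrancy loop and a balancing congestion loop; Euler integration."""
    attractiveness = 1.0
    for _ in np.arange(0, years, dt):
        vibrancy = 0.08 * mix_fraction * attractiveness           # reinforcing
        congestion = 0.15 * mix_fraction ** 2 * attractiveness    # balancing
        attractiveness += (vibrancy - congestion) * dt
    return attractiveness

# Moderate mixing grows attractiveness; excessive mixing erodes it.
for mix in (0.2, 0.4, 0.6, 0.8):
    print(f"mix {mix:.0%}: attractiveness after 20 yr = {simulate_mixed_use(mix):.2f}")
```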

Keywords: mixed land use, sustainable development, system dynamics analysis, urban dynamics modelling

Procedia PDF Downloads 175
33 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors based Look at Donning and Doffing Procedures

Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed

Abstract:

Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of complaints surrounding safety concerns due to Personal Protective Equipment (PPE) shortages, including concerns around inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. Deviations from PPE donning and doffing protocols commonly result in self-contamination but have not been thoroughly addressed. No evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to HCWs due to the reuse of PPE among Emergency Department personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video recorded; video recordings were reviewed and coded independently by at least 2 of the 3 trained coders for safety behaviors and riskiness of actions. A third coder was brought in when agreement between the 2 coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed to analyze the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement amongst content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select the aspects of health care that both contribute to and mitigate the risks associated with PPE reuse. Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and re-using PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. 100% of participants (n = 28) deviated from the CDC-recommended order, and most participants (92.85%, n = 26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n = 26) and failure to wash hands after a contamination event occurred (39.28%, n = 11). Conclusions: There is wide variation, and there are regular errors, in how HCWs don, doff, and reuse PPE, leading to self-contamination. Some errors were deemed “recoverable”, such as washing hands after touching a patient-facing surface to remove the contaminant. Other errors, such as using a contaminated mask and accidentally spreading contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety, and a complete approach to mitigating the underlying risks, including visualization with risk management tools, may aid future PPE design and workflow and space solutions.
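Raw percent agreement, such as the 81.9% reported above, is often complemented by a chance-corrected statistic; a minimal sketch of Cohen's kappa over hypothetical safe/risky codes follows.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected inter-coder agreement for two coders labelling
    the same sequence of observed behaviours."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    pa, pb = Counter(coder_a), Counter(coder_b)
    expected = sum(pa[c] * pb[c] for c in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical safe/risky labels for ten observed donning/doffing behaviours.
a = ["safe", "risky", "safe", "safe", "risky", "safe", "risky", "safe", "safe", "safe"]
b = ["safe", "risky", "safe", "risky", "risky", "safe", "risky", "safe", "safe", "safe"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 90% raw agreement -> kappa ~ 0.78
```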

Keywords: bowtie analysis, health care, PPE reuse, risk management

Procedia PDF Downloads 89
32 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study

Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi

Abstract:

Background: The monitoring of physiological activity related to mental workload (MW) in pilots will be useful to improve aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It is particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognition and emotion overloads are not necessarily cumulative. Methodology: Eight healthy volunteers in possession of the Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes’ duration in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of both a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli to be responded to according to specific criteria. Regarding the emotional condition, the two dual-tasks were designed to assure analogous difficulty in terms of solicited cognitive demands. The former was performed by the pilot alone, i.e., the low arousal (LA) condition. In contrast, the latter generated high arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for LC compared to HC (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed a smaller HR for HA compared to LA only under LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands, and it depends on the emotional state of pilots. According to the behavioral data, the experimental setup satisfactorily generated different emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots’ mental workload.
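A minimal sketch of the kind of reduction described above, deriving mean HR per 4-minute segment from a series of R-R intervals; the interval series is simulated, not the study's data.

```python
import numpy as np

def segment_heart_rate(rr_ms, segment_s=240):
    """Mean HR (bpm) per fixed-length segment from R-R intervals (ms),
    mirroring the 4-minute windows used in the abstract."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    n_seg = int(t[-1] // segment_s)
    rates = []
    for k in range(n_seg):
        in_seg = (t >= k * segment_s) & (t < (k + 1) * segment_s)
        rates.append(60000.0 / np.mean(rr_ms[in_seg]))   # bpm from mean RR
    return rates

# Hypothetical 30 minutes of RR intervals around 750 ms (~80 bpm).
rng = np.random.default_rng(7)
rr = rng.normal(750, 40, size=2400)
print([f"{hr:.1f}" for hr in segment_heart_rate(rr)])
```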

Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload

Procedia PDF Downloads 273
31 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) coming from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge about how biogas plants can be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used at different loading rates. Life cycle assessment (LCA) is a technique for analyzing the stages from the creation of a product to its disposal from an environmental point of view, and it is highly recommended as a decision-making tool. In order to derive suitable strategies, a flexible energy generation provided by biogas plants, a secure production process and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass and cattle manure, while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model, combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and existing literature. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This allows for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
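Although the study implements the full AMOCO model in Matlab, the flavour of a two-step digestion simulation (acidogenesis feeding methanogenesis) can be sketched in a few lines of Python; all kinetic and yield constants below are illustrative placeholders, not the study's calibrated values.

```python
def simulate_digester(days=60, dt=0.01, d=0.1, s1_in=10.0, s2_in=50.0):
    """Very reduced two-step chemostat sketch in the spirit of AMOCO:
    acidogens (x1, Monod kinetics) convert organic substrate s1 into
    VFA s2, which methanogens (x2, Haldane kinetics) turn into methane.
    Explicit Euler integration; d is the dilution rate (1/day)."""
    mu1m, k1 = 1.2, 7.1              # acidogenesis (illustrative)
    mu2m, k2, ki = 0.74, 9.3, 16.0   # methanogenesis (illustrative)
    x1, x2, s1, s2 = 0.5, 0.5, 5.0, 5.0
    for _ in range(int(days / dt)):
        mu1 = mu1m * s1 / (k1 + s1)
        mu2 = mu2m * s2 / (k2 + s2 + s2 ** 2 / ki)
        x1 += dt * (mu1 - d) * x1
        x2 += dt * (mu2 - d) * x2
        s1 += dt * (d * (s1_in - s1) - 10.0 * mu1 * x1)
        s2 += dt * (d * (s2_in - s2) + 28.0 * mu1 * x1 - 54.0 * mu2 * x2)
    q_ch4 = 45.0 * mu2 * x2          # methane flow, arbitrary yield factor
    return x1, x2, s1, s2, q_ch4

print(simulate_digester())
```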

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 187
30 Bio-Inspired Information Complexity Management: From Ant Colony to Construction Firm

Authors: Hamza Saeed, Khurram Iqbal Ahmad Khan

Abstract:

Effective information management is crucial for any construction project and its success. The primary areas of information generation are the construction site and the design office. Different types of information are required at different stages of construction, involving various stakeholders and creating complexity. There is a need for effective management of information flows to reduce the uncertainty that creates this complexity. Nature provides a unique perspective in terms of dealing with complexity, in particular information complexity. System dynamics methodology provides tools and techniques to address complexity, involving modeling and simulation. Nature has been dealing with complex systems since its creation 4.5 billion years ago. It has perfected its systems through evolution, resilience to sudden changes, and the extinction of unadaptable species that are no longer fit for the environment. Nature has been accommodating changing factors and handling complexity forever. Humans have started to look at their natural counterparts for inspiration and solutions to their problems. This brings forth the possibility of using a biomimetic approach to improve the management practices used in the construction sector. Ants inhabit diverse habitats: Cataglyphis and Pogonomyrmex live in deserts, leafcutter ants reside in rainforests, and pharaoh ants are native to urban developments of tropical areas. Detailed studies have been done on fifty species out of the fourteen thousand discovered, providing the opportunity to study the interactions that generate collective behavior in diverse environments. Animals evolve to better adapt to their environment. The collective behavior of ants emerges from feedback through interactions among individuals, based on a combination of three basic factors: the patchiness of resources in time and space; operating cost; and environmental stability together with the threat of rupture. If resources appear in patches through time and space, the response is accelerating and non-linear; if resources are scattered, the response follows a linear pattern. If the acquisition of energy through food is faster than the energy spent to get it, the default is to continue with an activity unless it is halted for some reason; if the energy spent is higher than the energy gained, the default changes to staying put unless activated. Finally, if the environment is stable and the threat of rupture is low, the activation and amplification rate is slow but steady; otherwise, it is fast and sporadic. To further study these effects and to eliminate environmental bias, the behavior of four different ant species was studied, namely red harvester ants (Pogonomyrmex barbatus), Argentine ants (Linepithema humile), turtle ants (Cephalotes goniodontus), and leafcutter ants (genus Atta). This study aims to improve the information system in the construction sector by providing a guideline inspired by nature with a systems-thinking approach, using system dynamics as a tool. The identified factors and their interdependencies were analyzed in the form of a causal loop diagram (CLD), and construction industry professionals were interviewed based on the developed CLD, whose responses validated it. These factors and interdependencies in the natural system correspond with those in man-made systems, providing a guideline for the effective use and flow of information.
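The three interaction rules summarised above can be encoded as a toy response function, purely to make the accelerating-versus-linear distinction concrete; every coefficient is an illustrative assumption, not an empirical value.

```python
def foraging_response(patchy, gain_exceeds_cost, stable_env, low_rupture_risk,
                      stimulus):
    """Toy encoding of the three rules: patchy resources give an accelerating
    (non-linear) response, scattered ones a linear response; a favourable
    energy balance keeps activity on by default; stable, low-threat
    environments activate slowly but steadily."""
    response = stimulus ** 2 if patchy else stimulus            # recruitment shape
    rate = 0.2 if (stable_env and low_rupture_risk) else 1.0    # slow/steady vs fast
    damping = 1.0 if gain_exceeds_cost else 0.1                 # continue vs stay put
    return rate * damping * response

for patchy in (False, True):
    print("patchy" if patchy else "scattered",
          [round(foraging_response(patchy, True, True, True, s), 2)
           for s in (0.2, 0.5, 1.0)])
```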

Keywords: biomimetics, complex systems, construction management, information management, system dynamics

Procedia PDF Downloads 136
29 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal

Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi

Abstract:

The aim of achieving a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly those emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's building stock predominantly comprises old buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards the Zero Emission Building (ZEB) objective. The building has a mixed function, with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. An energy audit and energy simulations are the first step in the analysis, assessing the energy performance of the current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving the mechanical systems, the automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards the energy-efficient refurbishment of historical buildings that may have limited potential for renovation of their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.
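The zero-emission balance underlying the scenario comparison can be illustrated with a one-line operational CO2 account; the demand, production, and grid-emission-factor numbers below are hypothetical, not the pilot building's values.

```python
def annual_co2_balance(demand_kwh, pv_kwh, grid_factor=0.26):
    """Hedged sketch of the zero-emission balance: operational CO2 from grid
    electricity after subtracting on-site renewable production. The grid
    emission factor (kgCO2/kWh) is an illustrative placeholder."""
    net_import = max(demand_kwh - pv_kwh, 0.0)
    return net_import * grid_factor

baseline = annual_co2_balance(demand_kwh=180_000, pv_kwh=0)
deep = annual_co2_balance(demand_kwh=95_000, pv_kwh=60_000)
print(f"baseline ~ {baseline / 1000:.1f} tCO2/yr, "
      f"deep renovation ~ {deep / 1000:.1f} tCO2/yr")
```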

Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation

Procedia PDF Downloads 65
28 Skin-to-Skin Contact Simulation: Improving Health Outcomes for Medically Fragile Newborns in the Neonatal Intensive Care Unit

Authors: Gabriella Zarlenga, Martha L. Hall

Abstract:

Introduction: Premature infants are at risk for neurodevelopmental deficits and hospital readmissions, which can increase the financial burden on the health care system and families. Kangaroo care (skin-to-skin contact) is a practice that can improve preterm infant health outcomes. Preterm infants can acquire adequate body temperature, heartbeat, and breathing regulation by lying directly on the mother’s abdomen and between her breasts. Due to some infants’ conditions, kangaroo care is not a feasible intervention. The purpose of this proof-of-concept research project is to create a device which simulates skin-to-skin contact for preterm infants not eligible for kangaroo care, with the aim of promoting the baby’s health outcomes, reducing the incidence of serious neonatal and early childhood illnesses, and/or improving cognitive, social and emotional aspects of development. Methods: The study design is a proof of concept based on a three-phase approach: (1) an observational study and data analysis of the standard of care for two groups of preterm infants, (2) design and concept development of a novel device for preterm infants not currently eligible for standard kangaroo care, and (3) prototyping, laboratory testing, and evaluation of the novel device in comparison to current assessment parameters of kangaroo care. A single-center study will be conducted in an area hospital offering Level III neonatal intensive care. Eligible participants include newborns born premature (28-30 weeks of age) admitted to the NICU. The study design includes two groups: a control group receiving standard kangaroo care and an experimental group not eligible for kangaroo care. Based on behavioral analysis of observational video data collected in the NICU, the device will be created to simulate the mother’s body using electrical components in a thermoplastic polymer housing covered in silicone. It will be designed with a microprocessor that controls the simulated respiration, heartbeat, and body temperature of the 'simulated caregiver' by using a pneumatic lung, vibration sensors (heartbeat), pressure sensors (weight/position), and resistive film to measure temperature. A slight contour of the simulator surface may be integrated to help position the infant correctly. Control and monitoring of the skin-to-skin contact simulator would be performed locally via an integrated touchscreen. The unit would have built-in Wi-Fi connectivity as well as an optional Bluetooth connection through which the respiration and heart rate could be synced with a parent or caregiver. A camera would be integrated, allowing a video stream of the infant in the simulator to be sent to a monitoring location. Findings: Expected outcomes are stabilization of respiratory and cardiac rates, thermoregulation of those infants not eligible for skin-to-skin contact with their mothers, and real-time Bluetooth syncing with the mother’s vital signs to mimic the experience in the womb. The results of this study will benefit clinical practice by creating a new standard of care for premature neonates in the NICU who are deprived of skin-to-skin contact due to various health restrictions.
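A hedged sketch of the control loop implied by the description above, with the mother's vitals streamed in and a simple on-off heater; all class names, interfaces, and set-points are placeholders for the proposed hardware, not the team's implementation.

```python
class SimulatedCaregiver:
    """Target heart and respiration rates (optionally streamed from the mother
    over Bluetooth) drive the vibration and pneumatic actuators; a resistive-
    film temperature reading feeds bang-bang heater control around 36.5 C."""

    def __init__(self, hr_bpm=80, resp_bpm=16, target_temp_c=36.5):
        self.hr_bpm, self.resp_bpm, self.target_temp_c = hr_bpm, resp_bpm, target_temp_c

    def sync_from_mother(self, hr_bpm, resp_bpm):
        # e.g. values pushed from a parent's wearable over Bluetooth
        self.hr_bpm, self.resp_bpm = hr_bpm, resp_bpm

    def step(self, skin_temp_c):
        heater_on = skin_temp_c < self.target_temp_c       # on-off thermal control
        return {
            "heartbeat_interval_s": 60.0 / self.hr_bpm,    # drives vibration motor
            "breath_interval_s": 60.0 / self.resp_bpm,     # drives pneumatic lung
            "heater_on": heater_on,
        }

sim = SimulatedCaregiver()
sim.sync_from_mother(hr_bpm=72, resp_bpm=14)
print(sim.step(skin_temp_c=36.1))
```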

Keywords: kangaroo care, wearable technology, pre-term infants, medical design

Procedia PDF Downloads 155