Search results for: gravitational water vortex power plant
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16685

665 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro

Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich

Abstract:

Tissue engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcome the known limitations of mechanical or biological prostheses. However, the mechanical stress in the high-pressure system of the human circulation is a severe challenge for the delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. Therefore, this study investigates whether long-term testing in full blood provides more meaningful results regarding the degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate-buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established using a test set-up with porcine full blood. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full blood setting. The polymer's degradation is quantified by SEM image analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water is more pronounced in the full blood group. Also, PCL preserved its mechanical integrity longer when degraded in full blood. Protein absorption increased during the degradation process. Red blood cells, platelets, and their aggregates adhered to the PCL. Presumably, the degradation led to a more hydrophilic polymeric surface which promoted protein adsorption and blood cell adhesion. Testing degradable implants in full blood allows for developing more reliable scaffold materials in the future. Material tests in small and large animal trials can thereby be focused on candidates that have proven to function well in an in-vivo-like setting.

Keywords: Electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue engineered aortic heart valve

Procedia PDF Downloads 150
664 Investigation of the Optical Properties of Carbon Dots Using Laser Scanning Confocal Microscopy and Time-Resolved Fluorescence Microscopy

Authors: M. S. Stepanova, V. V. Zakharov, P. D. Khavlyuk, I. D. Skurlov, A. Y. Dubovik, A. L. Rogach

Abstract:

Carbon dots are small carbon-based spherical nanoparticles, typically less than 10 nm in size, which can be modified by surface passivation and heteroatom doping. The light-absorbing ability of carbon dots has attracted significant attention for bioimaging and fluorescence sensing applications owing to advantages such as tunable fluorescence emission, photo- and thermostability, and low toxicity. In this study, carbon dots were synthesized by the solvothermal method from citric acid and ethylenediamine dissolved in water. The solution was heated for 5 hours at 200°C and then cooled down to room temperature. Carbon dot films were obtained by evaporation from a high-concentration aqueous solution. An increase of both luminescence intensity and light transmission was observed as a result of exposing part of the carbon dot film to a 405 nm laser, detected using a confocal laser scanning microscope (LSM 710, Zeiss). A blueshift of up to 35 nm of the luminescence spectrum is observed as the luminescence intensity increases more than twofold. The exact value of the shift depends on the duration of the laser exposure. This shift can be caused by the modification of surface groups at the carbon dots, which are responsible for long-wavelength luminescence. In addition, a shift of the absorption peak by 10 nm and a decrease in the optical density at a wavelength of 350 nm are detected; this band corresponds to the absorption of the surface groups. The sample was also studied with a time-resolved confocal fluorescence microscope (MicroTime 100, PicoQuant), which made it possible to obtain a time-resolved photoluminescence image and construct emission decays of the laser-exposed and non-exposed areas. A pulsed laser with a 5 MHz repetition rate was used as the photoluminescence excitation source. The photoluminescence decay was approximated by two exponentials. In the laser-exposed area, the amplitude of the first lifetime component (A1) is twice as large as before, with an increased τ1, while the amplitude of the second lifetime component (A2) decreases. These changes evidence a modification of the surface groups of the carbon dots. The detected effect can be used to create thermostable fluorescent marks, whose physical size is bounded by the diffraction limit of the optics used for exposure (~200-300 nm), to improve the optical properties of carbon dots, or in the field of optical encryption. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and financially supported by the Government of the Russian Federation, Grant 08-08.
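The abstract describes approximating the photoluminescence decay with two exponential components (amplitudes A1, A2 and lifetimes τ1, τ2). As a minimal illustration of this kind of fit, and not the authors' actual analysis pipeline, the following Python sketch fits a biexponential model to a synthetic decay trace; all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a1, tau1, a2, tau2):
    """Two-component exponential decay: I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Hypothetical decay trace (time in ns, counts); real data would come from the
# time-resolved microscope as a TCSPC-style histogram.
t = np.linspace(0, 50, 500)
rng = np.random.default_rng(0)
counts = biexponential(t, 800, 3.0, 200, 12.0) + rng.normal(0, 10, t.size)

# Fit with rough initial guesses for A1, tau1, A2, tau2.
popt, _ = curve_fit(biexponential, t, counts, p0=(500, 2.0, 100, 10.0))
a1, tau1, a2, tau2 = popt
print(f"A1={a1:.0f}, tau1={tau1:.2f} ns, A2={a2:.0f}, tau2={tau2:.2f} ns")
```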

Keywords: carbon dots, photoactivation, optical properties, photoluminescence and absorption spectra

Procedia PDF Downloads 165
663 Challenges of Carbon Trading Schemes in Africa

Authors: Bengan Simbarashe Manwere

Abstract:

The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share and participation in the carbon market to limited access to electricity. Approximately 800 million people spread across 47 African countries generate as much power as Spain, with a population of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate the continent's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. By these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at the rate of R120/tCO2e, the tax intentionally compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the period 1 June 2019 - 31 December 2022, during which the tax was meant to escalate at CPI + 2% for Scope 1 emitters. However, in the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies to limit agents' carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the associated carbon trading regulations were only finalised in December 2020. This points to a delay in the establishment of the carbon trading scheme. Relatedly, emitters in South Africa are not yet able to benefit from the 95% reduction in the effective carbon tax rate from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and constructed from the beginning as new, tailor-made systems in countries such as France, Australia, and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the Johannesburg Stock Exchange (JSE) and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. Therefore, the carbon trading scheme will not be constructed from scratch. At the same time, carbon will be treated as a commodity in order to align with the existing institutional and infrastructural capacity. This explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important as South Africa is an innovator in carbon trading, and the successful trading of carbon credits on the JSE will lead to imitation, by early adopters first, followed by the middle majority thereafter.
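The 95% allowance ceiling mentioned above translates directly into the effective tax rate quoted in the abstract (R120/tCO2e reduced to R6/tCO2e). A back-of-the-envelope check, assuming a simple linear allowance model rather than the Act's full formula, illustrates the arithmetic; the emissions figure is purely hypothetical.

```python
# Illustrative check of the effective carbon tax rate under allowances.
# Assumes a simple linear model: effective_rate = headline_rate * (1 - allowance).
headline_rate = 120.0   # R per tCO2e (Carbon Tax Act headline rate)
max_allowance = 0.95    # up to 95% combined allowances (fugitive/process emissions)

effective_rate = headline_rate * (1 - max_allowance)
print(f"Effective rate: R{effective_rate:.0f}/tCO2e")  # -> R6/tCO2e

# Liability for a hypothetical emitter of 1 million tCO2e per year:
emissions_tco2e = 1_000_000
print(f"Annual liability: R{emissions_tco2e * effective_rate:,.0f}")
```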

Keywords: carbon trading scheme (CTS), Johannesburg stock exchange (JSE), carbon tax act 15 of 2019, South Africa

Procedia PDF Downloads 70
662 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates a greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires careful evaluation of the many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that impact performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, as well as an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that could, aggregated across multiple dwellings, provide a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
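To put the reported storage behaviour in context, the sensible heat stored per square metre of a concrete layer follows Q = ρ·c·d·ΔT. The sketch below evaluates this for a 100 mm dense-concrete layer and compares it with the quoted heat-loss fluxes; the material properties and temperature swing are typical textbook assumptions, not values taken from the paper.

```python
# Sensible heat stored per m^2 of a concrete layer: Q = rho * c * d * dT.
# Property values below are typical textbook assumptions, not the paper's data.
rho = 2400.0    # density of dense concrete, kg/m^3 (assumed)
c = 900.0       # specific heat capacity, J/(kg*K) (assumed)
d = 0.100       # layer thickness, m (100 mm, as discussed in the abstract)
dT = 20.0       # charging temperature swing, K (assumed)

Q = rho * c * d * dT          # stored heat in J per m^2 of fabric area
print(f"Stored heat: {Q / 3.6e6:.1f} kWh/m^2")

# Standing loss through the insulation at the quoted heat-loss fluxes:
for q_loss in (3.0, 22.0):    # W/m^2: vacuum insulation vs conventional EPS
    hours = Q / (q_loss * 3600)
    print(f"At {q_loss:>4.0f} W/m^2 the stored heat would leak away in ~{hours:.0f} h")
```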

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 166
661 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area

Authors: Sara Saboonian, Pierre Filion

Abstract:

There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first one, the recentralization approach, proposes intensification, multi-functionality, and more reliance on public transit and walking. It thus offers an alternative to the prevailing low density, spatial specialization, and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space-consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include reviewing the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, surveying residents' preferences regarding alternative suburban development models, and interviewing officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focussed on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been made of the wide range of advantages of green infrastructure, which explains the limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings, and reliance on renewable energy. The paper will advance a planning framework that fuses green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.

Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization

Procedia PDF Downloads 131
660 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide

Authors: Michalina Aniola, Andrzej Katrusiak

Abstract:

4,4-Bipyridine is an important compound often used in chemical practice and, more recently, frequently applied for designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, at ambient pressure is orthorhombic, space group P2₁2₁2₁ (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P2₁/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected up to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallization. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b, the HOH-Br⁻ chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH⁺ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, whereas in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus, the volume drop on the transition to high-pressure phase b depends entirely on the shear strain of the lattice. Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. For the saturated methanol solution of the compound precipitated at 0.1 GPa, a temperature of 423 K was required to dissolve all of the sample, and the subsequent slow recrystallization at isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved at isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and that the free base 44biPy and H₂O remained in the solution. The observed reactions indicate that high pressure destabilizes ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure.

Keywords: conformation, high-pressure, negative area compressibility, polymorphism

Procedia PDF Downloads 246
659 Social Enterprises in India: Conceptualization and Challenges

Authors: Prajakta Khare

Abstract:

A huge number of social enterprises operate in India, across all enterprise sizes and forms, addressing diverse social issues. Some cases, such as Aravind Eye Care, Narayana Hridalaya, and SEWA, have been studied extensively in the management literature and are well-known cases in social entrepreneurship. But there are several smaller social enterprises in India that are not called so per se due to the lack of understanding of the concept. There is a lack of academic research on social entrepreneurship in India, and the term 'social entrepreneurship' is not yet widely known in the country, even by people working in this field, as this study found. The present study aims to identify the most prominent form of social enterprises in India, the profile of the entrepreneurs, the challenges faced, the lessons (theory and practices) emerging from their functioning, and finally the factors contributing to the enterprises' success. This is a preliminary exploratory study using primary data from 30 social enterprises in India. The study used snowball sampling and a qualitative analysis. Data was collected from founders of social enterprises through written structured questionnaires, open-ended interviews, and field visits to enterprises. The sample covered enterprises across sectors such as environment, affordable education, children's rights, rainwater harvesting, and women's empowerment. The interview questions focused on the founder's background and motivation, qualifications, funding, challenges, the founder's understanding and perspectives on social entrepreneurship, government support, and linkages with other organizations, among several others. The interviews were conducted in three languages - Hindi, Marathi, and English - and were then translated and transcribed. 50% of founders were women, and 65% of the total founders were highly qualified, with an MBA, PhD, or MBBS. The most important challenge faced by these entrepreneurs is recruiting skilled people. When asked about their understanding of the terms social enterprise and social entrepreneur, founders had diverse and extremely varied perspectives. Some founders identified the terms with doing something good for society; some thought that every business can be called a social enterprise. 35% of the founders were not aware of the term social entrepreneur/social entrepreneurship. They said that they could identify themselves as social entrepreneurs after discussions with the researcher. The general perception in India is that 'NGOs are corrupt'; fighting against this perception to secure funds is another problem, as pointed out by some founders. There are unique challenges that social entrepreneurs in India face, as the political, social, and economic environment around them is rapidly changing, and getting adequate support from the government is a problem. The research, in its subsequent stages, aims to clarify existing, missing, and new definitions of the term to provide deeper insights into the terminology and issues relating to social entrepreneurship in India.

Keywords: challenges, India, social entrepreneurship, social entrepreneurs

Procedia PDF Downloads 467
658 Analysing the Stability of the Electrical Grid for Increased Renewable Energy Penetration by Focussing on Li-Ion Battery Storage Technology

Authors: Hemendra Singh Rathod

Abstract:

Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has seen a significant increase globally. Consequently, controlling the associated deviations in grid frequency within safe limits has been gaining momentum so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology to tackle the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate over whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very little maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy and thereby maintaining grid frequency with a rapid response. Moving BESS technology, as implemented in a selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, certain applications of Li-ion batteries, such as self-consumption of wind and solar parks or their ancillary services, wind and solar energy storage during low demand, black start, island operation, residential home storage, etc., offer a solution to effectively integrate renewables and support Europe's future smart grid. The EMT software tool DIgSILENT PowerFactory has been utilised to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and flexible, controllable generation for balancing energy pools. Thus, using a Li-ion battery storage solution, frequency-dependent load shedding, i.e., the automatic gradual disconnection of loads from the grid, and frequency-dependent electricity generation, i.e., the automatic gradual connection of BESS to the grid, are used as security measures to maintain grid stability in any scenario. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations.
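The abstract treats BESS units as controllable loads and controllable generation that connect or disconnect gradually as grid frequency deviates. A minimal sketch of such a frequency-dependent response is shown below; the deadband, droop, and rating values are illustrative assumptions, not settings from the DIgSILENT PowerFactory model described in the paper.

```python
def bess_power_setpoint(freq_hz, rated_mw=10.0, nominal_hz=50.0,
                        deadband_hz=0.02, droop_hz=0.2):
    """Frequency-dependent BESS setpoint (illustrative droop characteristic).

    Positive = discharge (acts as generation when frequency is low),
    negative = charge (acts as load when frequency is high).
    The response is zero inside the deadband, grows proportionally outside it,
    and saturates at the rated power.
    """
    deviation = nominal_hz - freq_hz
    if abs(deviation) <= deadband_hz:
        return 0.0
    response = (abs(deviation) - deadband_hz) / droop_hz * rated_mw
    response = min(response, rated_mw)
    return response if deviation > 0 else -response

for f in (50.00, 49.95, 49.80, 50.10):
    print(f"{f:.2f} Hz -> {bess_power_setpoint(f):+.1f} MW")
```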

Keywords: frequency control, grid stability, li-ion battery storage, smart grid

Procedia PDF Downloads 150
657 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality whose purpose is to predict future behavior from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is due to soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, as well as its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be designed, constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, declivity, aspect or orientation of the slope, curvature of the slope, composite topographic index, flow power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste - Rondônia through the SPSS Statistics 26 software. Evaluation of the model will occur through the determination of the Cox & Snell R², the Nagelkerke R², the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, ROC curve, and cumulative gain according to the model specification. The validation of the synthesis map resulting from the models of potential soil erosion risk will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, so that social resources can be applied more efficiently.
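The workflow described above (100 erosion and 100 non-erosion sampling units, binary logistic regression, ROC and confusion-matrix evaluation) can be sketched compactly. The study uses SPSS Statistics 26, so the scikit-learn code below is only an illustrative stand-in with synthetic predictors; variable names and values are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in for the 200 sampling units (100 erosion, 100 non-erosion)
# with a few predictors of the kind named in the abstract (slope, NDVI, ...).
n = 200
X = rng.normal(size=(n, 4))                  # e.g. slope, NDVI, drainage density, erosivity
y = np.r_[np.ones(100, dtype=int), np.zeros(100, dtype=int)]  # 1 = erosion present
X[y == 1, 0] += 1.0                          # make the "slope" column informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression().fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]       # susceptibility scores in [0, 1]
print("ROC AUC:", round(roc_auc_score(y_te, prob), 3))
print(confusion_matrix(y_te, (prob > 0.5).astype(int)))  # the abstract's confusion matrix
```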

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 66
656 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall

Authors: Sylvain Perrin, Thomas Saillour

Abstract:

The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA's hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and water and sand overtopping discharges over the wall. The beach consisted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes were in the range of 0.5% offshore to 1.5% closer to the beach. The existing concrete seawall will be replaced by a smooth concrete structure with an elevated curved crown wall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm, respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo / piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1,400 kg/m3 and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive pressure spikes of up to 127 kilonewtons were measured. A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is attributable to increased wave reflection (the reflection coefficient was 25.7-30.4% vs 23.4-28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of such a methodology versus others, notably regarding structural peculiarities and model effects.
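Converting between the 1:18 model and the prototype relies on similitude relations; for free-surface wave models these are usually Froude scaling laws (lengths scale with λ, times with √λ, forces with λ³ when fluid density is matched). The paper does not state its scaling laws explicitly, so the sketch below applies the standard Froude relations as an assumption, with purely hypothetical model-side measurements.

```python
import math

LAMBDA = 18.0  # geometric scale factor (prototype / model), per the abstract

# Standard Froude similitude ratios (assumed; fluid density ratio taken as 1).
length_ratio = LAMBDA
time_ratio = math.sqrt(LAMBDA)
force_ratio = LAMBDA ** 3       # forces scale with the length ratio cubed

# Hypothetical model-side measurements (not values from the paper):
model_wave_height_m = 0.15
model_force_n = 45.0

print(f"Prototype wave height: {model_wave_height_m * length_ratio:.2f} m")
print(f"Prototype force:       {model_force_n * force_ratio / 1e3:.0f} kN")
print(f"A 10 s prototype wave period is {10.0 / time_ratio:.2f} s in the model")
```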

Keywords: beach, impacts, scour, seawall, waves

Procedia PDF Downloads 153
655 Wind Load Reduction Effect of Exterior Porous Skin on Facade Performance

Authors: Ying-Chang Yu, Yuan-Lung Lo

Abstract:

Building envelope design is one of the most popular design fields in the architectural profession nowadays. The main design trend for such systems is to highlight the designer's aesthetic intention in the outward appearance of the building project. Following current façade design trends, the building envelope contains more and more layers of components, such as double-skin façades, photovoltaic panels, solar control systems, or even ornamental components. These exterior components are designed for various functional purposes. Most researchers focus on how these exterior elements should be structurally secured. However, not many researchers consider whether these elements could help to improve the performance of the façade system. When the exterior elements are deployed at large scale, they create an additional layer outside the original façade system and act like a porous interface which interferes with the aerodynamics of the façade surface at the micro-scale. Standard façade performance consists of water penetration, air infiltration rate, operation force, and component deflection ratio, and these key performances are largely driven by the design wind load prescribed in local regulations. A design wind load is usually determined by the maximum wind pressure which occurs on the surface due to the geometry or location of the building in extreme conditions. This research was designed to identify the air damping phenomenon of micro-turbulence caused by a porous exterior layer, which leads to a reduction of the surface wind load and an improvement of façade system performance. A series of wind tunnel tests on a dynamic pressure sensor array covered by porous exterior skins of various scales was conducted to verify the wind pressure reduction effect. The test specimens were designed to simulate a typical building with a two-meter extension offset from the building surface. Multiple porous exterior skins were prepared to replicate various surface opening ratios, which may cause different levels of damping effect. This research adopted Pitot static tubes, thermal anemometers, and hot film probes to collect surface dynamic pressure data behind the porous skin. Turbulence and distributed resistance are the two main aerodynamic factors that reduce the actual wind pressure. From initial observations, the surface wind pressure readings were effectively reduced behind the porous media. In such cases, an actual building envelope system may benefit from a porous skin through the reduction of surface wind pressure, which may consequently improve the performance of the envelope system.

Keywords: multi-layer facade, porous media, facade performance, turbulence and distributed resistance, wind tunnel test

Procedia PDF Downloads 220
654 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD): the percentages of clay (0-2 µm), silt (2-50 µm), and sand (50-2000 µm), in 15 classes. The PSD can also be determined by lab measurements (LAST) to obtain the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. Therefore, an essential question arises: is it possible to use HFST as a proxy of LAST for calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both kinds of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km2). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations with LAST values while respecting the previously calculated PDF, and it results in a very large increase in the number of observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. Then we show some examples applying these relationships and the enlarged dataset to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDF (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture over the region, we did the same kind of replacement, but we implemented a 10-fold cross-validation using points where we had LAST values. We obtained only preliminary results, but they were rather promising. Then we show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products in all the countries of the world.
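The replacement step described above, drawing a lab-measured texture value for each hand-feel observation from its class-conditional distribution, can be sketched as below. The class statistics used here are made-up placeholders; in the study they come from the ~17,400 paired HFST/LAST samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder class-conditional statistics (mean, sd of clay %) for a few
# hand-feel texture classes; the real PDFs come from the paired HFST/LAST set.
clay_pdf_by_class = {
    "sandy loam": (12.0, 3.0),
    "loam":       (20.0, 4.0),
    "clay loam":  (32.0, 5.0),
}

def draw_last_clay(hfst_class, n_replicates=100):
    """Draw n LAST-like clay values for one HFST observation from its class PDF."""
    mean, sd = clay_pdf_by_class[hfst_class]
    # Clip to the physically meaningful 0-100 % range.
    return np.clip(rng.normal(mean, sd, n_replicates), 0, 100)

# One field observation recorded only as a hand-feel class:
replicates = draw_last_clay("clay loam")
print(f"clay loam -> simulated clay %: mean {replicates.mean():.1f}, "
      f"range {replicates.min():.1f}-{replicates.max():.1f}")
```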

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 225
653 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students

Authors: Majed A. Alsalem

Abstract:

In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers' main job is to increase students' performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy, from a wider view, means the acquisition of adequate and relevant reading skills that enable progression in one's career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially to those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with high-quality training sessions to meet the needs of DHH students so as to increase their literacy levels. This study will build a platform connecting regular instructional designs and digital materials that students can interact with. The intervention applied in this study is to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. This study will choose teachers randomly to increase internal and external validity, to provide a representative sample of the population that this study aims to measure, and to provide a base for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.

Keywords: deaf and hard of hearing, digital books, literacy, technology

Procedia PDF Downloads 490
652 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
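The entropy bookkeeping described above, computing system entropy from the distribution of ants over the cells of the world, can be illustrated with a short calculation. This is a generic Shannon-entropy sketch over a grid occupancy histogram, not the authors' NetLogo/Python code; the grid size and ant counts are illustrative.

```python
import numpy as np

def distribution_entropy(cell_counts):
    """Shannon entropy (in bits) of the ants' distribution over grid cells."""
    counts = np.asarray(cell_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # empty cells contribute nothing (0 * log 0 := 0)
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)

# Early in the run: 200 ants spread almost uniformly over a 20x20 world (400 cells).
uniform = np.bincount(rng.integers(0, 400, size=200), minlength=400)

# Later: the same 200 ants concentrated on a narrow path of ~20 cells.
on_path = np.bincount(rng.integers(0, 20, size=200), minlength=400)

print(f"Dispersed ants: {distribution_entropy(uniform):.2f} bits")
print(f"Path formed:    {distribution_entropy(on_path):.2f} bits (lower = more ordered)")
```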

Keywords: complexity, self-organization, agent based modelling, efficiency

Procedia PDF Downloads 68
651 Pro-Environmental Behavioral Intention of Mountain Hikers Based on the Theory of Planned Behavior

Authors: Mohammad Ehsani, Iman Zarei, Soudabeh Moazemigoudarzi

Abstract:

The aim of this study is to determine the pro-environmental behavioral intention of mountain hikers based on the theory of planned behavior. According to many researchers, nature-based recreation activities play a significant role in the tourism industry and have provided myriad opportunities for the protection of natural areas. It is essential to investigate individuals' behavior during such activities to avoid further damage to precious and dwindling natural resources. This study develops a robust model that provides a comprehensive understanding of the formation of pro-environmental behavioral intentions among climbers of Mount Damavand National Park in Iran. To this end, we combined the theory of planned behavior (TPB), value-belief-norm theory (VBN), and a hierarchical model of leisure constraints to predict individuals' pro-environmental hiking behavior during outdoor recreation. Structural equation modeling was used to test the theoretical framework. A sample of 787 climbers was analyzed. Among the theory of planned behavior variables, perceived behavioral control showed the strongest association with behavioral intention (β = .57). This relationship indicates that if people feel they can have fewer negative impacts on natural resources while hiking, it will result in more environmentally acceptable behavior. Subjective norms had a moderate positive impact on behavioral intention, indicating the influence of other people on an individual's behavior. Attitude had a small positive effect on intention. Ecological worldview positively influenced attitude and personal belief. Personal belief (awareness of consequences and ascribed responsibility) showed a positive association with the TPB variables. Although the data showed a high average score for awareness of consequences (mean = 4.219 out of 5), evidence from Mount Damavand shows that there are many environmental issues that need addressing (e.g., vast amounts of garbage). National park managers need to make sure that their solutions result in awareness about pro-environmental behavior (PEB). Findings showed a negative relationship between constraints and all TPB predictors. Providing proper restrooms and parking spaces in campgrounds, strategies for controlling and limiting capacity, and solutions for removing waste from high altitudes are helpful to decrease the negative impact of structural constraints. In order to address intrapersonal constraints, managers should provide opportunities to interest individuals in environmental activities, such as environmental celebrations or making documentaries about environmental issues. Moreover, promoting a culture of environmental protection in the Mount Damavand area would reduce interpersonal constraints. Overall, the proposed model improved the explanatory power of the TPB by predicting 64.7% of the variance in intention, compared to the original TPB, which accounted for 63.8% of the variance in intention.

Keywords: theory of planned behavior, pro-environmental behavior, national park, constraints

Procedia PDF Downloads 94
650 Looking at Women's Status in India through Different Lenses: Evidence from the Second Wave of IHDS Data

Authors: Vidya Yadav

Abstract:

In every society, males and females are expected to behave in certain ways, and in every culture, those expectations, values, and norms are different and vary accordingly. Many of the inequalities between men and women are rooted in institutional structures such as education, the labour market, wages, decision-making power, and access to services, as well as access to health and well-being care. Marriage and kinship patterns shape both men's and women's lives. Many earlier studies have highlighted gender disparities, which vary tremendously between regions, social classes, and communities. This study explores prominent indicators of the status and well-being of women in Indian society. Primarily, this paper is concerned first with the identification of gender-related indicators in areas such as education, work status, mobility, women's participation in public and private decision-making, autonomy, and domestic violence. Once the indicators are identified, the next task is to define them. The indicators selected here are for a comparison of women's status across Indian states. The recent Indian Human Development Survey (IHDS), 2011-12, has been used to show the current situation of women. Results show that, in spite of rising levels of education and images of growing westernization in India, love marriages remain a rarity, even among the urban elite. In India, marriage is universal, and most men and women marry at a relatively young age. Even though the legal age of marriage is 18, more than 60 percent are married before the legal age. Not surprisingly, Bihar and Rajasthan are the states with the earliest age at marriage. Most women reported that they had very limited contact with their husband before marriage. Around 69 percent of women met their husbands on the day of the wedding or shortly before. In spite of the decline in fertility, childbearing remains essential to women's lives. Most women aged 25 and older had at least one child. Women's control over household resources, physical space, and mobility is also limited. Indian women mostly rely on men to purchase day-to-day necessities, as well as medicines and other necessary items. This ultimately reduces the likelihood that women have cash in hand for such purchases. The story is quite different when it comes to having control over decisions on purchasing household assets such as TVs or refrigerators, having names on the bank account, and holding home ownership papers; however, the likelihood of ownership rises among educated urban women. Women are still bound by cultural norms and the practice of purdah or ghunghat, and by familial control over their physical movement. Wife beating and domestic violence remain pervasive, and women are beaten for minor transgressions like going out without permission. The development of India cannot be realized without the very significant component of gender. Therefore, detailed examination of different indicators is required to understand, strategize, plan, and formulate programmes.

Keywords: autonomy, empowerment, gender, violence

Procedia PDF Downloads 297
649 Analyzing Concrete Structures by Using Laser-Induced Breakdown Spectroscopy

Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther

Abstract:

Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy, which in principle can simultaneously analyze all elements of the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient, and minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a continuously growing problem. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications to accomplish a successful examination of the condition of concrete structures. A selection of those applications is the 2D evaluation of chlorine, sodium, and sulfur concentrations, the identification of carbonation depths, and the representation of the heterogeneity of concrete. LIBS obtains this information by using a pulsed laser with short pulses and pulse energies of a few mJ, which is focused on the surface of the analyzed specimen; only optical access is needed. Because of the high power density (several GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material. By analyzing the emitted light, information for every measurement point is gained. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions of up to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen for chloride concentration results provided by chemical analysis such as potentiometric titration. However, the 2D visualization offers many advantages, such as illustrating chlorine-carrying cracks, directly imaging the carbonation depth, and, in general, allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but also quantitative results can be obtained. Those quantitative results can also be referenced to the cement paste, excluding the aggregates. An additional advantage of LIBS is its mobility. By using the mobile system located at BAM, on-site measurements are feasible. The mobile LIBS system has already been used to obtain chloride, sodium, and sulfur concentrations on site at parking decks, bridges, and sewage treatment plants, even under hard conditions like ongoing construction work or rough weather. All these prospects make LIBS a promising method to secure the integrity of infrastructure in a sustainable manner.
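The conversion mentioned above, collapsing a calibrated 2D chlorine map into a classic depth profile comparable to potentiometric titration results, amounts to averaging the map along the direction parallel to the exposed surface. A minimal sketch follows, with a synthetic map standing in for real LIBS data and a hypothetical critical chloride content used only to illustrate the read-out.

```python
import numpy as np

# Synthetic stand-in for a calibrated 2D chloride map (wt.% of cement paste).
# Rows = depth steps from the exposed surface, columns = lateral positions;
# with a 0.1 mm x 0.1 mm grid, 300 rows cover 30 mm of depth.
rng = np.random.default_rng(7)
depth_mm = np.arange(300) * 0.1
chloride_map = 0.8 * np.exp(-depth_mm / 8.0)[:, None] + rng.normal(0, 0.02, (300, 200))

# Classic depth profile: average each depth row over all lateral positions.
profile = chloride_map.mean(axis=1)

# Depth at which the profile drops below a hypothetical critical content of 0.4 wt.%:
critical = 0.4
idx = np.argmax(profile < critical)
print(f"Chloride falls below {critical} wt.% at ~{depth_mm[idx]:.1f} mm depth")
```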

Keywords: concrete, damage assessment, harmful substances, LIBS

Procedia PDF Downloads 176
648 Modeling Atmospheric Correction for Global Navigation Satellite System Signals to Improve Urban Cadastre 3D Positional Accuracy: The Case of TANA and ADIS IGS Stations

Authors: Asmamaw Yehun

Abstract:

TANA is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at Bahir Dar University, in the Institute of Land Administration. The station name is taken from one of the big lakes in Africa, Lake Tana. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in the capital of the Amhara National Regional State, Bahir Dar. The institute is the first of its kind in East Africa. The station was installed through the cooperation of ILA and fund support from the Swedish International Development Agency (SIDA). A Continuously Operating Reference Station (CORS) network is a network of stations that provide global navigation satellite system data to support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS station since 2013; such sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station are downloadable through the Internet for post-processing purposes by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made a first observation at the TANA monitoring station on May 29th, 2013. We used Leica 1200 receivers and AX1202GG antennas and made observations from 11:30 until 15:20, for about 3 h 50 min. Processing of the data was done in the automatic post-processing service CSRS-PPP by Natural Resources Canada (NRCan). Post-processing was done on June 27th, 2013, so the precise ephemeris, available 30 days after observation, was used. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m), and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data, and it was very close and accurate. TANA has been the second IGS station in Ethiopia since 2015. It provides data for civilian users, researchers, and governmental and non-governmental users. The TANA station is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site has very good satellite accessibility. In order to test hydrostatic and wet zenith delays for positional data quality, we used GAMIT/GLOBK, and we found that TANA is the most accurate IGS station in East Africa. Due to lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.
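The troposphere term examined above is commonly split into hydrostatic and wet zenith delays. As a rough illustration of the hydrostatic part only, the widely used Saastamoinen formula is sketched below; the surface pressure, latitude, and height values are assumptions for a Bahir Dar-like site, not numbers from the study.

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_km):
    """Saastamoinen zenith hydrostatic delay in metres (standard model)."""
    lat = math.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat) - 0.00028 * height_km
    )

# Assumed values for a Bahir Dar-like site (~1850 m height, latitude ~11.57 N);
# the surface pressure is a rough estimate for that altitude, not a measured value.
zhd = zenith_hydrostatic_delay(pressure_hpa=815.0, lat_deg=11.57, height_km=1.85)
print(f"Zenith hydrostatic delay ~ {zhd:.2f} m")
```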

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 70
647 Compositional Influence in the Photovoltaic Properties of Dual Ion Beam Sputtered Cu₂ZnSn(S,Se)₄ Thin Films

Authors: Brajendra S. Sengar, Vivek Garg, Gaurav Siddharth, Nisheka Anadkat, Amitesh Kumar, Shaibal Mukherjee

Abstract:

The optimal band gap (~1 to 1.5 eV) and high absorption coefficient of ~10⁴ cm⁻¹ have made Cu₂ZnSn(S,Se)₄ (CZTSSe) films one of the most promising absorber materials in thin-film photovoltaics. Additionally, CZTSSe consists of elements that are abundant and non-toxic, which makes it even more favourable. The CZTSSe thin films were grown at substrate temperatures (Tsub) of 100 to 500°C on soda lime glass (SLG) substrates in an Elettrorava dual ion beam sputtering (DIBS) system, utilizing a target at a working pressure of 2.43x10⁻⁴ mbar with an RF power of 45 W in an argon ambient. The chemical composition, depth profiles, structural properties, and optical properties of these CZTSSe thin films prepared on SLG were examined by energy dispersive X-ray spectroscopy (EDX, Oxford Instruments), a Hiden secondary ion mass spectrometry (SIMS) workstation with an oxygen ion gun of energy up to 5 keV, X-ray diffraction (XRD) (Rigaku, Cu Kα radiation, λ = 0.154 nm), and spectroscopic ellipsometry (SE, M-2000D from J. A. Woollam Co., Inc). It is observed that the thin films deposited at Tsub = 200 and 300°C show Cu-poor and Zn-rich states (i.e., Cu/(Zn + Sn) < 1 and Zn/Sn > 1), which is not the case for films grown at other Tsub. It has been reported that the CZTSSe thin films with the highest efficiency are typically in Cu-poor and Zn-rich states. The values of the band gap in the fundamental absorption region of CZTSSe are found to be in the range of 1.23-1.70 eV, depending upon the Cu/(Zn+Sn) ratio. It is also observed that the optical band gap declines with an increase in the Cu/(Zn+Sn) ratio (evaluated from EDX measurements). Cu-poor films are found to have a higher optical band gap than Cu-rich films. The decrease in the band gap with the increase in Cu content in CZTSSe films may be attributed to changes in the extent of p-d hybridization between Cu d-levels and (S, Se) p-levels. CZTSSe thin films with Cu/(Zn+Sn) ratios in the range 0.86-1.5 have been successfully deposited using DIBS. The optical band gap of the films is found to vary from 1.23 to 1.70 eV based on the Cu/(Zn+Sn) ratio. CZTSSe films with a Cu/(Zn+Sn) ratio of 0.86 are found to have an optical band gap close to the ideal band gap (1.49 eV) for the highest theoretical conversion efficiency. Thus, by tailoring the value of Cu/(Zn+Sn), CZTSSe thin films with the desired band gap could be obtained. Acknowledgment: We are thankful for the DIBS, EDX, and XRD facilities equipped at the Sophisticated Instrument Centre (SIC) at IIT Indore. The authors B. S. S. and A. K. acknowledge CSIR, and V. G. acknowledges UGC, India, for their fellowships. B. S. S. is thankful to DST and IUSSTF for the BASE Internship Award. Prof. Shaibal Mukherjee is thankful to DST and IUSSTF for the BASE Fellowship and the MEITY YFRF award. This work is partially supported by DAE BRNS, DST CERI, and a DST-RFBR Project under the India-Russia Programme of Cooperation in Science and Technology. We are thankful to Mukul Gupta for the SIMS facility equipped at UGC-DAE Indore.
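The Cu-poor/Zn-rich criterion quoted above (Cu/(Zn + Sn) < 1 and Zn/Sn > 1) is evaluated from the EDX atomic percentages. A trivial sketch of that check, using made-up at.% values rather than the paper's measurements:

```python
def cation_ratios(cu_at, zn_at, sn_at):
    """Return the Cu/(Zn+Sn) and Zn/Sn ratios used to judge Cu-poor/Zn-rich growth."""
    return cu_at / (zn_at + sn_at), zn_at / sn_at

# Hypothetical EDX atomic percentages for one film (not values from the paper).
cu_zn_sn, zn_sn = cation_ratios(cu_at=22.0, zn_at=13.5, sn_at=12.0)
print(f"Cu/(Zn+Sn) = {cu_zn_sn:.2f}, Zn/Sn = {zn_sn:.2f}")
print("Cu-poor and Zn-rich" if cu_zn_sn < 1 and zn_sn > 1 else "outside target window")
```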

Keywords: CZTSSe, DIBS, EDX, solar cell

Procedia PDF Downloads 250
646 Urban Planning Patterns after (COVID-19): An Assessment toward Resiliency

Authors: Mohammed AL-Hasani

Abstract:

The COVID-19 pandemic altered daily habits and affected the functional performance of cities, leaving remarkable impacts on many metropolises worldwide. It became obvious that higher densification in the city brings additional threats, challenging an approach long advocated for achieving sustainable development. The main goal of achieving resiliency in cities, especially when facing risks, is to operate a planning system that is able to resist, absorb, accommodate, and recover from the impacts it suffers. Cities such as London, Wuhan, and New York, among others worldwide, adopted different planning approaches and varied in their reactions to safeguard against the impacts of the pandemic. Cities globally range from the radiant pattern envisioned by Le Corbusier, to multi-centre arrangements closer to Frank Lloyd Wright's Broadacre City, to the linear growth or gridiron expansion common in Doxiadis's work, to compact patterns and many other hygienic patterns. These urban patterns shape the spatial distribution and identify both open and natural spaces together with gentrified and gentrifying areas. The crisis drew attention to reassessing many planning approaches and examining existing urban patterns, with greater focus on continuity and resiliency in managing crises under rapid transformation and the power of market forces. Accordingly, this paper hypothesizes that urban planning patterns determine how a city can react to ensure quarantine for its inhabitants and maintain the performance of public services, and that they need to be updated by carrying out an innovative urban management system and adopting further resilience patterns in prospective urban planning approaches. This paper investigates the adaptivity and resiliency of variant urban planning patterns in selected cities worldwide that were affected by COVID-19, and their role in applying certain management strategies to control the pandemic's spread, identifying the main potentials that should be included in prospective planning approaches. The examination encompasses the spatial arrangement, block definition, plot arrangement, and urban space typologies. The paper also aims to use the investigated urban patterns to deliberate on the debate between densification, as one of the more sustainable planning approaches, and the disaggregation tendency followed after the pandemic, by restructuring and managing its application according to the assessment of spatial distribution and urban patterns. The biggest long-term challenge for dense cities is the demonstrated shift to online working and telecommuting, which creates a mixture of cyber and urban spaces to remobilize the city. Reassessing spatial design and growth, open spaces, urban population density, and public awareness are the main measures that should be carried out to face the outbreak in our current cities, which should be managed from the global down to the tertiary level and could yield criteria for designing prospective cities.

Keywords: COVID-19, densification, resiliency, urban patterns

Procedia PDF Downloads 130
645 Engaging Women Entrepreneurs in School Adolescent Health Program to Ensure Menstrual Hygiene Management in Rural Bangladesh

Authors: Toslim Uddin Khan, Jesmin Akter, Mohiuddin Ahmed

Abstract:

Menstrual hygiene management (MHM) and personal healthcare practice are critical issues for preventing morbidity and other reproductive health complications among adolescent girls in Bangladesh. Inadequate access to water, sanitation, and hygiene (WASH) facilities leads to unhealthy MHM practices, which result in poor reproductive health outcomes. It is evident from different studies that superstitions and misconceptions are more common in rural communities, limiting young girls' access to and understanding of menstrual hygiene and self-care practices. The state-of-the-art approach of the Social Marketing Company (SMC) has proved instrumental in delivering reinforcing health messages and making public health and hygiene products available at the doorsteps of the community through community mobilization programs in rural Bangladesh. The school health program is one of SMC's flagship interventions to equip adolescent girls and boys with correct knowledge of health and hygiene practices for themselves, their families, and their peers. In Bangladeshi culture, adolescent girls often feel too shy to ask fathers or male family members to buy sanitary napkins from the local pharmacy, and they tend to be reluctant to seek help for their menstrual problems. A recent study reveals that 48% of adolescent girls use sanitary napkins, while the majority are unaware of menstrual hygiene practices in Bangladesh. Under the school adolescent program, SMC organizes health education sessions for adolescent girls from grades seven to ten using an enter-educate approach, with special focus on sexual and reproductive health and menstrual hygiene issues, including delaying marriage and first pregnancy. In addition, 2500 rural women entrepreneurs, branded as community sales agents, are involved in disseminating health messages and selling priority health products, including sanitary napkins, at the household level. These women entrepreneurs serve as a sustainable source of supply of sanitary napkins for rural adolescent girls and thereby earn profit margins on the sales they make. A recent study on the impact of the adolescent program activities reveals that the majority (71%) of school adolescent girls currently use sanitary napkins. Health education equips and empowers adolescent girls with accurate knowledge about menstrual hygiene practices and self-care. Therefore, engaging female entrepreneurs in the school adolescent health program at the community level is one of the promising ways to improve menstrual hygiene practices, leading to increased use of sanitary napkins in rural and semi-rural communities in Bangladesh.

Keywords: school adolescent program, social marketing, women entrepreneurs, menstrual hygiene management

Procedia PDF Downloads 198
644 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to cracking of reinforced concrete (RC) due to corrosion are reviewed. This review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend on the mechanical behaviour, chemical behaviour, electrochemical behaviour, or geometric aspects of RC members during the corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected from a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model has accuracies of ±20% and ±10% against the experimental output for one-dimensional and two-dimensional chloride diffusion, respectively. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrosion of the reinforcement in the concrete due to chloride diffusion. The factors considered are w/c, bar diameter, and cover depth. The analysis, performed with the Minitab statistical software, showed that cover depth has a more significant effect on the time to cracking from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made using the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to pre-assess the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
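
For context only, the Python sketch below estimates the chloride-induced corrosion initiation time from Fick's second law using the standard error-function solution; this is a generic textbook relation with hypothetical input values, not the specific prediction model validated in the paper, and it illustrates in a simple way why cover depth dominates the result.

```python
from scipy.special import erfinv

def initiation_time_years(cover_mm, d_cm2_per_s, c_s, c_cr):
    """
    Time for the chloride content at the rebar depth to reach the critical
    threshold, from the erf solution of Fick's second law:
        C(x, t) = C_s * (1 - erf(x / (2 * sqrt(D * t))))
    """
    x = cover_mm / 10.0                      # cover depth [cm]
    z = erfinv(1.0 - c_cr / c_s)             # dimensionless argument
    t_seconds = x**2 / (4.0 * d_cm2_per_s * z**2)
    return t_seconds / (365.25 * 24 * 3600)

# Hypothetical inputs: surface and critical chloride contents (% by mass of
# binder) and an apparent diffusion coefficient plausible for a 0.5 w/c mix.
for cover in (25.0, 50.0):
    t = initiation_time_years(cover, d_cm2_per_s=5.0e-8, c_s=0.8, c_cr=0.4)
    print(f"cover = {cover:4.0f} mm  ->  initiation time ~ {t:5.1f} years")
```

Because the cover depth enters the expression squared, doubling the cover roughly quadruples the initiation time, which is consistent with the statistical finding that cover depth is the most influential factor.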

Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion

Procedia PDF Downloads 218
643 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete

Authors: D. Falliano, G. Ricciardi, E. Gugliandolo

Abstract:

Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand, and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized as follows: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous for refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness, due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEA) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that a lightweight product cannot be extruded. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for patenting. In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be exceeded: the elimination of formworks and an expanded application spectrum, because extrusion is possible over densities ranging between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural construction elements. Moreover, this contribution presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density, and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete.
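
As a purely illustrative aside, with assumed mix quantities rather than the authors' mix design, the sketch below estimates the air-free theoretical density of a base mix and the gravimetric plastic air content obtained by comparing it with a measured plastic density of the foamed batch.

```python
def theoretical_density(components):
    """Air-free density [kg/m3] of the base mix: total mass / total absolute volume."""
    total_mass = sum(m for m, _ in components.values())
    total_volume = sum(m / rho for m, rho in components.values())
    return total_mass / total_volume

# Hypothetical base mix per batch: (mass [kg], particle/fluid density [kg/m3])
mix = {
    "cement":    (50.0, 3100.0),
    "fine sand": (50.0, 2650.0),
    "water":     (25.0, 1000.0),
}

t_density = theoretical_density(mix)          # air-free theoretical density
measured_plastic_density = 800.0              # assumed value after foam addition [kg/m3]

air_content = (t_density - measured_plastic_density) / t_density * 100.0
print(f"theoretical density ~ {t_density:.0f} kg/m3")
print(f"gravimetric plastic air content ~ {air_content:.1f} %")
```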

Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump

Procedia PDF Downloads 174
642 The Sustained Utility of Japan's Human Security Policy

Authors: Maria Thaemar Tana

Abstract:

The paper examines the policy and practice of Japan's human security. Specifically, it asks the question: How does Japan's shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of recent developments in Japan's security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand and actual foreign policy on the other. State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to dictate the exact details of state conduct. Internal factors such as leaders' perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan's sustained use of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures acting indirectly (independent variables) and domestic processes acting directly (intervening variables). Two cases of Japan's human security practice in two regions and two time periods are examined: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan's decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels are examined to understand how their interaction results in changes and continuities in Japan's human security policy.

Keywords: human security, foreign policy, neoclassical realism, peace-building

Procedia PDF Downloads 133
641 Modeling Landscape Performance: Evaluating the Performance Benefits of the Olmsted Brothers’ Proposed Parkway Designs for Los Angeles

Authors: Aaron Liggett

Abstract:

This research focuses on the visionary proposal made by the Olmsted Brothers landscape architecture firm in the 1920s for a network of interconnected parkways in Los Angeles. The envisioned parkways aimed to address environmental and cultural strains by providing green space for recreation, wildlife habitat, and stormwater management while serving as multimodal transportation routes. Although the parkways were never constructed, this research, through an evidence-based approach, presents a framework for evaluating their potential functionality and success by modeling and visualizing their quantitative and qualitative landscape performance and benefits. Historical documents and innovative digital modeling tools are used to produce detailed analysis, modeling, and visualization of the parkway designs. A set of 1928 construction documents is used to analyze and interpret the design intent of the parkways. Grading plans are digitized in CAD and modeled in SketchUp to produce 3D visualizations of the parkway. Drainage plans are digitized to model stormwater performance. Planting plans are analyzed to model urban forestry and biodiversity. The EPA's Storm Water Management Model (SWMM) is used to predict runoff quantity and quality, and USDA Forest Service tools are used to evaluate carbon sequestration and air quality. Spatial and overlay analysis techniques are employed to assess urban connectivity and the spatial impacts of the parkway designs. The study reveals how the integration of blue, green, and transportation infrastructure within the parkway design creates a multifunctional landscape capable of offering alternative spatial and temporal uses. The analysis demonstrates the potential for multiple functional, ecological, aesthetic, and social benefits to be derived from the proposed parkways. The analysis of the Olmsted Brothers' proposed Los Angeles parkways, which predated contemporary ecological design and resiliency practices, demonstrates the potential for providing such benefits within urban designs. The findings highlight the importance of integrated blue, green, and transportation infrastructure in creating a multifunctional landscape that serves multiple purposes simultaneously. The research contributes new methods for modeling and visualizing landscape performance benefits, providing insights and techniques for informing future designs and sustainable development strategies.
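
To illustrate the kind of screening-level stormwater estimate that such performance modeling builds on, the sketch below applies the SCS curve-number runoff equation to a hypothetical parkway corridor; it is a simplified stand-in for, not a reproduction of, the EPA SWMM analysis described above, and the curve numbers and design storm are assumed values.

```python
def scs_runoff_depth(precip_in, curve_number):
    """Direct runoff depth [in] from the SCS curve-number method."""
    s = 1000.0 / curve_number - 10.0          # potential maximum retention [in]
    ia = 0.2 * s                              # initial abstraction [in]
    if precip_in <= ia:
        return 0.0
    return (precip_in - ia) ** 2 / (precip_in + 0.8 * s)

# Hypothetical comparison: a vegetated parkway corridor vs. a paved roadway
# for a 2-inch design storm (curve numbers assumed, not taken from the study).
storm = 2.0
for label, cn in [("parkway (open space, good condition)", 61),
                  ("conventional paved corridor", 98)]:
    q = scs_runoff_depth(storm, cn)
    print(f"{label:38s} CN={cn:3d}  runoff ~ {q:.2f} in")
```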

Keywords: landscape architecture, ecological urban design, greenway, landscape performance

Procedia PDF Downloads 130
640 Ventilator Associated Pneumonia in a Medical Intensive Care Unit, Incidence and Risk Factors: A Case Control Study

Authors: Ammar Asma, Bouafia Nabiha, Ben Cheikh Asma, Ezzi Olfa, Mahjoub Mohamed, Sma Nesrine, Chouchène Imed, Boussarsar Hamadi, Njah Mansour

Abstract:

Background: Ventilator-associated pneumonia (VAP) is currently recognized as one of the most relevant causes of morbidity and mortality among intensive care unit (ICU) patients worldwide. Identifying modifiable risk factors for VAP could be helpful for future controlled interventional studies aiming at improving the prevention of VAP. The purposes of this study were to determine the incidence and risk factors of VAP in a Tunisian medical ICU. Materials/Methods: A retrospective case-control study was designed, based on a prospective database collected over a 14-month period from September 15th, 2015 through November 15th, 2016 in an 8-bed medical ICU. Patients under ventilation for over 48 h were included. The number of cases was estimated with the Epi Info software with a statistical power of 90%. Each case patient was matched to two controls according to the length of mechanical ventilation (MV) before VAP for cases and the total length of MV for controls. VAP in the ICU was defined according to the American Thoracic Society / Infectious Diseases Society of America guidelines. Early-onset and late-onset VAP were defined according to whether the infectious process occurred within or after 96 h of ICU admission. Patients' risk factors, causes of admission, comorbidities, and respiratory specimens collected were reviewed. Univariate and multivariate analyses were performed to determine variables associated with VAP with a p-value < 0.05. Results: During the study period, a total of 169 patients under mechanical ventilation were considered, and 34 patients (20.11%) developed at least one episode of VAP in the ICU. The incidence rate of VAP was 14.88 per 1000 ventilation days. Among these cases, 9 (26.5%) were early-onset and 25 (73.5%) were late-onset VAP. The diagnosis was certain in 66.7% of cases. Tracheal aspiration was positive in 80% of cases. Multidrug-resistant Acinetobacter baumannii was the most common species detected in cases, 67.64% (n=23). The mortality rate among cases was 88.23% (n=30). In univariate analysis, patients with VAP were statistically more likely to suffer from cardiovascular diseases (p=0.035), prolonged duration of sedation (p=0.009), and tracheostomy (p=0.001); they also had a higher number of re-intubations (p=0.017) and a longer total time of intubation (p=0.012). Multivariate analysis showed that cardiovascular diseases (OR=4.44; 95% CI=[1.3-14]; p=0.016), tracheostomy (OR=4.2; 95% CI=[1.16-15.12]; p=0.028), and prolonged duration of sedation (OR=1.21; 95% CI=[1.07-1.36]; p=0.002) were independent risk factors for the development of VAP. Conclusion: VAP constitutes a therapeutic challenge in the ICU setting; therefore, strategies that effectively prevent VAP are needed. An infection-control training program for all healthcare professionals in this unit, emphasizing care bundles and the development of procedures, is planned to effectively reduce the incidence rate of VAP.
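
As a numerical aside, the sketch below shows how a headline incidence figure of this kind (cases per 1000 ventilator-days) is computed and how an odds ratio with a 95% confidence interval can be derived from a 2x2 table. Only the 34 cases among 169 ventilated patients are taken from the abstract; the ventilator-day total is assumed (chosen so the rate matches the reported 14.88), and the 2x2 exposure counts are hypothetical.

```python
import math

# Incidence rate: 34 VAP episodes; the ventilator-day total below is assumed.
cases, ventilator_days = 34, 2285
rate = cases / ventilator_days * 1000.0
print(f"incidence ~ {rate:.2f} per 1000 ventilation days")

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (exposed/unexposed vs. case/control)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical exposure counts (e.g., tracheostomy among cases and controls)
print("OR (95%% CI): %.2f (%.2f - %.2f)" % odds_ratio_ci(a=12, b=22, c=8, d=60))
```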

Keywords: case control study, intensive care unit, risk factors, ventilator associated pneumonia

Procedia PDF Downloads 395
639 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture

Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou

Abstract:

Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or a technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement of refrigerant regulation through protocols, the environmental damage caused by refrigeration systems remains important and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system is used to provide cooling at two temperature levels, medium and low, over a life period of 15 years. The most commonly used architectures for supermarket cold production are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation of the power needed during seasonal changes and during the daily opening/closing periods of the supermarket is considered. R134a as the primary refrigerant and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant over its life cycle are taken from the literature and industrial data. Twelve scenarios are examined, based on the variation of three parameters: 1. location: France (Paris), Spain (Toledo), and Sweden (Stockholm); 2. source of electricity: photovoltaic panels or the low-voltage electric network; and 3. architecture: direct or indirect refrigeration systems. The OpenLCA and SimaPro software packages and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electricity consumption to environmental damage compared to the impacts of refrigerant leakage. The secondary loop allows the refrigerant charge in the primary loop to be lowered, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries has been noticed, mostly due to differences in electricity production. In Spain, using photovoltaic panels helps to efficiently reduce the environmental impacts and the related costs; this scenario is the best alternative compared to the others. Sweden is the country with the lowest environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, owing to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigeration systems, and a brief introduction to them is presented.
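
For orientation only, the sketch below computes a first-order climate change indicator for a direct and an indirect architecture by combining electricity consumption with refrigerant leakage over the 15-year life. All quantities (annual energy use, refrigerant charge, leak rate, grid emission factor, and the GWP value) are assumed for illustration and are not the paper's CML results.

```python
def climate_indicator(annual_kwh, grid_ef, charge_kg, leak_rate, gwp, years=15):
    """First-order kg CO2-eq over the system life: electricity use + refrigerant leakage."""
    electricity = annual_kwh * grid_ef * years
    leakage = charge_kg * leak_rate * gwp * years
    return electricity + leakage, electricity, leakage

# Assumed inputs: grid emission factor [kg CO2-eq/kWh], R134a GWP taken as ~1430,
# annual leak rate expressed as a fraction of the charge per year.
scenarios = {
    "direct architecture":   dict(annual_kwh=150_000, grid_ef=0.35, charge_kg=300, leak_rate=0.05, gwp=1430),
    "indirect architecture": dict(annual_kwh=165_000, grid_ef=0.35, charge_kg=100, leak_rate=0.05, gwp=1430),
}

for name, p in scenarios.items():
    total, elec, leak = climate_indicator(**p)
    print(f"{name:22s} total ~ {total/1e3:6.0f} t CO2-eq "
          f"(electricity {elec/1e3:.0f}, leakage {leak/1e3:.0f})")
```

With these assumed figures, electricity dominates the indicator for both architectures, while the smaller primary-loop charge of the indirect system reduces its leakage contribution, which mirrors the trends reported above.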

Keywords: eco-design, industrial engineering, LCA, refrigeration system

Procedia PDF Downloads 189
638 Performance Improvement of Piston Engine in Aeronautics by Means of Additive Manufacturing Technologies

Authors: G. Andreutti, G. Saccone, D. Lucariello, C. Pirozzi, S. Franchitti, R. Borrelli, C. Toscano, P. Caso, G. Ferraro, C. Pascarella

Abstract:

The reduction of greenhouse gas and pollutant emissions is a worldwide environmental issue. The amount of CO₂ released by an aircraft is associated with the amount of fuel burned, so the improvement of engine thermo-mechanical efficiency and specific fuel consumption is a significant technological driver for aviation. Moreover, with the prospect that avgas will be phased out, an engine able to use more widely available and cheaper fuels is an evident advantage. An advanced aeronautical Diesel engine, because of its high efficiency and its ability to use widely available and low-cost jet and diesel fuels, is a promising solution for a more fuel-efficient aircraft. On the other hand, a Diesel engine generally has a higher overall weight than a gasoline engine of the same power. For a fixed MTOW (Maximum Take-Off Weight) and operational payload, this extra weight reduces the aircraft fuel fraction, partially nullifying the associated benefits. Therefore, an effort in weight-saving manufacturing technologies is clearly desirable. In this work, in order to achieve the goals mentioned above, innovative Electron Beam Melting (EBM) additive manufacturing (AM) technologies were applied to a two-stroke, common-rail GF56 Diesel engine, developed by the CMD Company for aeronautic applications. For this purpose, a consortium of academic, research, and industrial partners, including the CMD Company, the Italian Aerospace Research Centre (CIRA), the University of Naples Federico II, and the University of Salerno, carried out a technological project funded by the Italian Ministry of Education and Research (MIUR). The project aimed to optimize the baseline engine in order to improve its performance and increase its airworthiness features. It focused on the definition, design, development, and application of enabling technologies for the performance improvement of the GF56. Weight saving of this engine was pursued through the application of EBM AM technologies, in particular using the Arcam AB A2X machine available at CIRA. The 3D printer processes titanium alloy micro-powders and was employed to realize new connecting rods for the GF56 engine with an additive-oriented design approach. After a preliminary investigation of EBM process parameters and a thermo-mechanical characterization of titanium alloy samples, innovative additively manufactured connecting rods were fabricated. These engine elements were structurally verified, topologically optimized, 3D printed, and suitably post-processed. Finally, the overall performance improvement on a typical General Aviation aircraft was estimated, substituting the conventional engine with the optimized GF56 propulsion system.

Keywords: aeronautic propulsion, additive manufacturing, performance improvement, weight saving, piston engine

Procedia PDF Downloads 142
637 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)

Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham

Abstract:

The archaeological site of Utica, located in north-eastern Tunisia, was founded by the Phoenicians in the 8th century BC as a port on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by a sudden abandonment, disuse, and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD that affected this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to highlight several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus", (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all the evidence of deformation at the Utica monument. The maximum horizontal stress orientation (SHmax) inferred from the oriented building damage in Utica shows a NNW-SSE direction under a compressive tectonic regime. For the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, which passes through the Utica monument and extends towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles), and geomorphological analyses. In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
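
As a side illustration of how an average SHmax direction can be obtained from oriented damage, the sketch below computes a mean axial orientation (orientations are 180°-periodic, so angles are doubled before averaging); the individual damage azimuths are hypothetical and are not the measurements made at Utica.

```python
import math

def mean_axial_orientation(azimuths_deg):
    """Mean of axial (180-deg periodic) orientation data via the doubled-angle method."""
    s = sum(math.sin(math.radians(2 * a)) for a in azimuths_deg)
    c = sum(math.cos(math.radians(2 * a)) for a in azimuths_deg)
    mean = 0.5 * math.degrees(math.atan2(s, c))
    return mean % 180.0

# Hypothetical SHmax azimuths inferred from individual damaged structures [deg from N]
damage_azimuths = [152, 160, 148, 171, 155, 166]

print(f"mean SHmax orientation ~ N{mean_axial_orientation(damage_azimuths):.0f}E")
```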

Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects

Procedia PDF Downloads 45
636 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs, and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near fault lines can be categorized as near-field earthquakes; in contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may cause serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect of near-field earthquakes on the response of structures. To examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, following the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% increase in story drift and a 1% increase in absolute acceleration when subjected to the near-field earthquake ground motions. The pushover analysis was run to identify and properly define the hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field (NF) ground motions.
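
To make the notion of a linear time history analysis concrete, the sketch below integrates a single-degree-of-freedom oscillator under a ground acceleration record with the Newmark average-acceleration method and reports the peak relative displacement. It is a generic textbook illustration driven by a synthetic pulse-like record, not the SAP2000 model or the FEMA P695 record set used in the study.

```python
import math

def newmark_sdof(ag, dt, period, damping=0.05):
    """Linear time history response of an SDOF system (Newmark average acceleration).

    ag: ground acceleration samples [m/s^2]; period: natural period [s];
    damping: critical damping ratio. Returns the relative displacement history [m].
    """
    m = 1.0
    k = m * (2.0 * math.pi / period) ** 2
    c = 2.0 * damping * math.sqrt(k * m)
    gamma, beta = 0.5, 0.25

    u, v = [0.0], [0.0]
    a = [-ag[0]]                                             # initial relative acceleration
    k_hat = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    a_coef = m / (beta * dt) + gamma / beta * c
    b_coef = m / (2.0 * beta) + dt * (gamma / (2.0 * beta) - 1.0) * c

    for i in range(len(ag) - 1):
        dp = -m * (ag[i + 1] - ag[i]) + a_coef * v[i] + b_coef * a[i]
        du = dp / k_hat
        dv = gamma / (beta * dt) * du - gamma / beta * v[i] + dt * (1.0 - gamma / (2.0 * beta)) * a[i]
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2.0 * beta)
        u.append(u[i] + du)
        v.append(v[i] + dv)
        a.append(a[i] + da)
    return u

# Synthetic pulse-like record standing in for a near-field ground motion:
# one cycle of a 1 Hz sine pulse with 0.4 g amplitude, followed by free vibration.
dt, g = 0.01, 9.81
record = [0.4 * g * math.sin(2.0 * math.pi * t * dt) if t * dt <= 1.0 else 0.0
          for t in range(1000)]

disp = newmark_sdof(record, dt, period=1.0)
print(f"peak relative displacement ~ {max(abs(x) for x in disp):.3f} m")
```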

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 146