Search results for: law of energy conservation
451 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area
Authors: Sara Saboonian, Pierre Filion
Abstract:
There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first one, the recentralization approach, proposes intensification, multi-functionality and more reliance on public transit and walking. It thus offers an alternative to the prevailing low-density, spatial specialization and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include reviewing the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, surveying residents’ preferences regarding alternative suburban development models, and interviewing officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focussed on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been made of the wide range of advantages of green infrastructure, which explains the limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings and reliance on renewable energy. The paper will advance a planning framework that will fuse green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.
Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization
450 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide
Authors: Michalina Aniola, Andrzej Katrusiak
Abstract:
4,4-Bipyridine is an important compound often used in chemical practice and more recently frequently applied for designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, at ambient pressure is orthorhombic, space group P2₁2₁2₁ (phase a). Its hydrostatic compression shows that it is stable to 1.32 GPa at least. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P2₁/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br- chains double the frequency of their zigzag motifs, compared to phase a, and the 44biPyH+ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, while in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to high-pressure phase b depends entirely on the shear strain of the lattice. Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. For the compound precipitated from saturated methanol solution at 0.1 GPa, a temperature of 423 K was required to dissolve all the sample; the subsequent slow recrystallization at isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved at isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and that the free base 44biPy and H₂O remained in the solution. The observed reactions indicate that high pressure destabilizes the ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are carried out in order to better understand the structural preferences induced by pressure.
Keywords: conformation, high-pressure, negative area compressibility, polymorphism
449 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or a time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data were collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based only on the volume of the building can also be applied (R² k-fold = 0.87). The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when the underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
Keywords: buildings as material banks, building stock, estimation method, interior wall area
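A minimal sketch of the modelling step described above, fitting a linear regression of interior wall area on building volume and scoring it with a 5-fold cross-validated R², is given below. The data and coefficients are invented for illustration; they are not the Belgian database or the authors' fitted model.

```python
# Illustrative only: synthetic stand-in for "interior wall area ~ building volume".
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
volume = rng.uniform(300, 1500, size=200)            # building volume [m³], synthetic
wall_area = 0.25 * volume + rng.normal(0, 20, 200)   # interior wall area [m²], synthetic

X = volume.reshape(-1, 1)
model = LinearRegression()
cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2_kfold = cross_val_score(model, X, wall_area, cv=cv, scoring="r2").mean()
print(f"R² k-fold (5): {r2_kfold:.2f}")

# Fit on all data to obtain the simplified volume-only estimation formula
model.fit(X, wall_area)
print(f"wall_area ≈ {model.intercept_:.1f} + {model.coef_[0]:.3f} · volume")
```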
448 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine
Authors: Mohamed K. Khalil, Mohamed S. Ragab
Abstract:
Atomizers are used in many engineering applications including diesel engines, petrol engines and spray combustion in furnaces as well as gas turbine engines. These atomizers are used to increase the specific surface area of the fuel, which achieves a high rate of fuel mixing and evaporation. In all combustion systems, reduction in mean drop size is a challenge; it has many advantages since it leads to rapid and easier ignition, higher volumetric heat release rate, wider burning range and lower exhaust concentrations of the pollutant emissions. Pressure atomizers come in different design configurations such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex and fan spray. Simplex pressure atomizers are the most common type of all. Among all types of atomizers, pressure swirl types represent a special category since they differ in quality of atomization, reliability of operation, simplicity of construction and low expenditure of energy. However, the disadvantages of these atomizers are that they require very high injection pressure and have a low discharge coefficient owing to the fact that the air core covers the majority of the atomizer orifice. To overcome these problems, the dual orifice atomizer was designed. This paper proposes a detailed mathematical model design procedure for both the pressure swirl atomizer (simplex) and the dual orifice atomizer, examines the effects of varying fuel type and makes a clear comparison between the two types. Using five types of fuel (JP-5, JA1, JP-4, Diesel and Bio-Diesel) as a case study reveals the effect of changing fuel type and its properties on atomizer design and spray characteristics, which in turn affect the combustion process parameters: Sauter Mean Diameter (SMD), spray cone angle and sheet thickness, with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high bypass turbofan engines. The spray performance of the pressure swirl fuel injector was compared to that of the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results are analyzed and consolidated to form the final reliability results for fuel injectors in high bypass turbofan engines. The results show that the Sauter Mean Diameter (SMD) in the dual orifice atomizer is larger than the Sauter Mean Diameter (SMD) in the pressure swirl atomizer, and the film thickness (h) in the dual orifice atomizer is less than the film thickness (h) in the pressure swirl atomizer. The spray cone angle (α) in the pressure swirl atomizer is larger than the spray cone angle (α) in the dual orifice atomizer.
Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5
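As an illustration of how fuel properties feed into such a comparison, the sketch below evaluates a pressure-swirl SMD correlation of the Lefebvre form for a few fuels. The choice of correlation and all property values (surface tension, viscosity, mass flow, pressure drop, air density) are assumptions made here for demonstration; they are not the paper's design model or data.

```python
# Hedged sketch: SMD trend with fuel properties using a Lefebvre-type
# empirical correlation SMD = 2.25 σ^0.25 μ^0.25 ṁ^0.25 ΔP^-0.5 ρ_A^-0.25.
fuels = {
    # name: (surface tension σ [N/m], dynamic viscosity μ [Pa·s]) — assumed values
    "JP-4":   (0.024, 0.00085),
    "JP-5":   (0.026, 0.00160),
    "Diesel": (0.028, 0.00280),
}
m_dot = 0.05      # fuel mass flow rate [kg/s], assumed
dP = 1.0e6        # injection pressure differential [Pa], assumed
rho_air = 4.0     # combustor air density [kg/m³], assumed

for name, (sigma, mu) in fuels.items():
    smd = 2.25 * sigma**0.25 * mu**0.25 * m_dot**0.25 * dP**-0.5 * rho_air**-0.25
    print(f"{name:7s} SMD ≈ {smd * 1e6:.1f} µm")
```

Higher-viscosity, higher-surface-tension fuels such as Diesel come out with coarser sprays, which is the qualitative trend the abstract discusses.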
447 Synthesis and Characterization of High-Aspect-Ratio Hematite Nanostructures for Solar Water Splitting
Authors: Paula Quiterio, Arlete Apolinario, Celia T. Sousa, Joao Azevedo, Paula Dias, Adelio Mendes, Joao P. Araujo
Abstract:
Nowadays, one of mankind's greatest challenges has been the supply of low-cost and environmentally friendly energy sources as an alternative to non-renewable fossil fuels. Hydrogen has been considered a promising solution, representing a clean and low-cost fuel. It can be produced directly from clean and abundant resources, such as sunlight and water, using photoelectrochemical cells (PECs), in a process that mimics nature's photosynthesis. Hematite (α-Fe₂O₃) has attracted considerable attention as a promising photoanode for solar water splitting, due to its high chemical stability, nontoxicity, availability and low band gap (2.2 eV), which allows reaching a high thermodynamic solar-to-hydrogen efficiency of 16.8%. However, the main drawbacks of hematite, such as the short hole diffusion length and the poor conductivity that lead to high electron-hole recombination, result in significant PEC efficiency losses. One strategy to overcome these limitations and to increase the PEC efficiency is to use 1D nanostructures, such as nanotubes (NTs) and nanowires (NWs), which present high aspect ratios and large surface areas, providing direct pathways for electron transport up to the charge collector and minimizing the recombination losses. In particular, due to the ultrathin walls of the NTs, the holes can reach the surface faster than in other nanostructures, representing a key factor for the NTs' photoresponse. In this work, we prepared hematite NWs and NTs, respectively by hydrothermal process and electrochemical anodization. For hematite NW growth, we studied the effect of variable hydrothermal conditions, different annealing temperatures and times, and the use of Ti and Sn dopants on the morphology and PEC performance. The crystalline phase characterization by X-ray diffraction was crucial to distinguish the formation of hematite and other iron oxide phases, alongside its effect on the photoanodes' conductivity and the consequent PEC efficiency. The conductivity of the as-prepared NWs is very low, on the order of 10⁻⁵ S cm⁻¹, but after doping and annealing optimization it increased by a factor of 10⁵. A high photocurrent density of 1.02 mA cm⁻² at 1.45 V vs. RHE was obtained under simulated sunlight, which is a very promising value for this kind of hematite nanostructure. The stability of the photoelectrodes was also tested, showing good stability after several J-V measurements over time. The NTs, synthesized by fast anodizations with potentials ranging from 20-100 V, presented a linear growth of the NT pore walls, with very low wall thicknesses of 10-18 nm. These preliminary results are also very promising for the use of hematite photoelectrodes in PEC hydrogen applications.
Keywords: hematite, nanotubes, nanowires, photoelectrochemical cells
446 Corrosion Analysis of Brazed Copper-Based Conducts in Particle Accelerator Water Cooling Circuits
Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant
Abstract:
The present study investigates the corrosion behavior of copper (Cu) based conducts predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus) within various cooling circuits of demineralized water across different particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes various accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, the evaluation of the corrosion phenomena, and the identification of commonalities across the studied components, as well as analysis of the environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. The subsequent dry cutting facilitated access to the conducts' inner surface and the brazed joints for further inspection through light and electron microscopy (SEM) and chemical analysis via Energy Dispersive X-ray spectroscopy (EDS). Analysis of the brazing away from the affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Corrosion product analysis highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in corrosion initiation and extension are being further investigated. The importance of this study is paramount as it plays a crucial role in comprehending the underlying factors contributing to recently identified water leaks and evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of impacted brazed joints when accessibility permits. Moreover, the study seeks to contribute to the improvement of design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.
Keywords: accelerator facilities, brazed copper conducts, demineralized water, magnets
445 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production
Authors: G. Bravo, D. Lopez, A. Iriarte
Abstract:
Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is focused in the center-south area, between the regions of Maule and O’Higgins; both regions represent 81% of the planted surface. The Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. The LCA identifies the material, energy, and waste flows of a product or service, and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using the LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. The additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews. The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results allow fruit companies, as well as policymakers and other stakeholders, to plan and promote low-impact procedures. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or evaluation of other life cycle stages could further improve the knowledge of the impacts of this fruit. This study may contribute to environmental information in other countries where there is similar agricultural production of sweet cherry.
Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production
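For readers unfamiliar with the characterization step of an LCA, the sketch below shows how an emission inventory per functional unit is converted into acidification and eutrophication scores by multiplying by characterization factors. Every number in it (the inventory and the factors) is a placeholder chosen for illustration, not data from this study.

```python
# Hedged sketch of LCA characterization: impact = sum(emission_i * CF_i).
inventory = {                 # kg emitted per tonne of sweet cherry (assumed)
    "NH3_to_air": 1.2,
    "NOx_to_air": 0.8,
    "NO3_to_water": 2.5,
    "PO4_to_water": 0.1,
}
acidification_cf = {"NH3_to_air": 1.88, "NOx_to_air": 0.70}              # kg SO2-eq/kg, placeholder CFs
eutrophication_cf = {"NH3_to_air": 0.35, "NOx_to_air": 0.13,
                     "NO3_to_water": 0.10, "PO4_to_water": 1.00}         # kg PO4-eq/kg, placeholder CFs

acidification = sum(inventory[s] * cf for s, cf in acidification_cf.items() if s in inventory)
eutrophication = sum(inventory[s] * cf for s, cf in eutrophication_cf.items() if s in inventory)
print(f"Acidification:  {acidification:.2f} kg SO2-eq per tonne")
print(f"Eutrophication: {eutrophication:.2f} kg PO4-eq per tonne")
```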
444 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
Itaipu Hydroelectric Power Plant is located on the Paraná River, which is a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu Power Plant is the biggest hydroelectric generator in the world, and provides a clean and renewable electrical energy supply for 17% and 76% of Brazil and Paraguay, respectively. The plant started its generation in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant had generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of Itaipu Power Plant, from its stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in the sediment yield by using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years aiming to confer more precision upon the estimates based on more updated data sets. From the analysis of each monitoring station, strong increasing tendencies in the sediment yield through the last 14 years were clearly noticed, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others are entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software SEDIMENT and DPOSIT, both developed by the author of the present work. These programs thoroughly follow the Borland & Miller methodology (empirical area-reduction method). The soundest scenario out of the five under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus assigning credibility to this most recent lifespan estimate.
Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
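To make the logic of such a lifespan estimate concrete, the sketch below runs a much-simplified volume balance: annual sediment deposition (yield × trap efficiency / bulk density) accumulated against the reservoir volume. It deliberately ignores the elevation-zone distribution that the Borland & Miller area-reduction method actually performs, and every input value is an assumed placeholder rather than Itaipu data.

```python
# Simplified reservoir sedimentation balance (illustrative numbers only).
reservoir_volume = 29.0e9   # total reservoir volume [m³], assumed
sediment_inflow = 30.0e6    # annual sediment yield entering the lake [t/yr], assumed
trap_efficiency = 0.90      # fraction of inflowing sediment retained, assumed
bulk_density = 1.1          # deposited sediment bulk density [t/m³], assumed
critical_fill = 0.80        # storage fraction taken as the end of useful life, assumed

annual_deposit = sediment_inflow * trap_efficiency / bulk_density   # [m³/yr]
lifespan_years = critical_fill * reservoir_volume / annual_deposit
silted_after_32y = 32 * annual_deposit / reservoir_volume

print(f"Annual deposition: {annual_deposit / 1e6:.1f} million m³/yr")
print(f"Estimated useful lifespan: {lifespan_years:.0f} years")
print(f"Fraction silted after 32 years: {silted_after_32y:.1%}")
```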
443 Change of Substrate in Solid State Fermentation Can Produce Proteases and Phytases with Extremely Distinct Biochemical Characteristics and Promising Applications for Animal Nutrition
Authors: Paula K. Novelli, Margarida M. Barros, Luciana F. Flueri
Abstract:
Utilization of agricultural by-products, wheat bran and soybean bran, as substrates for solid state fermentation (SSF) was studied, aiming at obtaining different enzymes from Aspergillus sp. with distinct biological characteristics and at their application to the improvement of animal nutrition. Aspergillus niger and Aspergillus oryzae were studied as they showed very high yields of phytase and protease production, respectively. Phytase activity was measured using p-nitrophenyl phosphate as substrate and a standard curve of p-nitrophenol, with the enzymatic activity unit defined as the quantity of enzyme necessary to release one μmol of p-nitrophenol. Protease activity was measured using azocasein as substrate. Activity for phytase and protease substantially increased when the different biochemical characteristics were considered in the study. Optimum pH and stability of the phytase produced by A. niger with wheat bran as substrate were between 4.0 and 5.0, and the optimum temperature of activity was 37°C. Phytase fermented in soybean bran showed constant values at all pHs studied, for both optimum and stability, but low production. Phytase with both substrates showed stable activity at temperatures higher than 80°C. Protease from A. niger showed very distinct optimum pH behavior, acid for wheat bran and basic for soybean bran, respectively, and optimal values of temperature and stability at 50°C. Phytase produced by A. oryzae in wheat bran had an optimum pH and temperature of 9 and 37°C, respectively, but it was very unstable. On the other hand, proteases were stable at high temperatures and at all pHs studied, and showed very high yield when fermented in wheat bran; however, when fermented in soybean bran the production was very low. Subsequently, the upscaled production of phytase from A. niger and proteases from A. oryzae was applied as an enzyme additive in fish feed for digestibility studies. Phytases and proteases were produced with stable enzyme activities of 7,000 U g⁻¹ and 2,500 U g⁻¹, respectively. When those enzymes were applied in a plant-protein-based fish diet for digestibility studies, they increased protein, mineral, energy and lipid availability, showing that these new enzymes can improve animal production and performance. In conclusion, the substrate, as well as the microorganism species, can affect the biochemical character of the enzyme produced. Moreover, the production of these enzymes by SSF can be up to 90% cheaper than commercial ones produced with the same fungi species but by submerged fermentation. In addition, these cheap enzymes can be easily applied as animal diet additives to improve production and performance.
Keywords: agricultural by-products, animal nutrition, enzymes production, solid state fermentation
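As a small illustration of the activity unit defined above (the amount of enzyme releasing one µmol of p-nitrophenol), the sketch below converts a hypothetical assay absorbance into U per gram of fermented substrate via a p-nitrophenol standard curve. All assay numbers are invented for the example and are not the study's measurements.

```python
# Hedged example: activity from a p-nitrophenol standard curve.
slope = 0.055        # standard-curve slope [absorbance per µmol p-nitrophenol], assumed
absorbance = 0.82    # blank-corrected sample absorbance, assumed
dilution = 50.0      # dilution factor of the crude extract, assumed
sample_mass = 1.0    # mass of fermented substrate extracted [g], assumed

pnp_released = absorbance / slope                  # µmol p-nitrophenol released in the assay
activity = pnp_released * dilution / sample_mass   # U/g, with 1 U = 1 µmol p-nitrophenol
print(f"Phytase activity ≈ {activity:.0f} U/g")
```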
442 Hydrodynamic Analysis of Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters
Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut
Abstract:
In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays. These payload bays can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks that are affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine if UUV vessels can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are significant. The vertical thrusters produce significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising the impact. It was also seen that the OUUV experiences higher drag forces compared to the UUUV, which will correlate with an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay was not feasible. The turbulence, drag and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
Keywords: underwater vehicles, submarine, autonomous underwater vehicles, auv, computational fluid dynamics, flow fields, pressure, turbulence, drag
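As context for the drag comparison reported above, the sketch below applies the standard quadratic drag law F_D = ½·ρ·C_d·A·v² to show how drag on a small vehicle scales with approach speed. The drag coefficient and reference area are assumed values; this is a back-of-envelope estimate, not the CFD model used in the study.

```python
# Illustrative quadratic drag estimate for a small UUV approaching a payload bay.
rho = 1025.0   # seawater density [kg/m³]
c_d = 0.30     # drag coefficient of the small vehicle, assumed
area = 0.07    # frontal reference area [m²], assumed

for v in (0.5, 1.0, 1.5, 2.0):            # approach speeds [m/s]
    drag = 0.5 * rho * c_d * area * v**2  # drag force [N]
    print(f"v = {v:.1f} m/s -> F_D ≈ {drag:.1f} N")
```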
441 Pixel Façade: An Idea for Programmable Building Skin
Authors: H. Jamili, S. Shakiba
Abstract:
Today, one of the main concerns of human beings is facing the unpleasant changes of the environment. Buildings are responsible for a significant amount of natural resource consumption and carbon emissions. In such a situation, the thought comes to mind of changing each building into a phenomenon of benefit to the environment: a change whereby each building functions as an element that supports the environment, and construction, in addition to answering human needs, is encouraged the way planting a tree is, no longer seen as a threat to living beings and the planet. Prospect: Today, different ideas for developing materials that can function smartly are being realized. For instance, programmable materials, which in different conditions can respond appropriately to the situation and have features of modification in shape, size and physical properties, as well as restoration and repair qualities. Studies are progressing with the purpose of planning these materials in such a way that they are easily available; to meet this aim, there is no need to use expensive materials and high technologies. In these cases, the physical attributes of materials undertake the role of sensors, wires and actuators; the materials then become robots themselves. In fact, we experience robotics without robots. In recent decades, AI and technology advances have dramatically improved the performance of materials. These achievements are a combination of software optimizations and physical productions such as multi-material 3D printing. These capabilities enable us to program materials in order to change shape, appearance, and physical properties to interact with different situations. It is expected that further achievements like memory materials and self-learning materials will also be added to the smart materials family, which are affordable, available, and of use for a variety of applications and industries. From the architectural standpoint, the building skin is significantly considered in this research, concerning the noticeable surface area that building skins have in urban space. The purpose of this research is to find a way for programmable materials to be used in the building skin with the aim of having an effective and positive interaction. A Pixel Façade would be a solution for programming a building skin. The Pixel Façade includes components that contain a series of attributes that help buildings meet their needs according to environmental criteria. A PIXEL contains a series of smart materials and digital controllers together. It not only benefits from its physical properties, such as controlling the amount of sunlight and heat, but it also enhances building performance by providing a list of features, depending on situation criteria. The features will vary depending on location and have different functions during the daytime and in different seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (for the inside and outside of the buildings) and providing clean energy, as well as interacting with other PIXEL FAÇADES to estimate better reactions.
Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials
440 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes, which have taken place across the world, have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious consequences in terms of structural damage, which can result in a high risk to the public's safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g. ASCE 7-10 and ACI 318-14, and analytically modeled utilizing the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. Upon doing this, the prototype structural model created using SAP2000 was subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 for evaluation of the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to find and aid in properly defining the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to NF ground motions.
Keywords: near-field, pulse, pushover, time-history
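A minimal, self-contained illustration of a linear time-history analysis, the step the study performs in SAP2000 on a full building model, is sketched below for a single-degree-of-freedom oscillator integrated with the Newmark average-acceleration method. The period, damping and the pulse-like synthetic ground motion are assumptions for the example, not the study's prototype structure or its FEMA P695 records.

```python
# Linear SDOF time-history response via Newmark average-acceleration integration.
import numpy as np

def newmark_sdof(ag, dt, period=1.0, damping=0.05, beta=0.25, gamma=0.5):
    """Return relative displacement history of a unit-mass SDOF system."""
    m = 1.0
    wn = 2.0 * np.pi / period
    k = m * wn**2
    c = 2.0 * damping * m * wn
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * ag                                          # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n - 1):
        dp = (p[i + 1]
              + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
              + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                     + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = dp / k_eff
        v[i + 1] = (gamma / (beta * dt) * (u[i + 1] - u[i])
                    + (1 - gamma / beta) * v[i] + dt * (1 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt)
                    - (1 / (2 * beta) - 1) * a[i])
    return u

dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 3.0 * np.exp(-((t - 2.0) / 0.5)**2) * np.sin(2 * np.pi * t)   # synthetic pulse-like record [m/s²]
u = newmark_sdof(ag, dt)
print(f"Peak relative displacement ≈ {np.abs(u).max():.3f} m")
```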
439 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc
Abstract:
Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters like axial force, rotation speed or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), where abundant literature exists and addresses many aspects ranging from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way of studying the influence of the process parameters and finally identifying a process window that seems relevant. The deposition of material through the AFSM process takes place in several phases. In chronological order these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which enables the temperature rise of the system composed of the tool, the filler material, and the substrate, due to pure friction. Analytic modeling of the heat generation based on friction considers the rotational speed and the contact pressure as the main parameters. Another parameter considered influential is the friction coefficient, assumed to be variable due to the self-lubrication of the system with the rise in temperature or the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool as well as fluctuations of the input parameters like axial force and rotational speed are very influential on the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
Keywords: numerical model, additive manufacturing, friction, process
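To give a feel for the kind of analytic friction-heating estimate mentioned above, the sketch below evaluates the classic sliding-friction expression for a flat circular contact rotating under uniform pressure, Q = (2/3)·π·μ·p·ω·R³, commonly used as a starting point in friction-stir thermal models. The friction coefficient, tool radius, axial force and spindle speed are illustrative assumptions, not the study's process parameters or its exact model.

```python
# Frictional heat generation for a rotating circular contact under uniform pressure.
import math

mu = 0.35        # friction coefficient, assumed
radius = 0.008   # tool contact radius [m], assumed
force = 4000.0   # axial force [N], assumed
rpm = 1200.0     # rotation speed [rev/min], assumed

pressure = force / (math.pi * radius**2)   # contact pressure [Pa]
omega = 2.0 * math.pi * rpm / 60.0         # angular speed [rad/s]
q_friction = (2.0 / 3.0) * math.pi * mu * pressure * omega * radius**3   # heat input [W]
print(f"Contact pressure ≈ {pressure / 1e6:.1f} MPa")
print(f"Frictional heat generation ≈ {q_friction:.0f} W")
```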
438 Plastic Pollution: Analysis of the Current Legal Framework and Perspectives on Future Governance
Authors: Giorgia Carratta
Abstract:
Since the beginning of mass production, plastic items have been crucial in our daily lives. Thanks to their physical and chemical properties, plastic materials have proven almost irreplaceable in a number of economic sectors such as packaging, automotive, building and construction, textiles, and many others. At the same time, the disruptive consequences of plastic pollution have been progressively brought to light in all environmental compartments. The overaccumulation of plastics in the environment, and its adverse effects on habitats, wildlife, and (most likely) human health, represents a call for action to decision-makers around the globe. From a regulatory perspective, plastic production is an unprecedented challenge at all levels of governance. At the international level, the design of new legal instruments, the amendment of existing ones, and the coordination among the several relevant policy areas require considerable effort. Under the pressure of both increasing scientific evidence and a concerned public opinion, countries seem to be slowly moving towards the discussion of a new international ‘plastic treaty.’ However, whether, how, and with which scope such an instrument would be adopted remains to be seen. Additionally, governments are establishing regional-based strategies, prone to consider the specificities of the plastic issue in a certain geographical area. Thanks to the new Circular Economy Action Plan, approved in March 2020 by the European Commission, EU countries are slowly but steadily shifting to a carbon-neutral, circular economy in the attempt to reduce the pressure on natural resources and, in parallel, facilitate sustainable economic growth. In this context, the EU Plastic Strategy is promising to change the way plastic is designed, produced, used, and treated after consumption. In fact, in the EU27 Member States alone, almost 26 million tons of plastic waste are generated every year, of which 24.9% is still destined for landfill. Positive effects of the Strategy also include more effective protection of our environment, especially the marine one, the reduction of greenhouse gas emissions, a reduced need for imported fossil energy sources, and more sustainable production and consumption patterns. As promising as it may sound, the road ahead is still long. The need to implement these measures in domestic legislation makes their outcome difficult to predict at the moment. An analysis of the current international and European Union legal framework on plastic pollution, binding and voluntary instruments included, could serve to detect ‘blind spots’ in the current governance as well as to facilitate the development of policy interventions along the plastic value chain, where it appears more needed.
Keywords: environmental law, European union, governance, plastic pollution, sustainability
437 Influence of Mothers’ Knowledge, Attitude and Behavior on Diet and Physical Activity of Their Pre-School Children: A Cross-Sectional Study from a Semi-Urban Area of Nepal
Authors: Natalia Oli, Abhinav Vaidya, Katja Pahkala, Gabriele Eiben, Alexandra Krettek
Abstract:
The nutritional transition towards a high-fat and energy-dense diet, decreasing physical activity levels, and poor cardiovascular health knowledge contribute to a rising burden of cardiovascular diseases in Nepal. Dietary and physical activity behaviors are formed early in life and influenced by family, particularly by mothers in the social context of Nepal. The purpose of this study was to explore the knowledge, attitude and behavior of mothers regarding the diet and physical activity of their pre-school children. A cross-sectional study was conducted in the semi-urban area of the Duwakot and Jhaukhel communities near the capital Kathmandu. Between August and November 2014, nine trained enumerators interviewed all mothers having children aged 2 to 7 years in their homes. The questionnaire contained information about mothers’ socio-demographic characteristics; their knowledge, attitude, and behavior regarding diet and physical activity; as well as their children’s diet and physical activity. Knowledge, attitude and behavior responses were scored. SPSS version 22.0 was used for data analyses. Out of the 1,052 eligible mothers, 962 consented to participate in the study. The mean age was 28.9 ± 4.5 years. The majority of them (73%) were housewives. Mothers with higher education and income had higher knowledge, attitude, and behavior scores (all p < 0.001), whereas housewives and farmers had low knowledge scores (p < 0.001). They, along with laborers, also exhibited lower attitude (p < 0.001) and behavior scores (p < 0.001). Children’s diet score increased with mothers’ level of education (p < 0.001) and income (p = 0.041). Their physical activity score, however, declined with increasing level of their mothers’ education (p < 0.001) and income (p < 0.001). Children’s overall behavior score correlated poorly with mothers’ knowledge (r = 0.009, p = 0.003), attitude (r = 0.012, p = 0.001), and behavior (r = 0.007, p = 0.008). Such poor correlation can be due to the existence of barriers among mothers. Mothers reported such barriers as expensive healthy food, difficulty giving up favorite foods, the taste preferences of other family members, and lack of knowledge of healthy food. Barriers to physical activity were lack of leisure time, lack of parks and playgrounds, being busy caring for children and old people, and feeling lazy and embarrassed in front of others. Additionally, among the facilitators for a healthy lifestyle mentioned by mothers were better information, the family eating healthy food and supporting physical activity, advice from medical personnel regarding a healthy lifestyle, and their own ill health. The study demonstrated poor correlation of mothers’ knowledge and attitude with children’s behavior regarding diet and physical activity. Hence, improving mothers’ knowledge or attitude may not be enough to improve the dietary and physical activity habits of their children. Barriers and facilitators that affect mothers’ practices towards their children should also be addressed in future interventions.
Keywords: attitude, behavior, diet, knowledge, mothers, physical activity
436 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model
Authors: Kang Lin Peng
Abstract:
Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury product pricing; it is a seller’s market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and Space X have charged space tourism prices from 200 thousand to 55 million dollars, depending on various heights in space. There should be a reasonable price based on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model on the earth’s surface. We apply general relativity theory to deduce the mathematical formula for the space tourism pricing model, which covers the traditional time-independent model. In the future, the price of space travel will be different from current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with the dependent time variable is acceptable when commodities are on the earth’s surface, called flat spacetime. Current economic theories based on the independent time scale in flat spacetime do not consider the curvature of spacetime. Current flight services, flying at heights of 6, 12, and 19 kilometers, are charged with a pricing model that measures the time coordinate independently. However, emerging space tourism flights reach heights of 100 to 550 kilometers, which enlarges the spacetime curvature, meaning tourists will escape from a zero curvature on the earth’s surface to the large curvature of space. Different spacetime spans should be considered in the pricing model of space travel to echo general relativity theory. Intuitively, this spacetime commodity needs to consider the change of spacetime curvature from the earth to space. We can assume the value of each spacetime curvature unit corresponding to the gradient change of each Ricci or energy-momentum tensor. Then we know how much to spend by integrating the spacetime from the earth to space. The concept is adding a price p component corresponding to general relativity theory. The space travel pricing model degenerates into a time-independent model, which becomes a model of traditional commodity pricing. The contribution is that the derivation of the space tourism pricing model will be a breakthrough in philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent flat-spacetime model. A pricing model embedded in spacetime as per general relativity theory can better reflect the rationality and accuracy of space travel on the universal scale. The universal scale, from the independent-time scale to the spacetime scale, will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature
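One possible way to formalize the idea sketched in the abstract (an illustrative assumption for the reader, not the authors' actual derivation) is to add to the flat-spacetime price a component obtained by integrating a curvature measure along the trip, weighted by a monetary value per unit of curvature, so that the model degenerates to the time-independent price when curvature vanishes:

```latex
% Hedged sketch: P_0 is the conventional flat-spacetime price, k a monetary
% weight per unit of curvature, R the Ricci scalar of the metric g_{\mu\nu},
% r_E the Earth's surface and r_T the target altitude.
\[
  P(r_T) \;=\; \underbrace{P_0(t)}_{\text{time-independent price}}
  \;+\; k \int_{r_E}^{r_T} R\!\left(g_{\mu\nu}(r)\right)\,\mathrm{d}r ,
  \qquad R \to 0 \;\Longrightarrow\; P \to P_0 .
\]
```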
435 An Exploration of Possible Impact of Drumming on Mental Health in a Hospital Setting
Authors: Zhao Luqian, Wang Yafei
Abstract:
Participation in music activities is beneficial for enhancing wellbeing, especially for aged people (Creech, 2013). Looking at percussion groups in particular, they can facilitate a sense of belonging, relaxation, energy and productivity, learning, enhanced mood, humanising, a sense of accomplishment, escape from trauma, and emotional expression (Newman, 2015). In the health literature, group drumming is effective in reducing stress and improving multiple domains of social-emotional behavior (Ho et al., 2011; Maschi et al., 2010) because it offers a creative and mutual learning space that allows patients to establish positive peer interaction (Mungas et al., 2014; Perkins, 2016). However, very few studies have investigated the effect of group drumming from the aspect of patients’ needs. Therefore, this study focuses on the discussion of patients' specific needs within mental health and explores how group percussion may meet those needs. Seligman’s (2011) five core elements of mental health were applied as patients’ needs in this study: (1) positive emotions; (2) engagement; (3) relationships; (4) meaning and (5) accomplishment. 12 participants aged 57-80 years were interviewed individually. The researcher also conducted observations in four drumming groups simultaneously. The results reveal that group drumming could improve participants’ mental wellbeing. First, it created a therapeutic health care environment extending beyond the elimination of boredom, and patients could focus on positive emotions during the group drumming sessions. Secondly, it was effective in satisfying patients’ level of engagement. Thirdly, this study found that joining a percussion group would require patients to work on skills such as turn-taking and sharing. This equal relationship is helpful for releasing patients’ negative mood and thus forming tighter relationships between and among them. Fourthly, group drumming was found to meet patients’ meaning needs by offering them a place of belonging and a place for sharing. Its learner-oriented approach engaged patients through a sense of belonging, accepting, connecting, and ownership. Finally, group drumming could meet patients’ needs for accomplishment through the learning process. The inclusive learning process, which means there is no right or wrong throughout the process, allowed patients to make their own decisions. In conclusion, it is difficult for patients to achieve positive emotions, engagement, relationships, meaning, and accomplishment in a hospital setting. Drumming can be practiced to reduce patients’ negative emotions and improve their experiences in a hospital through enriched social interaction and a sense of accomplishment. Also, it can help patients to enhance social skills in a controlled environment.
Keywords: group drumming, hospital, mental health, music psychology
434 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector
Authors: Dewan Ahsan
Abstract:
Denmark is well ahead in generating electricity from renewable sources. The offshore wind sector is playing the pivotal role in achieving this target. Though there is rapid growth of the offshore wind sector in Denmark, there is still a lack of synchronization in OHS (occupational health and safety) regulation and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. This study has identified several key challenges in the OHS management system, which are: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, absence of a harmonized OHS standard, and a blame culture. Furthermore, this research has identified eleven key stakeholders who are actively involved with the offshore wind business in Denmark. As noticed, the relationships among these stakeholders are very complex, especially between operators and sub-contractors. The respondent technicians are concerned about compliance with various third-party OHS standards (e.g. ISO 31000, ISO 29400, Good practice guidelines by G+) which are applied by various offshore companies. On top of these standards, operators also impose their own OHS standards. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector. So, it is a big challenge for the technicians and sub-contractors to comply with different company-specific standards, which also elevates the price of the services they offer to the operators. For instance, when a sub-contractor is competing for a bid, it must fulfill a number of OHS requirements (which demand much extra documentation) set by the individual operator and/or the turbine supplier. From the sub-contractors' point of view, this extra work consumes too much time in preparing the bidding documents, and they also need to train their employees to pass the specific OHS certification courses to meet the demands of individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential to ensuring service quality. So, a standardized OHS management procedure (which could be applicable for all clients) can easily solve this problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable for all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially operators-operators-sub-contractors) are the most vital strategies to overcome the existing challenges and to achieve the goal of 'zero accident/harm' in the offshore wind industry.
Keywords: green energy, offshore, safety, Denmark
433 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode
Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno
Abstract:
Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂-derived compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods including sol-gel, gas condensation, spray pyrolysis, and ceramic routes. Problems with these various methods persist, including high cost (so commercially inapplicable) and the need for high temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by a reflux technique; (2) develop a microstructure analysis method from the LiMₓMn₂₋ₓO₄ powder XRD data with the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. This research developed the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) with reflux. The materials, consisting of Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping of Co, Ni and Cr was carried out using a solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on powder XRD data by the two-stage method using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability test was performed with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux. The optimal temperature of calcination is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, it was found that increasing the Li/Mn mole ratio does not result in changes in the LiMn₂O₄ crystal structure. The doping of Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02; 0.04; 0.06; 0.08; 0.10) does not change the cubic crystal structure of Fd3m. All the formed crystals are polycrystals with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its range of capacitance, the conductivity obtained for LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is an ionic conductivity with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄, Li₁.₀₈Mn₁.₉₂O₄, LiCo₀.₁Mn₁.₉O₄, LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g, 2.73 mAh/g, 89.39 mAh/g, 85.15 mAh/g, and 1.48 mAh/g, respectively.
Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity
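For context on the measured capacities above, the sketch below computes the textbook theoretical specific capacity of spinel LiMn₂O₄ for one-electron lithium extraction, Q = F/(3.6·M). This is a standard benchmark estimate added here for comparison, not a result from the study.

```python
# Theoretical specific capacity of LiMn2O4 (one Li+ per formula unit).
F = 96485.0                                   # Faraday constant [C/mol]
M_Li, M_Mn, M_O = 6.94, 54.938, 15.999        # atomic masses [g/mol]
molar_mass = M_Li + 2 * M_Mn + 4 * M_O        # g/mol for LiMn2O4
q_theoretical = F / (3.6 * molar_mass)        # mAh/g (n = 1 electron per formula unit)
print(f"M(LiMn2O4) ≈ {molar_mass:.1f} g/mol")
print(f"Theoretical capacity ≈ {q_theoretical:.0f} mAh/g vs. 88.62 mAh/g measured here")
```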
Procedia PDF Downloads 194432 Home-Based Care with Follow-Up at Outpatient Unit or Community-Follow-Up Center with/without Food Supplementation and/or Psychosocial Stimulation of Children with Moderate Acute Malnutrition in Bangladesh
Authors: Md Iqbal Hossain, Tahmeed Ahmed, Kenneth H. Brown
Abstract:
Objective: To assess the effect of community-based follow-up, with or without food supplementation and/or psychosocial stimulation, as an alternative to the current hospital-based follow-up of children with moderate acute malnutrition (MAM; WHZ between -2 and -3). Design/methods: The study was conducted at the ICDDR,B Dhaka Hospital and in four urban primary health care centers of Dhaka, Bangladesh during 2005-2007. The efficacy of five different randomly assigned interventions was compared with respect to the rate of completion of follow-up, growth and morbidity in 227 children with MAM aged 6-24 months who were initially treated at ICDDR,B for diarrhea and/or other morbidities. The interventions were: 1) Fortnightly follow-up care (FFC) at the ICDDR,B outpatient unit, including growth monitoring, health education, and micronutrient supplementation (H-C, n=49). 2) FFC at a community follow-up unit (CNFU), established in the existing urban primary health care centers close to the residence of the child, with the same regimen as H-C (C-C, n=53). 3) As per C-C plus cereal-based supplementary food (SF) (C-SF, n=49). The SF packets were distributed at recruitment and at every CNFU visit (1 packet/day for 6-11 month old and 2 packets/day for 12-24 month old children; each packet contained 20 g toasted rice powder, 10 g toasted lentil powder, 5 g molasses, and 3 g soybean oil, providing a total of ~150 kcal with 11% of energy from protein). 4) As per C-C plus psychosocial stimulation (PS) (C-PS, n=43). PS consisted of child stimulation and parental counseling conducted by trained health workers. 5) As per C-C plus both SF and PS (C-SF+PS, n=33). Results: A total of 227 children (48.5% female), with a mean ± SD age of 12.6 ± 3.8 months and WHZ of -2.53 ± 0.28, were enrolled. Baseline characteristics did not differ by treatment group. The rate of spontaneous attendance at scheduled follow-up visits gradually decreased in all groups. Follow-up attendance and gains in weight and length were greater in groups C-SF, C-SF+PS, and C-PS than in C-C, and these indicators were lowest in H-C. Children in the H-C group suffered more often from diarrhea (25% vs. 4-9%) and fever (28% vs. 8-11%) than the other groups (p < 0.05). Children who attended at least five of the six scheduled follow-up visits gained more weight (median: 0.86 vs. 0.62 kg, p=0.002) and length (median: 2.4 vs. 2.0 cm, p=0.009) than those who attended fewer. Conclusions: Community-based service delivery, especially when it includes supplementary food with or without psychosocial stimulation, permits better rehabilitation of children with MAM than the current hospital outpatient-based care. By scaling up community-based follow-up including food supplementation with or without psychosocial stimulation, it will be possible to rehabilitate a greater number of children with MAM more effectively.Keywords: community-based management, moderate acute malnutrition, psychosocial stimulation, supplementary food
Procedia PDF Downloads 441431 Application of Nanoparticles on Surface of Commercial Carbon-Based Adsorbent for Removal of Contaminants from Water
Authors: Ahmad Kayvani Fard, Gordon Mckay, Muataz Hussien
Abstract:
Adsorption/sorption is believed to be one of the optimal processes for the removal of heavy metals from water due to its low operational and capital cost as well as its high removal efficiency. Different materials have been reported in the literature as adsorbents for heavy metal removal from wastewater, such as natural sorbents, synthetic organic polymers and inorganic mineral materials. The selection of adsorbents and the development of new functional materials that can achieve good removal of heavy metals from water is an important practice and depends on many factors, such as the availability of the material, its cost, and material safety. In this study, we report the synthesis of activated carbon (AC) and carbon nanotubes (CNT) doped with different loadings of metal oxide nanoparticles such as Fe2O3, Fe3O4, Al2O3, TiO2 and SiO2, as well as Ag nanoparticles, and their application in the removal of heavy metals, hydrocarbons, and organics from wastewater. Commercial AC and CNT with different loadings of the above nanoparticles were prepared; the effects of pH, adsorbent dosage, sorption kinetics, and concentration were studied; and the optimum conditions for removal of heavy metals from water are reported. The prepared composite sorbents were characterized using field emission scanning electron microscopy (FE-SEM), high-resolution transmission electron microscopy (HR-TEM), thermogravimetric analysis (TGA), X-ray diffraction (XRD), the Brunauer-Emmett-Teller (BET) nitrogen adsorption technique, and zeta potential measurements. The composite materials showed higher removal efficiency and superior adsorption capacity compared to commercially available carbon-based adsorbents. The specific surface area of the AC increased by 50%, reaching up to 2000 m2/g, while that of the CNT increased more than 8-fold, reaching 890 m2/g. The increased surface area is, along with the surface charge of the material, one of the key parameters determining the adsorption capacity and removal efficiency. Moreover, the surface charge densities of the impregnated CNT and AC were significantly enhanced, which benefits the adsorption process. The nanoparticles also enhance the catalytic activity of the material and reduce its agglomeration and aggregation, which provides more active sites for adsorbing contaminants from water. Some of the results for treating wastewater include 100% removal of BTEX, arsenic, strontium, barium, phenolic compounds, and oil from water. The results obtained are promising for the use of AC and CNT loaded with metal oxide nanoparticles in the treatment and pretreatment of wastewater and produced water before desalination. Adsorption can be very efficient, with low energy consumption and economic feasibility.Keywords: carbon nanotube, activated carbon, adsorption, heavy metal, water treatment
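For readers unfamiliar with how such batch-adsorption results are quantified, the sketch below shows the standard calculations of removal efficiency and equilibrium adsorption capacity; it is a generic illustration, not the authors' code, and all concentrations, volumes and masses in it are assumed.

```python
# Standard batch-adsorption bookkeeping (illustrative values only, not data from this study).
def removal_efficiency(c0_mg_l, ce_mg_l):
    """Percentage of the contaminant removed from solution at equilibrium."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, adsorbent_mass_g):
    """Equilibrium capacity q_e in mg of contaminant adsorbed per gram of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / adsorbent_mass_g

# Hypothetical jar test: 100 mL of a 10 mg/L arsenic solution contacted with 50 mg of
# doped CNT, leaving a residual concentration of 0.2 mg/L at equilibrium.
print(f"removal = {removal_efficiency(10.0, 0.2):.1f} %")          # 98.0 %
print(f"q_e = {adsorption_capacity(10.0, 0.2, 0.1, 0.05):.1f} mg/g")  # 19.6 mg/g
```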
Procedia PDF Downloads 234430 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE) to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen previously developed and well-known gas geothermometers, was statistically evaluated using an external database to avoid bias. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
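The following Python sketch illustrates the general ANN-regression workflow described above (gas-ratio inputs, BHT as the target, RMSE and R as performance metrics). It is only a schematic stand-in: it uses scikit-learn's MLPRegressor trained with L-BFGS rather than the Levenberg-Marquardt algorithm used in the study, and both the data and the assumed relation between gas ratios and temperature are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic training set: inputs are ln(CO2/H2) and ln(H2S/H2) ratios (mmol/mol basis),
# the target is the bottomhole temperature BHT (deg C) measured in producing wells.
X = rng.uniform(low=[2.0, -1.0], high=[8.0, 4.0], size=(200, 2))
bht = 150 + 25 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 5, 200)  # assumed relation + noise

# A small single-hidden-layer network; L-BFGS is used here purely for illustration,
# scikit-learn does not provide Levenberg-Marquardt training.
model = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, bht)

bht_ann = model.predict(X)
rmse = mean_squared_error(bht, bht_ann) ** 0.5   # lower RMSE = better geothermometer
r = np.corrcoef(bht, bht_ann)[0, 1]              # linear correlation coefficient R
print(f"RMSE = {rmse:.1f} deg C, R = {r:.3f}")
```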
Procedia PDF Downloads 354429 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th century, and is often termed 'white gold'. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (70%) have focused predominantly on synthetic brines rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case-study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity. Furthermore, the pristine Mn-based adsorbent was modified through transition-metal doping, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneity and endothermic nature of the adsorption process.Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
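As a generic illustration of the Langmuir fitting and selectivity calculations mentioned above (not the study's code), the sketch below fits a Langmuir isotherm with SciPy and derives a Li/Mg separation factor from distribution coefficients; all equilibrium data, volumes and masses are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: monolayer adsorption onto a finite set of identical sites."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g
c_e = np.array([5, 10, 25, 50, 100, 200], dtype=float)
q_e = np.array([8.1, 13.5, 21.0, 26.2, 30.5, 32.8])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[30.0, 0.05])
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")

# Separation factor of Li over a co-existing cation (here Mg), from distribution
# coefficients Kd = (C0 - Ce) * V / (m * Ce); all values below are assumed.
def distribution_coefficient(c0, ce, volume_l, mass_g):
    return (c0 - ce) * volume_l / (mass_g * ce)

kd_li = distribution_coefficient(c0=60.0, ce=6.0, volume_l=0.1, mass_g=0.1)
kd_mg = distribution_coefficient(c0=500.0, ce=490.0, volume_l=0.1, mass_g=0.1)
print(f"alpha(Li/Mg) = {kd_li / kd_mg:.0f}")
```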
Procedia PDF Downloads 46428 Team Teaching versus Traditional Pedagogical Method
Authors: L. M. H. Mustonen, S. A. Heikkilä
Abstract:
The focus of this paper is to describe team teaching as HAMK's pedagogical method and its impact on teachers' work. Background: Traditionally, teaching is thought of as a job where one mostly works alone, and more and more teachers feel that their work is getting more stressful. Solutions to these problems have been sought at Häme University of Applied Sciences (hereafter HAMK). HAMK has made a strategic change to move to group-oriented working of teachers: instead of isolated study courses, there are now larger 15-credit study modules. Implementation: As examples of the method, two cases are presented: the technical project module and the summer studies module, which was integrated into the EU development project Energy Efficiency with Precise Control. In autumn 2017, the technical project module will be implemented for the third time. At least three teachers are involved in it, and it is the first module of the new students. Its main focus is learning the basic skills of project work. From a communication viewpoint, the students learn the basics of written, oral and video reporting. According to our quality control system, the need for further development is evaluated at the end of the module. There are always some differences between implementations, but the basics are the same. The other case, the summer studies of 2017, is new and part of a larger EU project. For the first time, we took a larger group of first- to third-year students from different study programmes to the summer studies. The students learned professional skills as well as skills from different fields of study, international cooperation, and communication skills. Benefits and challenges: After three years, it is possible to consider what the changes mean in the everyday work of the teachers and, of course, what they mean for the students and the learning process. The perspective is that of HAMK's electrical and automation study programme: at first, the change always means more work. The routines developed over many years and the course material used for years may no longer be valid. Teachers teach in modules simultaneously, often with some subjects overlapping, and finding the time to plan the modules together is often difficult. The essential benefit is that the learning outcomes have improved, which can be seen in the feedback given by both the teachers and the students. Conclusions: A new type of working environment is being born. A team of teachers designs a module that matches the objectives and ponders the answers to such questions as: what are the knowledge-based targets of the module? Which pedagogical solutions will achieve the desired results? At what point do multiple teachers instruct the class together? How is the module evaluated? How can the module be developed further for the next execution? The team discusses openly and finds the solutions. Collegial responsibility and support are always present. These are strengthening factors of the new communal university teaching culture, and they are also strong sources of pleasure in work.Keywords: pedagogical development, summer studies, team teaching, well-being at work
Procedia PDF Downloads 109427 Geochemistry and Petrogenesis of High-K Calc-Alkaline Granitic Rocks of Song, Hawal Massif, N. E. Nigeria
Authors: Ismaila Haruna
Abstract:
The global downturn in fossil energy prices and dwindling oil reserves in Nigeria have ignited interest in the search for alternative sources of foreign income for the country. Solid minerals, particularly uranium and other base metals like lead and zinc, have been considered potentially good options. Several occurrences of these minerals have been discovered in both the sedimentary and granitic rocks of the Hawal and Adamawa Massifs as well as in the adjoining Benue Trough in northeastern Nigeria. However, the paucity of geochemical data and the consequent poor petrogenetic knowledge of the granitoids in this region have made exploration work difficult. Song, a small area within the Hawal Massif, was mapped, and the collected samples were chemically analysed at Activation Laboratory, Canada, by the fusion dissolution technique with Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Field mapping results show that the area is underlain by granites and diorites with pockets of gneisses and pegmatites, and that these rocks consist of microcline, quartz, plagioclase, biotite, hornblende, pyroxene and accessory apatite, zircon, sphene, magnetite and opaques in various proportions. Geochemical data show continuous compositional variation from diorite to granite within a silica range of 52.69 to 76.04 wt %. Plots of the data on various Harker variation diagrams show distinct evolutionary trends from diorites to granites, indicated by decreasing CaO, Fe2O3, MnO, MgO and TiO2, and increasing K2O, with increasing silica. This pattern is reflected in the trace element data which, in general, decrease from the diorites to the granites with rising Rb and K. Tectonic, triangular and other diagrams indicate high-K calc-alkaline trends, syn-collisional granite signatures and I-type characteristics, with A/CNK of less than 1.1 (minimum of 0.58 and maximum of 0.94) and a strong potassic character (K2O/Na2O > 1). However, only the granites are slightly peraluminous, containing a high silica percentage (68.46 to 76.04 wt %) and K2O (2.71 to 6.16 wt %) with low CaO (1.88 wt % on average). Chondrite-normalised rare earth element patterns indicate strongly fractionated REEs and enriched LREEs, with a slightly increasing negative Eu anomaly from the diorites to the granites. On the basis of the field and geochemical data, the granitoids are interpreted to be high-K calc-alkaline, I-type granitoids formed as a result of hybridization between mantle-derived magma and continental source materials (probably older meta-sediments) in a syn-collisional tectonic setting.Keywords: geochemistry, granite, Hawal Massif, Nigeria, petrogenesis, song
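For readers unfamiliar with the indices quoted above, the sketch below computes Shand's alumina saturation index (A/CNK) and the K2O/Na2O ratio from whole-rock oxide data; it is a generic calculation, not the study's workflow, and the sample composition used is hypothetical.

```python
# Shand's alumina saturation index (A/CNK) and K2O/Na2O from whole-rock wt% oxides.
# The oxide values below are hypothetical, not data from the Song samples.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def a_cnk(oxides_wt):
    """Molar Al2O3 / (CaO + Na2O + K2O); > 1 is peraluminous, < 1 metaluminous."""
    mol = {ox: oxides_wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])

sample = {"SiO2": 72.5, "Al2O3": 14.2, "CaO": 1.9, "Na2O": 3.1, "K2O": 5.0}
print(f"A/CNK = {a_cnk(sample):.2f}")
print(f"K2O/Na2O = {sample['K2O'] / sample['Na2O']:.2f}")  # > 1 indicates a potassic character
```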
Procedia PDF Downloads 239426 Climate Change, Women's Labour Markets and Domestic Work in Mexico
Authors: Luis Enrique Escalante Ochoa
Abstract:
This paper attempts to assess the impacts of climate change (CC) on inequalities in the labour market. CC will have the most serious effects on some vulnerable economic sectors, such as agriculture, livestock or tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market and particularly on Mexican women. Influential documents such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014 revived a global effort to counteract the effects of CC; they called for an analysis of the impacts on vulnerable socio-economic groups and on economic activities, and for the development of decision-making tools that allow policy and other decisions to reflect the complexity of the world in relation to climate change, taking into account socio-economic attributes. We follow up on this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes: gender and the level of qualification of workers. Most studies have focused on the effects of CC on the agricultural sector, as it is considered highly vulnerable to climate variability. This research seeks to contribute to the existing literature by taking into account, in addition to the agricultural sector, other sectors that are of vital importance to the Mexican economy, such as tourism, water availability, and energy. Likewise, the analysis of the effects of climate change is extended to the labour market and specifically to women, who in some cases have been left out. Some studies are sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which are too often omitted from analyses. This work will contribute to the literature by integrating domestic work, which in the case of Mexico is carried out much more by women than by men (80.9% vs. 19.1%), according to the 2009 time use survey. This study is relevant since it allows us to analyse the impacts of climate change not only on the labour market of the formal economy, but also on the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy, as it is a country with a high degree of gender inequality in the labour market. The OECD economic study for Mexico (2017) highlights the low labour participation of Mexican women: although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared to the OECD average of around 70%. According to Mexico's 2009 time use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them, is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.Keywords: climate change, labour market, domestic work, rural sector
Procedia PDF Downloads 132425 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy
Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury
Abstract:
Cavity polaritons form when molecular excitons couple strongly to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the excitation absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While the lifetime of the cavity photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling
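For context, the angle-dependent (dispersive) polariton energies mentioned above are commonly described with the textbook two-coupled-oscillator model sketched below. This is background material rather than a result of the abstract; in it, E_x is the exciton energy, E_c,0 the cavity cut-off energy, n_eff the effective intracavity refractive index, hbar*Omega_R the vacuum Rabi splitting, and theta the external angle of incidence.

\[
E_{c}(\theta) = \frac{E_{c,0}}{\sqrt{1 - \sin^{2}\theta / n_{\mathrm{eff}}^{2}}},
\qquad
E_{\mathrm{UP/LP}}(\theta) = \frac{E_{c}(\theta) + E_{x}}{2} \pm \frac{1}{2}\sqrt{\left(\hbar\Omega_{R}\right)^{2} + \left(E_{c}(\theta) - E_{x}\right)^{2}}
\]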
Procedia PDF Downloads 73424 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction, a Watson-Crick base pair between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modelling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation operators. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and the corresponding linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than the existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus and for studying the relations among other invariants.Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
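To make the intersection-graph construction concrete, the Python sketch below builds the intersection graph of a linear chord diagram given as a list of chord endpoint pairs on the backbone. It is an illustrative reading of the standard definition, not code from the paper, and the example diagram is made up.

```python
from itertools import combinations

def chords_cross(chord1, chord2):
    """Two chords of a linear chord diagram cross iff their endpoints interleave."""
    a, b = sorted(chord1)
    c, d = sorted(chord2)
    return a < c < b < d or c < a < d < b

def intersection_graph(chords):
    """Vertices are chord indices; an edge joins two chords whenever they cross."""
    graph = {i: set() for i in range(len(chords))}
    for i, j in combinations(range(len(chords)), 2):
        if chords_cross(chords[i], chords[j]):
            graph[i].add(j)
            graph[j].add(i)
    return graph

# Hypothetical LCD with backbone positions 0..5: chord 0 crosses chord 1,
# chord 1 crosses chord 2, and chords 0 and 2 do not cross.
lcd = [(0, 2), (1, 4), (3, 5)]
print(intersection_graph(lcd))  # {0: {1}, 1: {0, 2}, 2: {1}}
```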
Procedia PDF Downloads 203423 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this, the present study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a class-wise standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed is a time optimizer with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
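As an illustration of the measurement step described above (not the authors' pipeline), the Python sketch below extracts per-crystal area and perimeter from a binary segmentation mask with scikit-image; the mask, the pixel size and the conversion to nanometres are all assumed.

```python
import numpy as np
from skimage import measure

# A toy binary mask standing in for a U-net segmentation output; in practice this
# would come from thresholding the network's prediction on an SEM image.
mask = np.zeros((64, 64), dtype=bool)
mask[5:20, 5:25] = True    # hypothetical crystal 1
mask[30:55, 35:60] = True  # hypothetical crystal 2

labels = measure.label(mask)          # give each connected crystal its own integer label
props = measure.regionprops(labels)   # per-crystal geometric descriptors

pixel_size_nm = 2.0  # assumed SEM pixel size; converts pixel counts to nanometres
for p in props:
    area_nm2 = p.area * pixel_size_nm ** 2
    perimeter_nm = p.perimeter * pixel_size_nm
    print(f"crystal {p.label}: area = {area_nm2:.0f} nm^2, "
          f"perimeter = {perimeter_nm:.0f} nm, centroid = {p.centroid}")
```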
Procedia PDF Downloads 160422 Growth and Bone Health in Children following Liver Transplantation
Authors: Faris Alkhalil, Rana Bitar, Amer Azaz, Hisham Natour, Noora Almeraikhi, Mohamad Miqdady
Abstract:
Background: Children with liver transplants are achieving very good survival, so there is now a need to concentrate on achieving good health in these patients and preventing disease. Immunosuppressive medications have side effects that need to be monitored and, if possible, avoided. Glucocorticoids and calcineurin inhibitors are detrimental to bone and mineral homeostasis; in addition, steroids can also affect linear growth. Steroid-sparing regimens in children with renal transplants have been shown to improve height. Aim: We aim to review the growth and bone health of children after liver transplantation by measuring bone mineral density (BMD) using dual-energy X-ray absorptiometry (DEXA) scans and to assess whether there is a clear link between poor growth, impaired bone health and the use of long-term steroids. Subjects and Methods: This is a single-centre retrospective cohort study; we reviewed the medical notes of children (0-16 years) who underwent liver transplantation between November 2000 and November 2016 and are currently being followed at our centre. Results: 39 patients were identified (25 males and 14 females); the median age at transplant was 2 years (range 9 months - 16 years), and the median follow-up was 6 years. Four patients received a combined transplant: 2 a kidney and liver transplant and 2 a liver and small bowel transplant. The indications for transplant included biliary atresia (31%), acute liver failure (18%), progressive familial intrahepatic cholestasis (15%), transplantable metabolic disease (10%), TPN-related liver disease (8%), primary hyperoxaluria (5%), hepatocellular carcinoma (3%) and other causes (10%). 36 patients (95%) were on a calcineurin inhibitor (34 on tacrolimus and 2 on cyclosporin); the other three patients were on sirolimus. Low-dose long-term steroids were used in 21% of the patients. A considerable proportion of the patients had poor growth: 15% were below the 3rd centile for weight-for-age and 21% were below the 3rd centile for height-for-age. Most of our patients with poor growth were not on long-term steroids. 49% of patients had a DEXA scan post transplantation; 21% of these children had low bone mineral density, and one patient met osteoporosis criteria with a vertebral fracture. Most of our patients with impaired bone health were not on long-term steroids. 20% of the patients who did not undergo a DEXA scan developed long bone fractures, and 50% of them were on long-term steroids, which may suggest impaired bone health in these patients. Summary and Conclusion: The incidence of impaired bone health, although studied in a limited number of patients, was high. Early recognition and treatment should be instituted to avoid fractures and improve bone health. Many of the patients were below the 3rd centile for weight and height; however, there was no clear relationship between steroid use and impaired bone health, reduced weight or reduced linear height.Keywords: bone, growth, pediatric, liver, transplantation
Procedia PDF Downloads 279