Search results for: flow and heat transfer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8708

818 Hydrochemistry and Stable Isotopes (δ18O and δ2H) Tools Applied to the Study of Karst Aquifers in Wonderfonteinspruit Valley: North West, South Africa

Authors: Naziha Mokadem, Rainier Dennis, Ingrid Dennis

Abstract:

In South Africa, karst aquifers are receiving greater attention since they provide large supplies of water used for domestic, agricultural, and industrial purposes. Accordingly, a better insight into the origin of water mineralization and the geochemical processes controlling aquifer recharge is crucial. Analyses of geochemical and environmental isotopes can yield relevant information on karstification and infiltration processes, groundwater chemistry, and isotopic composition. A study was conducted in a typical karst landscape of the Wonderfonteinspruit catchment, also known as the Wonderfonteinspruit Valley, in north-western South Africa. Fifty-two samples were collected within the study area (35 boreholes, 5 surface waters, 4 dams, 4 springs, 1 canal, 2 pipelines, 1 cave) for hydrochemistry and 2H and 18O analysis. The anions (Cl-, SO42-, NO2-, NO3-) were determined using Metrohm ion chromatography (model 761 Compact IC) with a precision of ± 0.001 mg/l, while the cations (Na+, Mg2+, K+, Ca2+) were determined by ICP-MS (7500 series). Alkalinity was determined by volumetric titration with HCl to pH 4.5, 4.2, and 8.2, monitored with a pH meter. In addition, 18O and 2H, reported relative to Vienna Standard Mean Ocean Water (VSMOW), were determined with a Picarro L2130-i isotopic H2O analyzer (cavity ring-down laser spectrometer, Picarro Ltd). The hydrochemical analysis of Wonderfonteinspruit groundwater showed a dominance of the cations Ca and Mg and the anion HCO3. A Piper diagram shows that the groundwater samples of the study area fall into four hydrochemical facies: two main groups, (1) Ca–Mg–Cl–SO4 and (2) Ca–Mg–HCO3, and two minor groups, (3) Ca–Mg–Cl and (4) Na–K–HCO3.
The majority of boreholes of the Malmani (Transvaal Supergroup) aquifer plot in the Ca–Mg–HCO3 field. Oxygen-18 (18O‰ SMOW) and deuterium (D‰ SMOW) isotopic data indicate that the aquifer's recharge is influenced by two phenomena: precipitation for most of the samples and river flow (Wonderfonteinspruit, Middelvieinspruit, Renfonteinspruit) for some samples.
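A routine quality-control step behind major-ion analyses of this kind is the percent charge-balance error between summed cation and anion charges. The sketch below uses hypothetical ion concentrations for a Ca–Mg–HCO3 type water, not the study's data:

```python
def charge_balance_error(cations_meq, anions_meq):
    """Percent charge-balance error (CBE) between summed cation and anion
    charges, both in meq/L. |CBE| <= 5 % is a common acceptance criterion
    for major-ion analyses."""
    total_cat = sum(cations_meq.values())
    total_an = sum(anions_meq.values())
    return 100.0 * (total_cat - total_an) / (total_cat + total_an)

# Hypothetical Ca-Mg-HCO3 type sample (values in meq/L, illustrative only)
cations = {"Ca2+": 3.0, "Mg2+": 2.4, "Na+": 0.5, "K+": 0.1}
anions = {"HCO3-": 5.2, "Cl-": 0.4, "SO42-": 0.3, "NO3-": 0.1}
cbe = charge_balance_error(cations, anions)
```

A sample failing this check would normally be re-analyzed before being plotted on a Piper diagram.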

Keywords: South Africa, Wonderfonteinspruit Valley, isotopic, hydrochemical, carbonate aquifers

Procedia PDF Downloads 154
817 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute type A aortic dissection carries an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions, and their tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better type A aortic dissection treatment. Methods: A novel physics-informed deep residual network is trained with non-invasive 4D MRI displacement vectors as inputs; the trained model can then cost-effectively calculate the relevant biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the patients' expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
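The two ingredients named above, a residual network and a physics-informed training objective, can be sketched in a toy form. This is purely illustrative (random weights, made-up dimensions) and is not the authors' architecture; the "physics residual" is a placeholder for whatever governing-equation misfit the model would actually penalize:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    # Residual block: output is the input plus a learned correction
    h = np.maximum(W1 @ x, 0.0)  # ReLU
    return x + W2 @ h

# Toy network: 3 residual blocks on a 16-dim hidden state
d = 16
params = [(rng.standard_normal((d, d)) * 0.1,
           rng.standard_normal((d, d)) * 0.1) for _ in range(3)]
W_in = rng.standard_normal((d, 3)) * 0.1   # 3 displacement components in
W_out = rng.standard_normal((3, d)) * 0.1  # pressure, WSS, OSI out

def forward(u):
    x = W_in @ u
    for W1, W2 in params:
        x = residual_block(x, W1, W2)
    return W_out @ x

def physics_informed_loss(pred, target, physics_residual, lam=1.0):
    # Data misfit plus a weighted penalty on the governing-equation residual
    data = np.mean((pred - target) ** 2)
    physics = np.mean(physics_residual ** 2)
    return data + lam * physics

u = rng.standard_normal(3)  # one toy 4D-MRI displacement vector
pred = forward(u)
loss = physics_informed_loss(pred, np.zeros(3), np.zeros(3))
```

In a real implementation the physics residual would be evaluated by automatic differentiation of the network outputs against the flow equations, and the weights would be optimized rather than random.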

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 89
816 Angiogenic, Cytoprotective, and Immunosuppressive Properties of Human Amnion and Chorion-Derived Mesenchymal Stem Cells

Authors: Kenichi Yamahara, Makiko Ohshima, Shunsuke Ohnishi, Hidetoshi Tsuda, Akihiko Taguchi, Toshihiro Soma, Hiroyasu Ogawa, Jun Yoshimatsu, Tomoaki Ikeda

Abstract:

We have previously reported the therapeutic potential of rat fetal membrane (FM)-derived mesenchymal stem cells (MSCs) using various rat models, including hindlimb ischemia, autoimmune myocarditis, glomerulonephritis, renal ischemia-reperfusion injury, and myocardial infarction. In this study, 1) we isolated and characterized MSCs from human amnion and chorion; 2) we examined their differences in the expression profile of growth factors and cytokines; and 3) we investigated the therapeutic potential and differences of these MSCs using murine hindlimb ischemia and acute graft-versus-host disease (GVHD) models. Isolated MSCs from both the amnion and chorion layers of the FM showed similar morphological appearance, multipotency, and cell-surface antigen expression. Conditioned media obtained from amnion- and chorion-derived MSCs inhibited cell death caused by serum starvation or hypoxia in endothelial cells and cardiomyocytes. Amnion and chorion MSCs secreted significant amounts of angiogenic factors, including HGF, IGF-1, VEGF, and bFGF, although differences in the cellular expression profile of these soluble factors were observed. Transplantation of human amnion or chorion MSCs significantly increased blood flow and capillary density in a murine hindlimb ischemia model. In addition, compared to human chorion MSCs, human amnion MSCs markedly reduced T-lymphocyte proliferation with enhanced secretion of PGE2 and improved the pathology in a mouse model of GVHD. Our results highlight that human amnion- and chorion-derived MSCs, which showed differences in their soluble factor secretion and angiogenic/immunosuppressive functions, could be ideal cell sources for regenerative medicine.

Keywords: amnion, chorion, fetal membrane, mesenchymal stem cells

Procedia PDF Downloads 416
815 A New Approach for Preparation of Super Absorbent Polymers: In-Situ Surface Cross-Linking

Authors: Reyhan Özdoğan, Mithat Çelebi, Özgür Ceylan, Mehmet Arif Kaya

Abstract:

Super absorbent polymers (SAPs) are materials that can absorb a huge amount of water or aqueous solution relative to their own mass and retain it within their lightly cross-linked structure. SAPs are produced from water-soluble monomers via polymerization followed by controlled crosslinking. They are generally used in water-absorbing applications such as baby diapers, pads for patients and the elderly, and other hygienic products. The crosslinking density (CD) of the SAP structure is an essential factor for water absorption capacity (WAC): low internal CD leads to high WAC values and vice versa. However, SAPs with low CD and high swelling capacities tend to disintegrate when pressure is applied to them, so such SAPs cannot absorb liquids effectively under load. To prevent this undesired behavior and obtain SAP structures with both high swelling capacity and the ability to work under load, surface crosslinking can be the answer. In industry, these superabsorbent gels are mostly produced via solution polymerization and then need to be dried, ground, sized, post-polymerized, and finally surface-crosslinked (which involves spraying a crosslinking solution onto the dried and ground SAP particles and then curing by heat). These steps are time-consuming and must be handled carefully to reach the desired final product. If the desired final SAPs could be synthesized in fewer steps, time and production costs, which are very important for any industry, would be reduced. In this study, SAPs were synthesized successfully by inverse suspension (Pickering-type) polymerization followed by in-situ surface crosslinking, using proper surfactants in high-boiling-point solvents.
This one-pot synthesis of surface-crosslinked SAPs involves only one preparation step, so the technique is more attractive to industry than the conventional multi-step methods. The effects of different surface crosslinking agents on the properties of poly(acrylic acid-co-sodium acrylate)-based SAPs are investigated, and surface crosslink degrees are evaluated by the swelling-under-load (SUL) test. It was determined that the water absorption capacities of the obtained SAPs decrease with increasing surface crosslink density while their mechanical properties improve.
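The water absorption capacity discussed above is conventionally reported as grams of liquid absorbed per gram of dry polymer. A minimal sketch of that gravimetric calculation, with made-up masses rather than the study's measurements:

```python
def water_absorption_capacity(dry_mass_g, swollen_mass_g):
    """Gravimetric swelling capacity Q (g liquid per g dry polymer):
    Q = (m_swollen - m_dry) / m_dry, as in a teabag-style test."""
    return (swollen_mass_g - dry_mass_g) / dry_mass_g

# Hypothetical values (illustrative only): 0.5 g of dry SAP swells to 150.5 g
q = water_absorption_capacity(0.5, 150.5)  # g/g
```

A swelling-under-load (SUL) test applies the same calculation while the sample absorbs liquid under a fixed pressure, which is where surface-crosslinked SAPs show their advantage.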

Keywords: inverse suspension polymerization, polyacrylic acid, super absorbent polymers (SAPs), surface crosslinking, sodium polyacrylate

Procedia PDF Downloads 323
814 H2/He and H2O/He Separation Experiments with Zeolite Membranes for Nuclear Fusion Applications

Authors: Rodrigo Antunes, Olga Borisevich, David Demange

Abstract:

In future nuclear fusion reactors, tritium self-sufficiency will be ensured by tritium (3H) production via reactions between the fusion neutrons and lithium. To favor tritium breeding, a neutron multiplier must also be used. Both the tritium breeder and the neutron multiplier will be placed in the so-called Breeding Blanket (BB). For the European Helium-Cooled Pebble Bed (HCPB) BB concept, tritium production and neutron multiplication will be ensured by neutron bombardment of Li4SiO4 and Be pebbles, respectively. The produced tritium is extracted from the pebbles by purging them with large flows of He (~ 10⁴ Nm³h⁻¹), doped with small amounts of H2 (~ 0.1 vol%) to promote tritium extraction via isotopic exchange (producing HT). Due to the presence of oxygen in the pebbles, the production of tritiated water is unavoidable. Therefore, the purge gas downstream of the BB will be composed of Q2/Q2O/He (Q = 1H, 2H, 3H), with Q2/Q2O down to ppm levels, and must be further processed for tritium recovery. A two-stage continuous approach, in which zeolite membranes (ZMs) are followed by a catalytic membrane reactor (CMR), has recently been proposed to fulfil this task. Tritium recovery from Q2/Q2O/He is ensured by the CMR, which requires a reduction of the gas flow coming from the BB and a pre-concentration of Q2 and Q2O to be efficient. For this reason, and to keep this stage at reasonable dimensions, ZMs are required upstream to reduce the He flow as much as possible and concentrate the Q2/Q2O species. Experimental activities have therefore been carried out at the Tritium Laboratory Karlsruhe (TLK) to test the separation performance of different zeolite membranes for H2/H2O/He. The first experiments were performed with binary mixtures of H2/He and H2O/He using commercial MFI-ZSM5 and NaA zeolite-type membranes.
Only the MFI-ZSM5 demonstrated selectivity towards H2, with a separation factor around 1.5 and H2 permeances around 0.72 µmol·m⁻²·s⁻¹·Pa⁻¹, largely independent of feed concentration in the range 0.1–10 vol% H2/He. The experiments with H2O/He demonstrated that the separation factor towards H2O depends strongly on feed concentration and temperature. For instance, at 30 °C the separation factor with NaA is below 2 at 0.2 vol% H2O/He but around 1000 at 5 vol% H2O/He. Overall, both membranes showed complementary results at equivalent temperatures: at low feed concentrations (≤ 1 vol% H2O/He), MFI-ZSM5 separates better than NaA, whereas the latter has higher separation factors at higher inlet water contents (≥ 5 vol% H2O/He). In this contribution, the results obtained with both MFI-ZSM5 and NaA membranes for H2/He and H2O/He mixtures at different concentrations and temperatures are compared and discussed.
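For a binary mixture, the separation factor quoted above is conventionally the ratio of composition ratios between permeate and feed. A small sketch with hypothetical compositions (chosen for illustration, not taken from these experiments):

```python
def separation_factor(y_perm, x_feed):
    """Binary separation factor alpha = (y/(1-y)) / (x/(1-x)), where y and x
    are the mole fractions of the preferentially permeating species in the
    permeate and feed, respectively."""
    return (y_perm / (1.0 - y_perm)) / (x_feed / (1.0 - x_feed))

# Hypothetical: feed 5 vol% H2O in He, permeate enriched to 98 vol% H2O
alpha = separation_factor(0.98, 0.05)
```

An alpha near 1 (as for H2/He through MFI-ZSM5) means almost no enrichment, while values in the hundreds to thousands (as for H2O/He through NaA at high feed concentration) indicate strong water selectivity.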

Keywords: nuclear fusion, gas separation, tritium processes, zeolite membranes

Procedia PDF Downloads 288
813 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk

Authors: Moses Jenkins

Abstract:

Much of the focus when improving energy efficiency in buildings falls on raising standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of such structures: on average, around twenty percent of buildings across Europe are of historic masonry construction. To meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, this must be balanced against maintaining the ability of historic masonry construction to allow moisture movement through the building fabric. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to the building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made, including insulating mass masonry walls both internally and externally, warm and cold roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work carried out on these buildings after they were retrofitted.
This will cover both the thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and, crucially, the results of moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. The monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of wall, roof, and floor insulation materials which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
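The U-value reported above can be estimated for a layered wall from the layer resistances. A minimal sketch, with assumed layer thicknesses and conductivities that are illustrative rather than taken from the monitored buildings:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """Thermal transmittance U [W/(m^2.K)] of a plane element:
    U = 1 / (Rsi + sum(d_i / lambda_i) + Rse), with thickness d in metres
    and conductivity lambda in W/(m.K). Default surface resistances are the
    conventional internal/external values for a wall."""
    r_total = r_si + sum(d / lam for d, lam in layers) + r_se
    return 1.0 / r_total

# Hypothetical retrofit: 600 mm solid masonry (lambda ~1.1 W/m.K) plus
# 100 mm of vapour-permeable insulation (lambda ~0.038 W/m.K)
u = u_value([(0.600, 1.1), (0.100, 0.038)])  # ~0.3 W/m2K
```

The example shows why the insulation layer dominates: the masonry alone contributes only about 0.55 m²K/W of resistance, while the thin insulation layer contributes roughly five times that.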

Keywords: insulation, condensation, masonry, historic

Procedia PDF Downloads 173
812 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds, including emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), from aqueous media because of their possible ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid in sonoreactors operated at different ultrasonic frequencies. It is therefore necessary to study the hydrodynamic behavior of the liquid generated by ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performance of the sonoreactors at these frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound-speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric transducer was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were estimated by CFD and 2D-PIV measurements.
Test results show that an increase in ultrasonic frequency does not necessarily correlate with improved pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.
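The power density cited as a key design factor is commonly obtained from a calorimetric measurement of the acoustic power actually delivered to the liquid. A sketch with hypothetical numbers (the mass, heating rate, and cell volume are illustrative, not from this study):

```python
def ultrasonic_power_w(mass_kg, dT_dt, cp=4186.0):
    """Calorimetric estimate of acoustic power delivered to the liquid:
    P = m * cp * dT/dt, with dT/dt the initial temperature rise rate [K/s]
    under sonication (cp defaults to that of water, J/(kg.K))."""
    return mass_kg * cp * dT_dt

def power_density_w_per_l(power_w, volume_l):
    """Power density, a key sonoreactor design and scale-up parameter."""
    return power_w / volume_l

# Hypothetical run: 0.5 kg of water heating at 0.02 K/s in a 0.5 L cell
p = ultrasonic_power_w(0.5, 0.02)
pd = power_density_w_per_l(p, 0.5)
```

Comparing reactors at equal power density rather than equal nominal transducer power is what makes frequency comparisons like the one above meaningful.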

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 190
811 Sorghum Polyphenols Encapsulated by Spray Drying, Using Modified Starches as Wall Materials

Authors: Adriana Garcia G., Alberto A. Escobar P., Amira D. Calvo L., Gabriel Lizama U., Alejandro Zepeda P., Fernando Martínez B., Susana Rincón A.

Abstract:

Different studies have recently focused on the use of antioxidants such as polyphenols because of their anticarcinogenic capacity. However, these compounds are highly sensitive to environmental factors such as light and heat, so they lose their long-term stability, and they also possess an astringent and bitter taste. Nevertheless, polyphenols can be protected by microcapsule formulation. In this sense, sorghum is a rich source of polyphenols and also has a high starch content. The aim of this work was therefore to obtain modified starches from sorghum by extrusion and use them to encapsulate sorghum polyphenols by spray drying. Polyphenols were extracted from sorghum (Pajarero/red) with an ethanol solution and determined by the Folin-Ciocalteu method, obtaining 30 mg GAE/g. Starch was extracted from sorghum (Sinaloense/white) through wet milling (yield 32%). The hydrolyzed starch was modified by extrusion with three treatments: acetic anhydride (2.5 g/100 g), sodium tripolyphosphate (4 g/100 g), and sodium tripolyphosphate/acetic anhydride (2 g/1.25 g per 100 g). The extrusion barrel temperatures were 60, 130, and 170 °C at the feeding, transition, and high-pressure extrusion zones, respectively. Fourier transform infrared (FTIR) spectroscopy showed bands of acetyl groups (1735 cm-1) and phosphates (1170 cm-1, 910 cm-1, and 525 cm-1), confirming the respective modifications of the starch. In addition, none of the modified starches developed viscosity, a characteristic required for their use in the encapsulation of polyphenols by spray drying. As a result of the starch modification, a water solubility index (WSI) of 33.8 to 44.8% and a crystallinity of 8 to 11% were obtained, indicating the destruction of the starch granule.
Afterwards, microencapsulation of the polyphenols was carried out by spray drying a blend of 10 g of modified starch, 60 ml of polyphenol extract, and 30 ml of distilled water. Drying conditions were as follows: inlet air temperature 150 ± 1 °C, outlet air temperature 80 ± 5 °C. The microencapsulation gave yields of 56.8 to 77.4% and encapsulation efficiencies of 84.6 to 91.4%. FTIR analysis showed evidence of microcapsules loaded with polyphenols in the bands at 1042 cm-1, 1038 cm-1, and 1148 cm-1. Differential scanning calorimetry (DSC) showed transition temperatures from 144.1 to 173.9 °C. On the other hand, scanning electron microscopy (SEM) showed rounded surfaces with concavities, a typical feature of microcapsules produced by spray drying resulting from the rapid evaporation of water. Finally, modified starches with good characteristics for use as wall materials in spray drying were obtained by extrusion; the phosphorylated starch was the best treatment in this work according to encapsulation yield, efficiency, and transition temperature.
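The yield and encapsulation efficiency figures above follow standard mass-balance definitions. A minimal sketch with invented masses (not the study's raw data):

```python
def encapsulation_efficiency(total_phenolics_mg, surface_phenolics_mg):
    """EE (%) = (total - surface) / total * 100, i.e. the share of
    polyphenols actually held inside the microcapsule wall rather than
    sitting on the particle surface."""
    return 100.0 * (total_phenolics_mg - surface_phenolics_mg) / total_phenolics_mg

def process_yield(powder_mass_g, feed_solids_g):
    """Spray-drying yield (%) = recovered powder mass / solids fed in."""
    return 100.0 * powder_mass_g / feed_solids_g

# Hypothetical masses (illustrative only)
ee = encapsulation_efficiency(30.0, 3.0)  # 90 % of phenolics encapsulated
y = process_yield(7.0, 10.0)              # 70 % of solids recovered as powder
```

Surface phenolics are typically quantified by a quick wash of the powder, while total phenolics require breaking the capsules open before the Folin-Ciocalteu assay.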

Keywords: encapsulation, extrusion, modified starch, polyphenols, spray drying

Procedia PDF Downloads 308
810 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation

Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu

Abstract:

Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both a small saturation magnetization (MS) and a rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application to current-induced domain wall (DW) motion devices using spin-transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline quality and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD), the magnetic properties were measured by vibrating sample magnetometer (VSM), and the anomalous Hall effect was measured with a physical properties measurement system; both the VSM and Hall measurements were performed at room temperature. The temperature dependence of the magnetization was measured with a VSM-superconducting quantum interference device. XRD patterns indicate epitaxial growth of the Mn₄₋ₓNiₓN thin films on both substrates; those on STO(001) have higher c-axis orientation thanks to better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x; for example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of the Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that the remanence ratio Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion device applications. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of the magnetization.
The magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature for x = 0 and 0.1, while it decreases for x = 0.25. We consider that these reversals are caused by a magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms in the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current densities. Moreover, if angular momentum compensation is found, the efficiency will be optimized further. To prove the magnetic compensation, X-ray magnetic circular dichroism measurements will be performed, and energy-dispersive X-ray spectrometry is a candidate method for analyzing the accurate composition ratio of the samples.
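The compensation argument above can be illustrated with a toy collinear two-sublattice model. All sublattice moments in this sketch are invented placeholders chosen only so that the sign change falls between x = 0.1 and 0.25 as observed; they are not measured values for Mn₄₋ₓNiₓN:

```python
def net_moment(x, m_corner_mn=3.85, m_face_mn=1.07, m_ni=0.60):
    """Toy collinear-ferrimagnet model per Mn(4-x)Ni(x)N formula unit:
    one corner site (moment 'up') antiparallel to three face-centred
    sites (moments 'down'), with Ni assumed to substitute on the corner
    site. Moments are in Bohr magnetons and are illustrative only."""
    corner = (1.0 - x) * m_corner_mn + x * m_ni
    face = 3.0 * m_face_mn
    return corner - face

m_01 = net_moment(0.10)   # positive: corner sublattice still dominates
m_025 = net_moment(0.25)  # negative: net moment has flipped past compensation
```

In such a model the net magnetization passes through zero at the compensation composition, which is the regime where the abstract expects the most efficient STT-driven domain wall motion.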

Keywords: compensation, ferrimagnetism, Mn₄N, PMA

Procedia PDF Downloads 134
809 The Politics of Foreign Direct Investment for Socio-Economic Development in Nigeria: An Assessment of the Fourth Republic Strategies (1999 - 2014)

Authors: Muritala Babatunde Hassan

Abstract:

In the contemporary global political economy, foreign direct investment (FDI) is gaining currency on a daily basis. Notably, the end of the Cold War brought about the dominance of neoliberal ideology, with its mantra of a private-sector-led economy. As such, nation-states now see FDI attraction as an important element in their approach to national development. Governments and policy makers are preoccupied with unraveling the best strategies not only to attract more FDI but also to attain the desired socio-economic development status. In Nigeria, the perceived development potential of FDI has brought about an aggressive hunt for foreign investors, most especially since the transition to civilian rule in May 1999. A series of liberal and market-oriented strategies has been adopted not only to attract foreign investors but largely to stimulate private sector participation in the economy. It is on this premise that this study interrogates the politics of FDI attraction for domestic development in Nigeria between 1999 and 2014, with the ultimate aim of examining the nexus between regime type and the ability of a state to attract and benefit from FDI. Building its analysis within the framework of institutional utilitarianism, the study posits that the essential FDI strategies for achieving the greatest happiness for the greatest number of Nigerians are political, not economic. Both content analysis and descriptive survey methodology were employed in carrying out the study. The content analysis involved a desk review of the literature that culminated in the development of the study's conceptual and theoretical framework. The study finds no significant relationship between the transition to democracy and FDI inflows in Nigeria, as most of the investment attracted during the period of the study was market- and resource-seeking, as was the case during the military regime, and thereby contributed minimally to the socio-economic development of the country.
It is also found that the country placed much emphasis on liberalization and incentives for FDI attraction while neglecting to improve the domestic investment environment. Consequently, the poor state of infrastructure, weak institutional capability, and insecurity were identified as the major factors seriously hindering Nigeria's success in exploiting FDI for domestic development. Given that FDI is a vector of economic globalization and that Nigeria is pursuing a private-sector-led approach to development, it is recommended that emphasis be placed on measures aimed at improving infrastructural facilities, building a solid institutional framework, enhancing skill and technology transfer, and coordinating FDI promotion activities across different agencies and levels of government.

Keywords: foreign capital, politics, socio-economic development, FDI attraction strategies

Procedia PDF Downloads 164
808 Best Practical Technique to Drain Recoverable Oil from Unconventional Deep Libyan Oil Reservoir

Authors: Tarek Duzan, Walid Esayed

Abstract:

Fluid flow in porous media is fundamentally controlled by parameters set by the depositional and post-depositional environments. After deposition, diagenetic events can act negatively on the reservoir and reduce the effective porosity, thereby making the rock less permeable. Therefore, exploiting hydrocarbons from such resources requires partially altering the rock properties to improve the long-term production rate and enhance recovery efficiency. In this study, we first address the phenomenon of permeability reduction in tight sandstone reservoirs and describe the procedures implemented to investigate its root causes; we then benchmark the candidate solutions at field scale and recommend a mitigation strategy for the field development plan. Two investigations were considered: subsurface analysis using production logging (PLT) and laboratory tests on four candidate wells of the reservoir of interest. Based on these investigations, the production logging tool (PLT) showed that the contributing intervals in the reservoir are very limited considering the total reservoir thickness. Alcohol treatment was the first choice for the AA9 well; the well's productivity was partially restored, but not to its initial level. Furthermore, alcohol treatment in the laboratory was effective and restored permeability in some plugs by 98%, but operationally the challenge would be distributing enough alcohol in a wellbore to attain the sweep efficiency obtained in a laboratory core plug. However, the second solution, based on fracking the wells, has shown excellent results, especially for wells that suffered a large drop in oil production. It is suggested to frac and pack the wells that are already damaged in the Waha field to mitigate the damage and restore productivity as much as possible.
In addition, the critical fluid velocity and its effect on fine sand migration in the reservoir must be carefully studied on core samples so that a suitable pressure drawdown can be applied in the reservoir to limit fine sand migration.

Keywords: alcohol treatment, post-depositional environments, permeability, tight sandstone

Procedia PDF Downloads 68
807 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. Then, building operation description in energy models, especially energy usages and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration ofresidential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one year and half and coverbuilding energy behavior – thermal and electricity, indoor environment, inhabitants’ comfort, occupancy, occupants behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleaides software), where the buildings’features are implemented according to the buildingsthermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features regarding each end-use. These features are then compared with the collected post-occupancy data. 
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study undergoing deep retrofit actions. They highlight the impact of the different building features on the energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 159
806 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and the assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 161
805 The Influence of Firm Characteristics on Profitability: Evidence from Italian Hospitality Industry

Authors: Elisa Menicucci, Guido Paolucci

Abstract:

Purpose: The aim of this paper is to investigate the factors influencing profitability in the Italian hospitality industry during the period 2008-2016. Design/methodology/approach: This study examines profitability and its determinants using a sample of 2,366 Italian hotel firms. First, we use a multidimensional measure of profitability including attributes such as return on equity, return on assets and occupancy rate. Second, we examine variables that are potentially related to performance and sort them into five categories: market variables, business model, ownership structure, management education and control variables. Findings: The results show that the financial crisis, business model and ownership structure influence the profitability of hotel firms. Specific factors such as internationalization, location, declaring accommodation as the primary activity and chain affiliation are positively associated with profitability. We also find that larger hotel firms have higher performance rankings, while hotels with higher operating cash flow volatility, greater sales volatility and a higher occurrence of losses have lower profitability. Research limitations/implications: The findings suggest the importance of considering firm-specific factors when evaluating the profitability of a hotel firm. The results also provide evidence for academics to critically evaluate factors that would ensure the profitability of hotels in developed countries such as Italy. Practical implications: This investigation offers valuable information and strategic implications for government, tourism policymakers, tourist hotel owners, hoteliers and tourism managers in their decision-making. Originality/value: This paper provides interesting insights into the characteristics and practices of profitable hotels in Italy. Few econometric studies have so far empirically explored the determinants of performance in the European hospitality field.
Therefore, this paper tries to close an important gap in the existing literature by improving the understanding of profitability in the Italian hospitality industry.
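Determinants studies of this kind are commonly estimated as a linear regression of a profitability measure (e.g., ROA) on firm characteristics. The sketch below uses entirely synthetic data and hypothetical variable names, constructed only to reproduce the signs reported in the findings; it is not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical hotel-firm observations

# Illustrative explanatory variables (all synthetic)
size   = rng.normal(0, 1, n)        # log total assets
chain  = rng.integers(0, 2, n)      # chain-affiliation dummy
crisis = rng.integers(0, 2, n)      # financial-crisis year dummy
volat  = rng.normal(0, 1, n)        # sales volatility

# Synthetic ROA consistent with the signs reported in the abstract:
# size and chain affiliation positive, crisis and volatility negative
roa = (0.05 + 0.02 * size + 0.01 * chain - 0.03 * crisis
       - 0.015 * volat + rng.normal(0, 0.01, n))

X = np.column_stack([np.ones(n), size, chain, crisis, volat])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
for name, b in zip(["const", "size", "chain", "crisis", "volatility"], beta):
    print(f"{name:>10}: {b:+.4f}")
```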

Keywords: hotel firms, profitability, determinants, Italian hospitality industry

Procedia PDF Downloads 389
804 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie

Authors: Olubisi Friday Oluduro

Abstract:

The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this increase to 1.5 degrees Celsius. An advancement on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goals of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. To achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching global peaking of GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse impacts of climate change. The Agreement also contains comprehensive provisions on support to be provided to developing countries, including finance, technology transfer and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that best suit them (Nationally Determined Contributions) and providing a mechanism for assessing progress and increasing global ambition over time through a regular global stocktake.
Despite the global scope of the Agreement, it has been fraught with manifold limitations threatening its very capacity to produce any meaningful result. Some of these limitations, such as the non-participation of the United States and the non-payment of funds into the various coffers set up for strategic purposes, were the very cause of the failure of its predecessor, the Kyoto Protocol. They have left the developing countries, which are more vulnerable than the developed countries actually responsible for the climate change scourge, even more threatened. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the situation since the Agreement was concluded, ascertain whether the developing countries have been better or worse off since then, and examine why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.

Keywords: mitigation, adaptation, climate change, Paris Agreement 2015, framework

Procedia PDF Downloads 157
803 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer

Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz

Abstract:

Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) has primarily derived from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and to characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s² lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb2 show hemidirected pentagonal pyramidal geometries, while Pb3 and Pb4 display hemidirected octahedral geometries.
The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increase: Pb1 exhibits a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibit holodirected tricapped trigonal prismatic geometries, and Pb3 exhibits a holodirected bicapped trigonal prismatic geometry. Moreover, the stereoactivity of the Pb(II) lone pair was confirmed by DFT calculations. The 2D structure is expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, as confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between metal centres, making the polymer a promising luminescent compound.

Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions

Procedia PDF Downloads 145
802 A Unified Model for Predicting Particle Settling Velocity in Pipe, Annulus and Fracture

Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li

Abstract:

Transport of solid particles through the drill pipe, the drill string-hole annulus and hydraulically generated fractures is an important dynamic process encountered in oil and gas well drilling and completion operations. Unlike particle transport in infinite space, the transport of cuttings, proppants and formation sand is hindered by a finite boundary. Therefore, an accurate description of particle transport behavior under the bounded wall conditions encountered in drilling and hydraulic fracturing operations is needed to improve drilling safety and efficiency. In this study, particle settling experiments were carried out to investigate particle settling behavior in a pipe, an annulus and between parallel plates filled with power-law fluids. Experimental conditions covered particle Reynolds numbers of 0.01-123.87, dimensionless diameters of 0.20-0.80 and fluid flow behavior indices of 0.48-0.69. First, the wall effect of the annulus is revealed by analyzing the settling process of particles in annular geometries with variable inner pipe diameter. Then, the geometric continuity among the pipe, annulus and parallel plates was established by introducing the ratio of the inner to the outer diameter of the annulus. Further, a unified dimensionless diameter was defined to relate the three different geometries in terms of the wall effect. In addition, a dimensionless term independent of the settling velocity was introduced to establish a unified explicit settling velocity model applicable to pipes, annuli and fractures, with a mean relative error of 8.71%. An example case study demonstrates the application of the unified model for predicting particle settling velocity.
This paper is the first study of annulus wall effects based on the geometric continuity concept, and the unified model presented here will provide theoretical guidance for the improved hydraulic design of cuttings transport, proppant placement and sand management operations.
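The unified correlation itself is not reproduced in the abstract. As a minimal illustration of the wall effect it quantifies, the sketch below applies the classical Ladenburg correction, f = 1/(1 + 2.104·d/D), to Stokes settling of a sphere on the axis of a pipe. This assumes a Newtonian fluid in the creeping-flow regime and is not the authors' power-law model; the fluid and particle properties are hypothetical:

```python
def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Unbounded Stokes settling velocity (m/s) of a small sphere."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def wall_corrected_velocity(d, D, rho_p, rho_f, mu):
    """Settling velocity on the axis of a pipe of diameter D,
    using the classical Ladenburg correction 1 / (1 + 2.104 * d/D)."""
    lam = d / D  # dimensionless diameter, as in the study
    return stokes_velocity(d, rho_p, rho_f, mu) / (1.0 + 2.104 * lam)

# 1 mm sand grain (2650 kg/m^3) in a viscous fluid (0.1 Pa.s),
# settling in pipes of decreasing diameter: the wall slows it down
d, rho_p, rho_f, mu = 1e-3, 2650.0, 1000.0, 0.1
for D in (20e-3, 5e-3, 2e-3):
    v = wall_corrected_velocity(d, D, rho_p, rho_f, mu)
    print(f"d/D = {d/D:.2f}: v = {v * 1000:.2f} mm/s")
```

As the dimensionless diameter d/D grows, the predicted velocity falls well below the unbounded value, which is the wall effect the unified model captures across pipe, annulus and fracture geometries.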

Keywords: wall effect, particle settling velocity, cuttings transport, proppant transport in fracture

Procedia PDF Downloads 160
801 Longitudinal Profile of Antibody Response to SARS-CoV-2 in Patients with Covid-19 in a Setting from Sub–Saharan Africa: A Prospective Longitudinal Study

Authors: Teklay Gebrecherkos

Abstract:

Background: Serological testing for SARS-CoV-2 plays an important role in epidemiological studies, in aiding the diagnosis of COVID-19, and in assessing vaccine responses. Little is known about the dynamics of SARS-CoV-2 serology in African settings. Here, we aimed to characterize the longitudinal antibody response profile to SARS-CoV-2 in Ethiopia. Methods: In this prospective study, a total of 102 PCR-confirmed COVID-19 patients were enrolled, and 802 serially collected plasma samples were obtained. SARS-CoV-2 antibodies were determined using four lateral flow immunoassays (LFIAs) and an electrochemiluminescent immunoassay. We determined the longitudinal antibody response to SARS-CoV-2 as well as seroconversion dynamics. Results: The serological positivity rate ranged between 12% and 91%, depending on the timing after symptom onset. There was no difference in the positivity rate between severe and non-severe COVID-19 cases. Specificity ranged between 90% and 97%. Agreement between the different assays ranged between 84% and 92%. The estimated positive predictive value (PPV) for IgM or IgG in a scenario with seroprevalence at 5% varies from 33% to 58%. Nonetheless, when the population seroprevalence increases to 25% and 50%, there is a corresponding increase in the estimated PPVs. The estimated negative predictive value (NPV) in a low-seroprevalence scenario (5%) is high (>99%). However, the estimated NPV in a high-seroprevalence scenario (50%) for IgM or IgG is significantly reduced, to 80%-85%. Overall, 28/102 (27.5%) seroconverted by one or more of the assays tested, within a median time of 11 (IQR: 9-15) days post symptom onset. The median seroconversion time among symptomatic cases tended to be shorter than among asymptomatic patients [9 (IQR: 6-11) vs. 15 (IQR: 13-21) days; p = 0.002]. Overall, seroconversion reached 100% 5.5 weeks after the onset of symptoms.
Notably, of the remaining 74 COVID-19 patients included in the cohort, 64 (62.8%) were positive for antibodies at the time of enrollment, and 10 (9.8%) patients failed to mount a detectable antibody response by any of the assays tested during follow-up. Conclusions: Longitudinal assessment of the antibody response in African COVID-19 patients revealed heterogeneous responses. This underscores the need for a comprehensive evaluation of serological assays before implementation. Factors associated with failure to seroconvert need further research.
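The PPV and NPV figures quoted above follow directly from Bayes' rule given an assay's sensitivity, specificity, and the population seroprevalence. A minimal sketch; the 91%/90% sensitivity/specificity pair is an illustrative choice within the ranges reported in the abstract, not a specific assay's figures:

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' rule."""
    tp = sens * prev            # true positives per tested person
    fp = (1 - spec) * (1 - prev)  # false positives per tested person
    return tp / (tp + fp)

def npv(sens, spec, prev):
    """Negative predictive value via Bayes' rule."""
    tn = spec * (1 - prev)      # true negatives per tested person
    fn = (1 - sens) * prev      # false negatives per tested person
    return tn / (tn + fn)

# Three seroprevalence scenarios, as in the abstract
for prev in (0.05, 0.25, 0.50):
    print(f"prev = {prev:.0%}: PPV = {ppv(0.91, 0.90, prev):.1%}, "
          f"NPV = {npv(0.91, 0.90, prev):.1%}")
```

At 5% seroprevalence this parameter pair gives a PPV near the lower end of the reported 33%-58% range while the NPV stays above 99%, and raising the prevalence raises the PPV while lowering the NPV, matching the pattern described in the results.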

Keywords: COVID-19, antibody, rapid diagnostic tests, Ethiopia

Procedia PDF Downloads 82
800 Role of Autophagic Lysosome Reformation for Cell Viability in an in vitro Infection Model

Authors: Muhammad Awais Afzal, Lorena Tuchscherr De Hauschopp, Christian Hübner

Abstract:

Introduction: Autophagy is an evolutionarily conserved lysosome-dependent degradation pathway, which can be induced by extrinsic and intrinsic stressors in living systems to adapt to fluctuating environmental conditions. In the context of inflammatory stress, autophagy contributes to the elimination of invading pathogens, the regulation of innate and adaptive immune mechanisms, the regulation of inflammasome activity, and tissue damage repair. Lysosomes can be recycled from autolysosomes by the process of autophagic lysosome reformation (ALR), which depends on the presence of several proteins, including Spatacsin. ALR thus contributes to the replenishment of lysosomes available for fusion with autophagosomes in situations of increased autophagic turnover, e.g., during bacterial infections, inflammatory stress or sepsis. Objectives: We aimed to assess whether ALR plays a role in cell survival in an in vitro bacterial infection model. Methods: Mouse embryonic fibroblasts (MEFs) were isolated from wild-type mice and Spatacsin knockout (Spg11-/-) mice. Wild-type and Spg11-/- MEFs were infected with Staphylococcus aureus at a multiplicity of infection (MOI) of 10. After 8 and 16 hours of infection, cell viability was assessed on a BD flow cytometer through propidium iodide uptake. Bacterial uptake by the cells was also quantified by plating cell lysates on blood agar plates. Results: In vitro infection of MEFs with Staphylococcus aureus showed a marked decrease of cell viability in ALR-deficient Spatacsin knockout (Spg11-/-) MEFs after 16 hours of infection as compared to wild-type MEFs (n=3 independent experiments; p < 0.0001), although no difference in bacterial uptake was observed between the genotypes. Conclusion: We observed a marked increase of cell death in cells with compromised ALR in an in vitro infection model, suggesting that ALR is important for the defense against invading pathogens such as S. aureus.

Keywords: autophagy, autophagic lysosome reformation, bacterial infections, Staphylococcus aureus

Procedia PDF Downloads 144
799 Psychometric Properties of Several New Positive Psychology Measures

Authors: Lauren Benyo Linford, Jared Warren, Jeremy Bekker, Gus Salazar

Abstract:

In order to accurately identify areas needing improvement and track growth, the availability of valid and reliable measures of different facets of well-being is vital. Because no specific measures currently exist for many facets of well-being, the purpose of this study was to construct and validate measures of the following constructs: Purpose, Values, Mindfulness, Savoring, Gratitude, Optimism, Supportive Relationships, Interconnectedness, Compassion, Community, Contribution, Engaged Living, Personal Growth, Flow Experiences, Self-Compassion, Exercise, Meditation, and an overall measure of subjective well-being, the Survey on Flourishing (SURF). To assess their psychometric properties, each measure was examined for internal consistency, and items with poor item-test correlations were dropped. Additionally, the convergent validity of SURF was assessed: total score correlations between SURF and other commonly used measures of well-being, such as the Positive and Negative Affect Schedule (PANAS), the Satisfaction with Life Scale (SWLS) and the PERMA Profiler (a measure of Positive Emotion, Engagement, Relationships, Meaning, and Achievement), were examined to establish convergent validity. The Kessler Psychological Distress Scale (K6) was also included to determine the divergent validity of the SURF measure. Three-week test-retest reliability was also assessed for SURF. Additionally, normative data from general population samples were collected for both the Self-Compassion and SURF measures. The purpose of this study is to introduce each of these measures, report the psychometric findings, and explore additional psychometric properties of the SURF measure in particular. This study will highlight how these measures can be used in future research exploring these positive psychology constructs.
Additionally, this study will discuss the utility of these measures in guiding individuals' use of the online, self-directed, self-administered My Best Self 101 positive psychology resources developed by the researchers. The goal of My Best Self 101 is to disseminate research-based measures and tools to individuals who are seeking to increase their well-being.

Keywords: measurement, psychometrics, test validation, well-being

Procedia PDF Downloads 188
798 Immune Modulation and Cytomegalovirus Reactivation in Sepsis-Induced Immunosuppression

Authors: G. Lambe, D. Mansukhani, A. Shetty, S. Khodaiji, C. Rodrigues, F. Kapadia

Abstract:

Introduction: Sepsis is known to cause impairment of both innate and adaptive immunity and involves an early uncontrolled inflammatory response followed by a protracted immunosuppression phase, which includes decreased expression of cell receptors, T cell anergy and exhaustion, and impaired cytokine production, and which may confer a high risk of secondary infections due to a reduced response to antigens. Although human cytomegalovirus (CMV) is widely recognized as a serious viral pathogen in sepsis and immunocompromised patients, the incidence of CMV reactivation in patients with sepsis lacking strong evidence of immunosuppression is not well defined. Therefore, it is important to determine the association between CMV reactivation and sepsis-induced immunosuppression. Aim: To determine the association over time between the incidence of CMV reactivation and immune modulation in sepsis-induced immunosuppression. Material and Methods: Ten CMV-seropositive adult patients with severe sepsis were included in this study. Blood samples were collected on Day 0 and then weekly up to 21 days. CMV load was quantified in plasma by real-time PCR. The expression of immunosuppression markers, namely HLA-DR, PD-1, and regulatory T cells, was determined by flow cytometry using whole blood. Results: At Day 0, no CMV reactivation was observed in 6/10 patients; in these patients, the median time to reactivation was 14 days (range, 7-14 days). The remaining four patients had, at Day 0, a mean viral load of 1,802 ± 2,599 copies/ml, which increased with time. At Day 21, the mean viral load for all 10 patients was 60,949 ± 179,700 copies/ml, indicating that viremia increased with the length of stay in the hospital. HLA-DR expression on monocytes significantly increased from Day 0 to Day 7 (p = 0.001), after which no significant change was observed until Day 21 for all patients except three.
In these three patients, HLA-DR expression on monocytes decreased at elevated viral loads (>5000 copies/ml), indicating immune suppression. However, the other markers, PD-1 and regulatory T cells, did not show any significant changes. Conclusion: These preliminary findings suggest that CMV reactivation can occur in patients with severe sepsis; indeed, the viral load continued to increase with the length of stay in the hospital. Immune suppression, indicated by decreased expression of HLA-DR alone, was observed in three patients with elevated viral loads.

Keywords: CMV reactivation, immune suppression, sepsis immune modulation, CMV viral load

Procedia PDF Downloads 150
797 Ultrasonic Irradiation Synthesis of High-Performance Pd@Copper Nanowires/MultiWalled Carbon Nanotubes-Chitosan Electrocatalyst by Galvanic Replacement toward Ethanol Oxidation in Alkaline Media

Authors: Majid Farsadrouh Rashti, Amir Shafiee Kisomi, Parisa Jahani

Abstract:

Direct ethanol fuel cells (DEFCs) are contemplated as a promising energy source because, in addition to powering portable electronic devices, they can also be used in electric vehicles. The synthesis of bimetallic nanostructures is attracting extensive attention due to their novel optical, catalytic and electronic characteristics, which contrast with those of their monometallic counterparts. Galvanic replacement (sometimes referred to as cementation or immersion plating) is an uncomplicated and effective technique for making nanostructures (such as core-shell structures) of different metals and semiconductors, with application in DEFCs. Unlike electrodeposition, galvanic replacement does not need any external power supply; it also differs from electroless deposition in that no reducing agent is required. In this paper, a fast method is proposed for the synthesis of palladium (Pd) nanowire structures with a large surface area through a galvanic replacement reaction, utilizing copper nanowires (CuNWs) as a template with the assistance of ultrasound at room temperature. To evaluate the morphology and composition of Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan, scanning electron microscopy and energy-dispersive X-ray spectroscopy were applied. The phase structure of the electrocatalysts was determined by room-temperature X-ray powder diffraction (XRD). Various electrochemical techniques, including chronoamperometry and cyclic voltammetry, were utilized to assess the electrocatalytic activity towards ethanol electrooxidation and the durability in alkaline solution.
The Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst demonstrated substantially enhanced performance and long-term stability for ethanol electrooxidation in alkaline solution in comparison to commercial Pd/C, demonstrating its potential as an efficient catalyst for ethanol oxidation. Noticeably, the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst presented excellent catalytic activity, with a peak current density of 320.73 mA/cm², 9.5 times that of Pd/C (34.21 mA/cm²). Additionally, thermodynamic and kinetic evaluations revealed that the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst has a lower activation energy than Pd/C, which leads to a lower energy barrier and an excellent charge transfer rate towards ethanol oxidation.

Keywords: core-shell structure, electrocatalyst, ethanol oxidation, galvanic replacement reaction

Procedia PDF Downloads 147
796 Effect of Fresh Concrete Curing Methods on Its Compressive Strength

Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang

Abstract:

Concrete is one of the most used construction materials; it may be made on site as fresh concrete and then placed in formwork to produce the desired shapes of structures. It has been recognized that the raw materials and mix proportions of concrete dominate the mechanical characteristics of the hardened concrete, and that the curing method and environment applied to the concrete in the early stages of hardening significantly influence concrete properties such as compressive strength, durability and permeability. In construction practice, there are various curing methods to maintain the presence of mixing water throughout the early stages of concrete hardening. These are also beneficial in hot weather conditions, as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet covering. There are also curing methods that decrease the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting or membrane. In the concrete materials laboratory, accelerated strength-gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete might be very different from the laboratory criteria, and some standard curing procedures adopted in the laboratory cannot be applied on site. Sometimes the contractor compromises the curing methods in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might lead to over- or under-estimation of the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, plastic film curing) and of maintaining concrete in steel moulds on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and curing periods of 7, 28 and 60 days were applied. The highest compressive strength was observed for concrete samples subjected to 7-day water immersion curing and for samples maintained in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing. Wrapping concrete with plastic film as a curing method might delay concrete strength development in the early stages, while water immersion curing for 7 days might significantly increase the compressive strength.

Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison

Procedia PDF Downloads 293
795 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model

Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck

Abstract:

Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as in any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N2O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and in developing engineered mitigation strategies for N2O production. To provide answers, this work focuses on manure, the biggest wasted nitrogen mass flow through our economies, as a model. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil production, was used as external carbon source. Out-selection of nitrite-oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O2/L), high temperature (35°C) and relatively high free ammonia (FA) (initially 10 mg NH3-N/L). After reaching steady state, the process removed 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N2O emissions (over 15% of the nitrogen loading) were observed under the baseline operational conditions, and these increased further under nitrite accumulation and a low organic carbon to nitrogen ratio. Yet, higher DO (~2.2 mg O2/L) lowered aerobic N2O emissions and weakened the dependency of N2O on nitrite concentration, suggesting a shift of the N2O production pathway at elevated DO levels. The greenhouse gas emissions from such a system could be substantially reduced by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy.
Promising steps forward are presented in this abstract; at the conference, insights from ongoing experiments will also be shared.
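The abstract reports N2O emissions as a fraction of the nitrogen loading. A minimal sketch of that calculation follows; the loading rate (0.4 g N/L/d) and reactor volume (4.5 L) come from the abstract, while the emitted N2O-N mass below is an illustrative assumption, not a measured value.

```python
# N2O emission factor as a fraction of the nitrogen loading.
# Loading rate and reactor volume are from the abstract; the emitted
# N2O-N mass is an assumed illustrative number.

def n2o_emission_factor(n2o_n_emitted_g_per_day, n_loading_g_per_l_per_day, reactor_volume_l):
    """Return N2O-N emitted as a fraction of the nitrogen loaded per day."""
    n_loaded_g_per_day = n_loading_g_per_l_per_day * reactor_volume_l
    return n2o_n_emitted_g_per_day / n_loaded_g_per_day

# 0.4 g N/L/d over 4.5 L loads 1.8 g N/d; emitting 0.27 g N2O-N/d gives 15%.
ef = n2o_emission_factor(0.27, 0.4, 4.5)
print(f"{ef:.0%}")
```

This is only the bookkeeping behind the "over 15% of the nitrogen loading" figure, not a model of the production pathways themselves.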

Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure

Procedia PDF Downloads 249
794 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
793 Seamounts and Submarine Landslides: Study Case of Island Arcs Area in North of Sulawesi

Authors: Muhammad Arif Rahman, Gamma Abdul Jabbar, Enggar Handra Pangestu, Alfi Syahrin Qadri, Iryan Anugrah Putra, Rizqi Ramadhandi

Abstract:

Indonesia lies above three major tectonic plates: the Indo-Australian, Eurasian, and Pacific plates. Interactions between these plates result in high tectonic and volcanic activity, which translates into a high risk of geological hazards in adjacent areas; one such area lies north of Sulawesi’s islands. This poses a problem for infrastructure, both in mitigating risks to existing installations and in planning future ones. One piece of infrastructure that is essential to telecommunications is the submarine fiber optic cable, which acts as a backbone yet is exposed to geological hazards. Damaged fiber optic cables can cause serious problems: loss of signal, negative social and economic impacts on people, and degraded performance of various government services. Submarine cables face challenges from geological hazards, for instance seamount activity. Previous studies show that, up to 2023, five seamounts had been identified north of Sulawesi. Seamounts themselves can cause damage and trigger many processes that put submarine cables at risk, one example being submarine landslides. The main focuses of this study are to identify possible new seamounts and submarine landslide paths in the area north of Sulawesi’s islands, to help minimize the risks posed by those hazards to both existing and planned submarine cables. Using bathymetry data, this study conducts slope analysis and uses distinctive morphological features to interpret possible seamounts. We then map the valleys between seamounts, determine where sediments might flow in case of a landslide, and finally assess how this would affect submarine cables in the area.
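The slope analysis described above can be sketched minimally: compute slope in degrees from a gridded bathymetry raster. The synthetic Gaussian seamount and the 100 m grid spacing below are illustrative assumptions standing in for real bathymetry data.

```python
import numpy as np

# Minimal sketch of slope analysis on a gridded bathymetry raster.
# The synthetic grid and cell size are assumptions, not survey data.

def slope_degrees(depth_grid, cell_size):
    """Slope magnitude in degrees from a 2-D depth grid with uniform spacing."""
    dz_dy, dz_dx = np.gradient(depth_grid, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Synthetic seamount: a Gaussian bump rising 2000 m from a 3000 m deep seafloor.
x = np.linspace(-5000, 5000, 101)  # metres
xx, yy = np.meshgrid(x, x)
depth = -3000 + 2000 * np.exp(-(xx**2 + yy**2) / (2 * 1500**2))

slopes = slope_degrees(depth, cell_size=100.0)
# Steep flanks (high slope values) are the morphological flag for a candidate seamount.
print(f"max slope: {slopes.max():.1f} degrees")
```

In practice the same routine would run over survey bathymetry, with candidate seamounts screened by slope thresholds plus the distinctive conical morphology the abstract mentions.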

Keywords: bathymetry, geological hazard, mitigation, seamount, submarine cable, submarine landslide, volcanic activity

Procedia PDF Downloads 69
792 An Exploration of Policy-related Documents on District Heating and Cooling in Flanders: A Slow and Bottom-up Process

Authors: Isaura Bonneux

Abstract:

District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While some countries like Sweden and Denmark have a longstanding tradition of DHC, Belgium is lagging behind. The northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, it is increasingly exploring its possibilities to enhance the scope of DHC. DHC is a complex energy system, requiring extensive collaboration between various stakeholders at various levels. Therefore, it is of interest to look closer at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the Flemish region. This kind of analysis has not been undertaken so far. This paper addresses the following research question: “Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?” To answer this question, the Overton policy database was used to search and retrieve relevant policy-related documents. Overton retrieves data from governments, think tanks, NGOs, and IGOs. In total, out of the 244 original results, 117 documents between 2009 and 2023 were analyzed. Every selected document included theme keywords, policymaking department(s), date, and document type. These elements were used for quantitative data description and visualization. Further, qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, it is obvious from the timeframe that DHC is a new topic in Flanders that still receives limited attention; 2014, 2016 and 2017 were the years with the most documents, yet this peak amounts to only 12 documents. In addition, many documents mentioned DHC without much depth, painting it as a future scenario surrounded by uncertainty. Most of the issuing government departments had a link to either energy or climate (e.g. the Flemish Environmental Agency) or policy (e.g. the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an ‘Environment and Sustainability’ context, followed by ‘General Policy and Regulation’. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and this analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heat source for DHC. Harbours and waste incineration plants are identified as potential and promising supply sources. This approach tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role, and the initiative is mostly taken by them. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step. Nevertheless, it can impede smaller councils from creating DHC networks and slow down systematic and fast implementation of DHC throughout Flanders.
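The quantitative description step (documents tallied by year and theme keyword) can be sketched as below. The records are illustrative stand-ins for fields exported from the Overton database, not the study's actual data.

```python
from collections import Counter

# Minimal sketch: tally policy-related documents by year and by theme keyword.
# The records below are assumed examples, not the real Overton export.

documents = [
    {"year": 2014, "themes": ["Environment and Sustainability"]},
    {"year": 2016, "themes": ["Environment and Sustainability", "General Policy and Regulation"]},
    {"year": 2017, "themes": ["General Policy and Regulation"]},
    {"year": 2017, "themes": ["Environment and Sustainability"]},
]

by_year = Counter(doc["year"] for doc in documents)
by_theme = Counter(theme for doc in documents for theme in doc["themes"])

print(by_year.most_common())   # which years produced the most documents
print(by_theme.most_common(1)) # the dominant thematic context
```

With the real 117-document export, the same two counters reproduce the paper's timeframe and theme findings directly.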

Keywords: district heating and cooling, flanders, overton database, policy analysis

Procedia PDF Downloads 44
791 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 126
790 Design, Construction, Validation And Use Of A Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements. Smoke toxicity is not considered, despite it being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to: be easily portable and able to run on battery or mains electricity; be calibratable at the test site; quantify CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; log data independently; automate the switchover of 7 bubblers; withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; and be controllable remotely. To test the analyser’s functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the box analyser measurements and the data logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the box analyser measurements. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. 
The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. It operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser’s ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for quantification of smoke toxicity. It is a cheaper, more accessible option to assess smoke toxicity, mitigating the need for expensive equipment and specialist operators.
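Two of the behaviours described above (automated switchover across 7 bubblers with pre-set individual times, and logging of gas concentrations to a .csv file) can be sketched as below. The per-bubbler durations, field names, and the sensor-reading stub are assumptions for illustration, not the analyser's real firmware.

```python
import csv
import io

# Sketch of automated 7-bubbler switchover with pre-set times and CSV logging.
# Durations, column names, and the sensor stub are assumed, not the real device.

BUBBLER_TIMES_S = [60, 60, 120, 120, 300, 300, 600]  # pre-set per-bubbler durations (assumed)

def read_gas_concentrations(t):
    """Stub standing in for the analyser's gas sensors (CO, CO2, O2, ...)."""
    return {"CO_ppm": 40 + t * 0.1, "CO2_pct": 1.2, "O2_pct": 18.5}

def run_schedule(out_file, sample_period_s=30):
    """Step through each bubbler for its pre-set time, logging one CSV row per sample."""
    writer = csv.writer(out_file)
    writer.writerow(["time_s", "bubbler", "CO_ppm", "CO2_pct", "O2_pct"])
    t = 0
    for bubbler, duration in enumerate(BUBBLER_TIMES_S, start=1):
        for _ in range(duration // sample_period_s):
            g = read_gas_concentrations(t)
            writer.writerow([t, bubbler, g["CO_ppm"], g["CO2_pct"], g["O2_pct"]])
            t += sample_period_s
    return t

buf = io.StringIO()  # in a real run this would be an open .csv file
total = run_schedule(buf)
print(f"logged {total} s across {len(BUBBLER_TIMES_S)} bubblers")
```

The resulting flat CSV matches the abstract's point that the output needs no specialist knowledge to interpret: one timestamped row of concentrations per sample, tagged with the active bubbler.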

Keywords: smoke toxicity, large-scale tests, iso 9705, analyser, novel equipment

Procedia PDF Downloads 77
789 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object Relational Database Management System (ORDBMS) that has been in existence for decades. Despite the superior features that it wraps and packages to manage databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on provisioning a better development environment for PostgreSQL in order to encourage its use and elucidate its importance. PostgreSQL is also known as the world’s most advanced SQL-compliant open source ORDBMS. Yet users have not resolved to PostgreSQL, owing to the complexity of its persistently textual environment for an introductory user. Simply put, there is a dire need for an easy way of making users comprehend the procedures and standards with which databases are created, tables and the relationships among them are defined, and queries and their conditional flow are manipulated in PostgreSQL, to help the community adopt PostgreSQL at an augmented rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community hesitates to migrate to PostgreSQL’s environment will be carried out. These findings will be modulated and tailored based on the scope and the constraints discovered. The research proposes a system that will serve as a design platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on conjuring viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. 
By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the tool is expected to highlight the elementary features offered by PostgreSQL over existing systems, in order to convey its importance and simplicity to a hesitant user.
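The kind of output such a visual editor might generate (tables, a relationship between them, and a conditional query) can be sketched as below. The schema is an illustrative assumption; the standard-library `sqlite3` module stands in for a live PostgreSQL connection purely so the example is runnable (in PostgreSQL the integer keys would typically be `SERIAL` columns).

```python
import sqlite3

# Sketch of SQL a visual editor might emit: two related tables plus a
# conditional query. Schema is assumed; sqlite3 stands in for PostgreSQL.

DDL = """
CREATE TABLE author (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE paper (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    downloads INTEGER DEFAULT 0,
    author_id INTEGER REFERENCES author(id)  -- the relationship between the tables
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO author VALUES (1, 'Jeyaraj')")
conn.execute("INSERT INTO paper VALUES (1, 'Cognition Based Learning', 283, 1)")

# A query whose flow depends on a condition, as the abstract describes.
row = conn.execute(
    "SELECT p.title FROM paper p JOIN author a ON p.author_id = a.id "
    "WHERE p.downloads > 100"
).fetchone()
print(row[0])
```

The point of the proposed tool is that a beginner would build the `CREATE TABLE` statements, the foreign-key relationship, and the `WHERE` condition by dragging visual elements, with this textual SQL generated underneath.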

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 283