Search results for: testing simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7783

193 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with fewer complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the two decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension for the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments of 8 Nm in flexion and 6 Nm in extension. Spinal segment kinematic data were captured using a motion-tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed and used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully constrained, and a 10 Nm pure moment was applied to the top surface of the L1 vertebral body to drive the lumbar spine in flexion and extension. The experimental results showed that in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. No statistically significant differences were observed among the three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively, and the ROMs of L4–L5 were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all comparisons except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings: no significant differences were found at L4–L5 in either flexion or extension among the groups, and only 0.02 and 0.04 degrees of variation were observed in flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the finite element and experimental results reveal no significant differences between unilateral and bilateral laminotomies during flexion and extension in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulty and prevent perioperative complications; this study supports that benefit through biomechanical analysis. The results may help surgeons in making the final decision.
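
As an illustration of the group comparison described above, the minimal sketch below runs a one-way ANOVA across the intact, unilateral, and bilateral conditions. The per-specimen ROM values are hypothetical placeholders, not the study's raw measurements.

```python
# Hypothetical per-specimen L3-L4 flexion ROMs (degrees); the study tested
# three porcine specimens per condition and reported mean +/- SD.
from scipy import stats

rom_intact = [1.12, 1.35, 1.58]
rom_unilateral = [0.70, 1.34, 1.98]
rom_bilateral = [1.59, 1.66, 1.73]

# One-way ANOVA across the three decompression conditions
f_stat, p_value = stats.f_oneway(rom_intact, rom_unilateral, rom_bilateral)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```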

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 399
192 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced from different water uses into stormwater sewers and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study investigates this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here covers only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational areas. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to compute pipe flows consistently higher than the observed values, which is unsuitable for studying water quality. The first goal was therefore to complete and update the modelling of all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness. To calibrate and validate the model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. Model results were found to be sensitive to the ratio of impervious areas. The calculated catchment length was also tested, because it is an approximate representation of the catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815 with observations, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
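
A minimal sketch of the goodness-of-fit measures reported above (correlation coefficient, peak error, and volume error), computed here on short hypothetical observed and simulated flow series rather than the Edmonton monitoring data.

```python
import numpy as np

# Hypothetical observed vs. simulated pipe flows (m^3/s) at one monitoring station
observed = np.array([0.10, 0.35, 0.80, 1.20, 0.95, 0.50, 0.20])
simulated = np.array([0.12, 0.30, 0.85, 1.21, 0.90, 0.55, 0.22])

correlation = np.corrcoef(observed, simulated)[0, 1]
peak_error = (simulated.max() - observed.max()) / observed.max() * 100    # % error on peak flow
volume_error = (simulated.sum() - observed.sum()) / observed.sum() * 100  # % error on runoff volume

print(f"r = {correlation:.3f}, peak error = {peak_error:.2f}%, volume error = {volume_error:.2f}%")
```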

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 297
191 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, and friction coefficient. The feedstock material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, finally, to identify a relevant process window. The deposition of material in the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system composed of the tool, the filler material, and the substrate through pure friction. Analytic modeling of heat generation based on friction considers the rotational speed and the contact pressure as the main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for more efficient process implementation.
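
As an illustration of the friction-based heat generation described above, the short sketch below integrates the local heat flux q(r) = mu*p*omega*r over a flat circular tool contact, giving Q = (2/3)*pi*mu*p*omega*R^3. All parameter values are illustrative assumptions, not the study's calibrated inputs.

```python
import numpy as np

mu = 0.3                         # friction coefficient (assumed; temperature-dependent in practice)
p = 50e6                         # contact pressure in Pa (assumed from axial force / contact area)
omega = 2 * np.pi * 400 / 60     # rotational speed in rad/s (400 rpm assumed)
R = 0.010                        # tool contact radius in m (assumed)

# Integrate q(r) = mu * p * omega * r over the circular contact face
Q_total = (2.0 / 3.0) * np.pi * mu * p * omega * R**3
print(f"Frictional heat input during dwell ~ {Q_total / 1e3:.2f} kW")
```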

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 147
190 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser waist, which might contain other substances. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination excites surface plasmons. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. A theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning-rod effect. Hence, the grating produces amplitude and phase modulation of the incident field in various ways depending on its geometry and material. The phase-modulating grating on the probe is a kind of metasurface that manipulates the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
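
A minimal sketch of the angular spectrum idea described above: the grating-modulated field is Fourier-decomposed into spatial frequencies, and the component closest to the surface-plasmon wavevector is the one that can satisfy phase matching. The wavelength, modulation depth, and k_spp value are illustrative assumptions, and the grating period is chosen here so that its first harmonic phase-matches the assumed plasmon.

```python
import numpy as np

wavelength = 633e-9                    # incident wavelength, m (assumed)
k0 = 2 * np.pi / wavelength
k_spp = 1.05 * k0                      # surface-plasmon wavevector (assumed)
period = 2 * np.pi / k_spp             # grating period chosen so the first harmonic phase-matches

# Field along the probe surface, amplitude-modulated by the grating slits
x = np.linspace(0.0, 50 * period, 4096)
field = 1.0 + 0.5 * np.cos(2 * np.pi * x / period)

# Angular spectrum: spatial-frequency content of the modulated field
kx = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
spectrum = np.abs(np.fft.fft(field))

idx = np.argmin(np.abs(kx - k_spp))    # component nearest the phase-matching condition
print(f"relative spectral weight at k_spp: {spectrum[idx] / spectrum.max():.3f}")
```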

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283
189 An Innovation Decision Process View in an Adoption of Total Laboratory Automation

Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu

Abstract:

With rapid advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires substantial funding. This study explores an early adoption experience by Taiwan's large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers' Innovation-Decision Process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to redesign the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital's new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues agreed on the change. Thus, the top managers arranged a series of benchmark visits to make colleagues aware of and receptive to TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits, and so on. 3. Decision stage: The TLA proposal was well supported by the top management of CMG, who finally decided to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposals and several vendor briefings, CMG confirmed its laboratory automation architecture and completed the contracts. At the same time, a cross-department project team was formed, and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of operations and respond immediately with adjustments. The main project tasks included: 1) completing the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with hospital information systems (HIS)/laboratory information systems (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically prepared tube mechanism. The study summarizes the following continuous improvement orientations: (1) flexible reference-range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; (4) further evaluation of whether to install tube (specimen) delivery tracks.

Keywords: innovation decision process, total laboratory automation, health care

Procedia PDF Downloads 419
188 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure-loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties to enable realistic kick treatment, together with a corresponding numerical solution method built on an enriched data bank. The research work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was placed on the section near the drill bit. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. In addition, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has only a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas exists in the annulus in the form of free gas.

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 118
187 Case Report: Peripartum Cardiomyopathy, a Rare but Fatal Condition in Pregnancy and Puerperium

Authors: Sadaf Abbas, HimGauri Sabnis

Abstract:

Introduction: Peripartum cardiomyopathy is a rare but potentially life-threatening condition that presents as heart failure during the last month of pregnancy or within five months postpartum. The incidence of peripartum cardiomyopathy ranges from 1 in 1300 to 1 in 15,000 pregnancies. Risk factors include multiparity, advanced maternal age, multiple pregnancies, pre-eclampsia, and chronic hypertension. Study: A 30-year-old Para 3+0 presented to the Emergency Department of St Mary's Hospital, Isle of Wight, on the seventh day postpartum with acute shortness of breath (SOB), chest pain, cough, and a temperature of 38 degrees. Her risk factors were smoking and class II obesity (BMI of 40.62). The patient had mild pre-eclampsia in the last pregnancy and was on labetalol and aspirin during the antenatal period, which were stopped postnatally. There was also a history of pre-eclampsia and of haemolysis, elevated liver enzymes, low platelets (HELLP syndrome) in previous pregnancies, which led to preterm delivery at 35 weeks in the second pregnancy; the first baby was stillborn at 24 weeks. On assessment, there was a National Early Warning Score (NEWS) of 3, persistent tachycardia, and mild crepitations in the lungs. Initial investigations revealed an enlarged heart on chest X-ray, and a CT pulmonary angiogram indicated bilateral basal pulmonary congestion without pulmonary embolism, suggesting fluid overload. Laboratory results showed elevated CRP and initially normal troponin levels, which later increased, indicating myocardial involvement. Echocardiography revealed a severely dilated left ventricle with an ejection fraction (EF) of 31%, consistent with severely impaired systolic function. The cardiology team reviewed the patient, who was admitted to the Coronary Care Unit. As the signs and symptoms were suggestive of fluid overload and congestive cardiac failure, management consisted of diuretics, beta-blockers, angiotensin-converting enzyme (ACE) inhibitors, proton pump inhibitors, and supportive care. During admission, there were complications such as acute kidney injury, from which the patient recovered well, and the chest pain resolved following treatment. After eight days of admission, the symptoms improved, and the patient was discharged home with a further plan for cardiac MRI and genetic testing due to a family history of sudden cardiac death. Regular appointments have been made with the cardiology team to follow up on the symptoms. Since discharge, the patient has made a good recovery. A cardiac MRI was performed, which showed severely impaired left ventricular function with an ejection fraction (EF) of 38%, mild left ventricular dilatation, and no evidence of previous infarction; the overall appearance is of a non-ischaemic dilated cardiomyopathy. The main challenge at the time of admission was the non-availability of a cardiac radiology team, so the definitive diagnosis was delayed. The long-term implications include the risk of recurrence, chronic heart failure, and consequently an effect on quality of life; therefore, regular follow-up is critical in the patient's management. Conclusions: Peripartum cardiomyopathy is a cardiovascular disease whose causes are still unknown and, in some cases, uncontrolled. Raising awareness of the symptoms and management of this complication will reduce morbidity and mortality rates as well as the length of hospital stay.

Keywords: cardiomyopathy, cardiomegaly, pregnancy, puerperium

Procedia PDF Downloads 29
186 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett Shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett Shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production from 2008 to at most 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR from each basin was loaded into the Palisade Risk software, and a lognormal distribution typical of Barnett Shale wells was fitted to the dataset. Monte Carlo simulation was then carried out over 1000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year) as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. One of the major findings of this study was that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett Shale wells were not economic at all finding and development costs irrespective of the gas price in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
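
The sketch below mirrors the probabilistic workflow described above: sample EUR from a fitted lognormal distribution, convert a production profile into discounted cash flows, and count the wells that clear an NPV hurdle. The lognormal parameters, decline profile, and unit conversions are illustrative assumptions rather than the Barnett dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_iter = 1000
eur_bcf = rng.lognormal(mean=np.log(1.5), sigma=0.6, size=n_iter)  # EUR in Bcf (assumed fit)

gas_price = 4.0        # $/MCF
fd_cost = 2.0e6        # finding & development cost, $ (one of the tested levels)
discount = 0.10        # annual discount rate
years = np.arange(1, 21)
profile = np.exp(-0.35 * years)
profile /= profile.sum()               # fraction of EUR produced in each year (assumed decline)

def npv(eur):
    yearly_mcf = eur * 1e6 * profile   # 1 Bcf = 1e6 MCF
    cash_flow = yearly_mcf * gas_price
    return -fd_cost + np.sum(cash_flow / (1 + discount) ** years)

npvs = np.array([npv(e) for e in eur_bcf])
p10, p50, p90 = np.percentile(npvs, [90, 50, 10]) / 1e6
print(f"NPV P10/P50/P90 ($MM): {p10:.2f} / {p50:.2f} / {p90:.2f}")
print(f"share of economic wells (NPV > 0): {(npvs > 0).mean():.1%}")
```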

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable

Procedia PDF Downloads 302
185 The Impact of Efflux Pump Inhibitor on the Activity of Benzosiloxaboroles and Benzoxadiboroles against Gram-Negative Rods

Authors: Agnieszka E. Laudy, Karolina Stępien, Sergiusz Lulinski, Krzysztof Durka, Stefan Tyski

Abstract:

1,3-Dihydro-1-hydroxy-2,1-benzoxaborole and its derivatives are a particularly interesting group of synthetic agents and have been successfully employed in supramolecular chemistry and medicine. The first important compounds, 5-fluoro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole and 5-chloro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole, were identified as potent antifungal agents. In addition, (S)-3-(aminomethyl)-7-(3-hydroxypropoxy)-1-hydroxy-1,3-dihydro-2,1-benzoxaborole hydrochloride is in the second phase of clinical trials as a drug for the treatment of Gram-negative bacterial infections caused by the Enterobacteriaceae family and Pseudomonas aeruginosa. An equally important and difficult task is the search for compounds active against Gram-negative bacilli, which possess multi-drug-resistance efflux pumps that actively remove many antibiotics from bacterial cells. We examined whether halogen-substituted benzoxaborole-based derivatives and their analogues possess antibacterial activity and are substrates of multi-drug-resistance efflux pumps. The antibacterial activity of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole and 10 of its halogen-substituted derivatives, as well as 1,2-phenylenediboronic acid and three of its synthesized fluoro-substituted analogues, was evaluated. The activity against reference strains of Gram-positive (n=5) and Gram-negative bacteria (n=10) was screened by the disc-diffusion test (0.4 mg of each tested compound was applied onto a paper disc). The minimal inhibitory concentration values and the minimal bactericidal concentration values were estimated according to the Clinical and Laboratory Standards Institute and the European Committee on Antimicrobial Susceptibility Testing recommendations. During the determination of the minimal inhibitory concentration values with or without the efflux pump inhibitor phenylalanine-arginine beta-naphthylamide (50 mg/L), the concentrations of the tested compounds ranged from 0.39 to 400 mg/L in broth medium supplemented with 1 mM magnesium sulfate. Generally, the studied benzosiloxaboroles and benzoxadiboroles showed higher activity against Gram-positive cocci than against Gram-negative rods. Moreover, the benzosiloxaboroles showed higher activity than the benzoxadiborole compounds. In this study, we demonstrated that substitution (mono-, di- or tetra-) of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole with halogen groups resulted in an increase in antimicrobial activity compared to the parent substance. Interestingly, the 6,7-dichloro-substituted derivative was found to be the most potent against Gram-positive cocci: Staphylococcus sp. (minimal inhibitory concentration 6.25 mg/L) and Enterococcus sp. (minimal inhibitory concentration 25 mg/L). On the other hand, mono- and dichloro-substituted compounds were the most actively removed by efflux pumps present in Gram-negative bacteria, mainly those of the Enterobacteriaceae family. In the presence of the efflux pump inhibitor, the minimal inhibitory concentration values of chloro-substituted benzosiloxaboroles decreased from 400 mg/L to 3.12 mg/L. Of note, the highest increase in bacterial susceptibility to the tested compounds in the presence of phenylalanine-arginine beta-naphthylamide was observed for the 6-chloro-, 6,7-dichloro- and 6,7-difluoro-substituted benzosiloxaboroles. In the case of Escherichia coli, Enterobacter cloacae and P. aeruginosa strains, at least a 32-fold decrease in the minimal inhibitory concentration values of these agents was observed. These data demonstrate structure-activity relationships of the tested derivatives and highlight the need for a further search for benzoxaboroles and related compounds with significant antimicrobial properties. Moreover, the influence of phenylalanine-arginine beta-naphthylamide on the susceptibility of Gram-negative rods to the studied benzosiloxaboroles indicates that some of the tested agents are substrates of efflux pumps in Gram-negative rods.

Keywords: antibacterial activity, benzosiloxaboroles, efflux pumps, phenylalanine-arginine beta-naphthylamide

Procedia PDF Downloads 271
184 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments that, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed in a microgravity environment, and owing to the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation. Employing a response surface methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be used for several missions, allowing repeatability of microgravity experiments.
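
The two steps described above can be sketched as follows: fit a 2nd-degree polynomial response surface for Cd as a function of AoA from a few design points, then compute the terminal speed from v_t = sqrt(2*m*g / (rho*Cd*A)). The design-point Cd values, mass, and reference area are illustrative assumptions (only the maximum Cd of 1.18 comes from the abstract).

```python
import numpy as np

# Design points: AoA (deg) and Cd from CFD (values assumed, except Cd_max = 1.18)
aoa_deg = np.array([2, 20, 40, 60, 80])
cd = np.array([0.10, 0.35, 0.70, 1.00, 1.18])
coeffs = np.polyfit(aoa_deg, cd, deg=2)      # 2nd-degree polynomial response surface

mass, g, rho, area = 12.0, 9.81, 1.225, 1.5  # kg, m/s^2, kg/m^3, m^2 (assumed)
for a in (2, 40, 80):
    cd_a = np.polyval(coeffs, a)
    v_terminal = np.sqrt(2 * mass * g / (rho * cd_a * area))
    print(f"AoA = {a:2d} deg -> Cd = {cd_a:.2f}, terminal speed = {v_terminal:.1f} m/s")
```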

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
183 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs at tidal frequencies are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides at the semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. In August, however, the estimate is found to be maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared with the other two seasons. The model simulations suggest that internal tides are generated all along the shelf-slope regions and propagate away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is maximum at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern Bay of Bengal and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
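
The EOF decomposition mentioned above can be sketched with a singular value decomposition of a depth-time baroclinic velocity matrix; the leading left singular vectors are the vertical modes and the squared singular values give the variance explained (70-80% in Mode 1 in the study). The matrix below is synthetic random data, used only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_depth, n_time = 40, 500
u = rng.standard_normal((n_depth, n_time))        # synthetic baroclinic velocity (depth x time)
u_anom = u - u.mean(axis=1, keepdims=True)        # remove the time mean at each depth

# EOFs (vertical modes) are the left singular vectors of the anomaly matrix
eofs, s, pcs = np.linalg.svd(u_anom, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by the first three modes:", explained[:3].round(3))
```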

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 170
182 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 102
181 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, equals 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count and gave everything its function, attributes, class, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual, or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform 50 quadrillion (5×10¹⁶) floating-point operations per second, a number a human cannot even fathom. To put it more in perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD said in The Quran that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings up the idea of how overwhelming the Day of Judgment will be, when one realizes that it is going to be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 107
180 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, therefore plays an important role in the reliability of simulations, but it is also the most accessible target for post-occupancy energy management and optimization. The present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (the Pleiades software), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the building features that most drive energy use for each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
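
A minimal sketch of the step-by-step calibration check implied above: after each energy-driving input is replaced with field data, simulated and measured consumption are compared with an error metric. NMBE and CV(RMSE) are used here as common calibration metrics (they are not named in the abstract), and the monthly values are illustrative.

```python
import numpy as np

# Hypothetical monthly heating energy use (kWh) for one building
measured  = np.array([820, 760, 640, 480, 350, 290, 270, 280, 360, 520, 690, 800])
simulated = np.array([900, 810, 660, 470, 330, 300, 260, 290, 380, 560, 720, 850])

residuals = measured - simulated
nmbe = residuals.sum() / (measured.mean() * measured.size) * 100   # normalized mean bias error, %
cv_rmse = np.sqrt(np.mean(residuals**2)) / measured.mean() * 100   # coefficient of variation of RMSE, %
print(f"NMBE = {nmbe:.1f}%, CV(RMSE) = {cv_rmse:.1f}%")
```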

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 159
179 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits and to add up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover Algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
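
The classical predicate that the quantum oracle marks can be sketched in plain Python (not Q#): decode each node's chosen outgoing edge, accumulate the edge weights, check that the walk is a valid Hamiltonian cycle, and compare its cost with the current threshold. The small degree-2 graph and encoding below are illustrative assumptions.

```python
def oracle_predicate(edge_choices, adjacency, cost_threshold):
    """edge_choices[i] is the index (0..K-1) of the edge taken out of node i."""
    n = len(adjacency)
    visited, cost, node = set(), 0, 0
    for _ in range(n):
        next_node, weight = adjacency[node][edge_choices[node]]
        cost += weight
        node = next_node
        if node in visited:            # uniqueness check fails -> not a Hamiltonian cycle
            return False
        visited.add(node)
    # valid cycle back to the start, all nodes visited, and cheaper than the threshold
    return node == 0 and len(visited) == n and cost < cost_threshold

# 4-node graph with degree bound K = 2: adjacency[i] = list of (neighbor, weight)
adjacency = [[(1, 2), (3, 9)], [(2, 4), (0, 2)], [(3, 3), (1, 4)], [(0, 9), (2, 3)]]
print(oracle_predicate([0, 0, 0, 0], adjacency, cost_threshold=20))  # True: cycle 0-1-2-3-0 costs 18
```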

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 190
178 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real-world example of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until a local equilibrium state is reached. In this process, heat passing through the medium does not only change the sensible heat of the ice and brine pockets; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. The boundary conditions are chosen from one of the applicable cases for this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are computed for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variation of temperature, brine volume fraction, and brine salinity with time are the most important outputs of this study. The results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals improves this unstable behavior. The analytical model combined with the numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and the numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
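
A minimal Method of Lines sketch for the transient conduction setup described above (one face insulated, a constant cold temperature suddenly imposed on the other): spatial discretization turns the heat equation into a set of ODEs that a standard integrator can solve. Constant properties are assumed, and the latent-heat and brine-salinity coupling of the actual model is omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.05, 50                  # slab thickness (m) and number of nodes (assumed)
dx = L / (n - 1)
alpha = 1.1e-6                   # effective thermal diffusivity, m^2/s (assumed)
T_init, T_cold = -2.0, -20.0     # initial and suddenly imposed boundary temperatures, deg C

def rhs(t, T):
    dT = np.zeros_like(T)
    dT[0] = alpha * 2.0 * (T[1] - T[0]) / dx**2                  # insulated (zero-flux) face
    dT[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2  # interior nodes
    return dT                                                    # dT[-1] = 0: fixed cold boundary

T0 = np.full(n, T_init)
T0[-1] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T0, t_eval=[0, 600, 1800, 3600])
print(sol.y[0])                  # temperature history at the insulated face
```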

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 217
177 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and some slight deviations remain in terms of scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculation is conducted to assess how well different parameterization methods apply to different airfoil scales. The research objects are three airfoil scales (the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing), represented with three parameterization methods to compare the calculation differences between different airfoil sizes. The constants used in this study are the NACA 0012 airfoil, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are the three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to improve the accuracy of the predicted aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, the quantitative balance cannot be defined directly, and the decisions must be made iteratively by adding and subtracting. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even using a lower-performance computer.
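
A short sketch of the class/shape function transformation (CST) idea referred to above: a class function C(x) = x^0.5 * (1 - x) multiplied by a Bernstein-polynomial shape function gives the surface ordinate from a handful of control weights. The weights below are illustrative, not a fitted NACA 0012 parameterization.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0):
    """Class/shape function transformation: y(x) = C(x) * S(x)."""
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i) for i, w in enumerate(weights))
    return class_fn * shape_fn

x = np.linspace(0.0, 1.0, 201)                      # chordwise coordinate (unit chord)
upper = cst_surface(x, [0.17, 0.15, 0.16, 0.17])    # illustrative upper-surface weights
lower = -upper                                      # symmetric section, NACA 0012-like
print(f"approximate max thickness/chord: {(upper - lower).max():.3f}")
```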

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
176 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherent non-normal behavior is considered. This distribution has fatter tails than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant hypothesis tests are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
175 DIF-JACKET: a Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, and some of them eventually lose their lives. Although research and development on thermal protective clothing has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components arranged in different layers. It has recently been shown that Phase Change Materials (PCMs) can contribute to reducing potential heat hazards in fire extinguishing operations, so their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while still meeting the international standard for protective clothing for firefighters – laboratory test methods and performance requirements for wildland firefighting clothing. Incorporating PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time the firefighter can be exposed to it. According to the project's studies and developments, to make better use of the PCM storage capacity and to exploit its high thermal inertia more efficiently, the PCM layer should be placed closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the overall solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
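
A first-order sense of how a PCM layer extends exposure time can be obtained from a lumped energy balance: the layer buffers an incoming heat flux until its sensible plus latent storage is exhausted. The Python sketch below uses this balance with illustrative property values that are not DIF-JACKET materials or test conditions.

    # Lumped, first-order estimate of how long a PCM layer can buffer a constant
    # incoming heat flux before it is fully melted (sensible + latent storage).
    # All property values below are illustrative assumptions, not DIF-JACKET data.

    def pcm_buffer_time(q_flux_w_m2, area_m2, mass_kg,
                        cp_j_kgk, t_melt_c, t_start_c, latent_j_kg):
        absorbed_per_kg = cp_j_kgk * (t_melt_c - t_start_c) + latent_j_kg
        return mass_kg * absorbed_per_kg / (q_flux_w_m2 * area_m2)   # seconds

    t = pcm_buffer_time(q_flux_w_m2=5000.0,   # moderate radiant exposure
                        area_m2=0.5,          # torso vest area
                        mass_kg=1.2,          # PCM mass in the vest
                        cp_j_kgk=2000.0, t_melt_c=40.0, t_start_c=25.0,
                        latent_j_kg=180e3)
    print(f"PCM layer absorbs the flux for roughly {t/60:.1f} minutes")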

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 163
174 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (ED) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes that emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals and, in some cases, there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority of these studies focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff in relation to disaster response. The review shows that although different types of ICT are used, various issues remain that affect coordinated communication among the relevant professionals.

Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems

Procedia PDF Downloads 86
173 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa

Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu

Abstract:

After the Fukushima-Daiichi accident, the development of accident-tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy, which allows the fission chain reaction to proceed unimpeded, making it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports fail to provide an in-depth explanation of the causes of ZrO₂ coating failure. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work was deaerated, with a dissolved oxygen concentration of < 10 ppb, and the Li concentration (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed in Materials Studio; the adsorption energies and dynamics parameters were computed with the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely, 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection at low LiOH concentrations, with some localized corrosion occurring. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters, owing to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work provides references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promotes the in-pile application of ZrO₂ coatings.
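
For readers unfamiliar with the energy bookkeeping behind such simulations, the adsorption energy reported by a Forcite-style energy task is typically the energy of the combined system minus the energies of the isolated surface and adsorbate. The short Python sketch below shows only that arithmetic with placeholder values; it does not reproduce the study's Materials Studio calculations.

    # Generic adsorption-energy bookkeeping:
    #   E_ads = E(coating + adsorbate) - E(coating) - E(adsorbate)
    # The numbers below are placeholders, not the study's computed energies.

    def adsorption_energy(e_complex, e_surface, e_adsorbate):
        """More negative values indicate stronger (more favourable) adsorption."""
        return e_complex - e_surface - e_adsorbate

    e_ads_li = adsorption_energy(e_complex=-1250.4, e_surface=-1180.2, e_adsorbate=-65.1)
    print(f"Li+ on a ZrO2 slab (illustrative): {e_ads_li:.1f} kcal/mol")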

Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration

Procedia PDF Downloads 85
172 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses, and industries transform the way they do things. AI is used to automate tasks, improve decision-making, and increase efficiency; it is embedded in sensors and used in automation to save time and eliminate human errors in repetitive tasks. Today, AI draws on vast amounts of collected data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions, and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as it is difficult to determine the risk brought by the unprecedented pace of adoption of AI solutions in government. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights, and its impact on job security. Within NSW's public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills, but this increased accessibility also comes with higher risk and exposure to the health and safety of citizens. Public agencies, on the other hand, struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps – "There is an AI for That" in Government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, as well as regulatory and legislative frameworks, will need to be evaluated against the unique challenges of AI, given its rapidly evolving nature, the ethical considerations involved, and the heightened regulatory scrutiny affecting consumer safety and increasing risks for Government. Creating an effective, efficient NSW Government governance regime, adapted to the range of different approaches to the application of AI, is not merely a matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage, and if such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights.
The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriately balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 70
171 Investigation on Pull-Out-Behavior and Interface Critical Parameters of Polymeric Fibers Embedded in Concrete and Their Correlation with Particular Fiber Characteristics

Authors: Michael Sigruener, Dirk Muscat, Nicole Struebbe

Abstract:

Fiber reinforcement is a state-of-the-art approach to enhancing the mechanical properties of plastics. In concrete and civil engineering, steel reinforcement is commonly used. Steel reinforcement has disadvantages in chemical resistance and weight, whereas the major problems of polymer fibers lie in fiber-matrix adhesion and mechanical properties. Nevertheless, longevity, easy handling, and chemical resistance motivate researchers to develop polymeric materials for fiber-reinforced concrete. Adhesion and interfacial mechanisms in fiber-polymer composites have already been studied thoroughly; for polymer fibers used as concrete reinforcement, the bonding behavior still requires deeper investigation. Therefore, several different polymers (e.g., polypropylene (PP), polyamide 6 (PA6), and polyetheretherketone (PEEK)) were spun into fibers via single-screw extrusion and monoaxial stretching. The fibers were then embedded in a concrete matrix, and Single-Fiber Pull-Out Tests (SFPT) were conducted to investigate the bonding characteristics and microstructural interface of the composite. Differences in maximum pull-out force, displacement, and the slope of the linear part of the force-displacement curve, which reflect the adhesion strength and the ductility of the interfacial bond, were studied. In the SFPT, fiber debonding is an inhomogeneous process in which interfacial bonding and friction mechanisms combine into a resulting value; correlations between polymer properties and pull-out mechanisms therefore have to be emphasized. To investigate these correlations, all fibers were subjected to a series of analyses, including differential scanning calorimetry (DSC), contact angle measurement, surface roughness and hardness analysis, tensile testing, and scanning electron microscopy (SEM). For each polymer, smooth and abraded fibers were tested, first to simulate the abrasion and damage caused by a concrete mixing process and secondly to estimate the influence of mechanical anchoring on rough surfaces. In general, abraded fibers showed a significant increase in maximum pull-out force due to better mechanical anchoring; friction processes therefore play a major role in increasing the maximum pull-out force. Polymer hardness affects the tribological behavior, and polymers with high hardness exhibit lower surface roughness, as verified by SEM and surface roughness measurements. This results in a decreased maximum pull-out force for hard polymers. Polymers with high surface energy generally show better interfacial bonding strength, which coincides with the SFPT investigation conducted here. Polymers such as PEEK and PA6 show higher bonding strength for both smooth and roughened fibers, revealed through high pull-out forces and through concrete particles bonded to the fiber surface, as pictured via SEM analysis. The surface energy divides into a dispersive and a polar part, and the slope of the force-displacement curve correlates with the polar part: only polar polymers increase their SFPT slope, owing to better wetting ability, when a rough surface offers a larger bonding area. Hence, the maximum force and the bonding strength of an embedded fiber are a function of polarity, hardness, and consequently surface roughness. Other properties, such as crystallinity or tensile strength, do not affect the bonding behavior. Through the conducted analyses, it is now feasible to understand and resolve the different effects in pull-out behavior step by step based on the polymer properties themselves.
This investigation developed a roadmap for engineering highly adhering polymeric materials for the fiber reinforcement of concrete.
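
A minimal post-processing sketch of an SFPT record is given below in Python: it extracts the maximum pull-out force and the slope of the initial linear branch of the force-displacement curve, the two quantities used above as proxies for bond strength and interfacial stiffness. The curve is synthetic; in practice the arrays would be loaded from the test machine.

    import numpy as np

    # Synthetic pull-out curve; real tests would load machine data instead.
    disp = np.linspace(0.0, 2.0, 400)                       # mm
    force = 80.0 * disp * np.exp(-1.5 * disp)               # N, pull-out-like shape

    f_max = force.max()
    d_at_fmax = disp[force.argmax()]

    # Fit the initial linear branch (here: up to 20 % of the peak displacement).
    mask = disp <= 0.2 * d_at_fmax
    slope, intercept = np.polyfit(disp[mask], force[mask], 1)

    print(f"max pull-out force : {f_max:.1f} N at {d_at_fmax:.2f} mm")
    print(f"initial slope      : {slope:.1f} N/mm")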

Keywords: fiber-matrix interface, polymeric fibers, fiber reinforced concrete, single fiber pull-out test

Procedia PDF Downloads 113
170 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal

Authors: Linta Rose, Prasad K. Bhaskaran

Abstract:

Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and for a better understanding of near-shore ocean dynamics. Water-level variation along a coast arises from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones, and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements, and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from the ERA-Interim reanalysis were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water level. The average contribution of the meteorological fields to the water level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1% at the deep-water and coastal locations, respectively. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on the water level. Under uniform wind conditions, the results showed a higher contribution of the meteorological fields for south-west winds than for north-east winds when the wind speed was higher. A comparison of the spectral characteristics of the output water level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tidal simulation, all of which indicates that meteorological fields do not have much effect on the water level at periods of less than a day and that they induce non-linear interactions between existing modes of oscillation. The study highlights the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation, and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.
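
The percentage contributions quoted above can be thought of as the extra water-level signal obtained when meteorological forcing is added to the tide-only run, expressed relative to the combined signal. The Python sketch below illustrates one such metric on synthetic stand-in series; it is not the study's exact definition or its ADCIRC output.

    import numpy as np

    # Compare a tide-only water-level series with a tide + wind/pressure series
    # and express the extra signal as a percentage of the combined signal.
    # Both series below are synthetic stand-ins for the two model outputs.

    t = np.arange(0, 30 * 24)                                        # hourly, 30 days
    eta_tide = 1.2 * np.sin(2 * np.pi * t / 12.42)                   # M2-like tide (m)
    eta_met_extra = 0.15 + 0.10 * np.sin(2 * np.pi * t / (24 * 30))  # surge-like signal
    eta_full = eta_tide + eta_met_extra

    contribution = 100.0 * np.mean(np.abs(eta_full - eta_tide)) / np.mean(np.abs(eta_full))
    print(f"meteorological contribution to water level: {contribution:.1f} %")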

Keywords: ADCIRC, head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind

Procedia PDF Downloads 220
169 The Aspect of the Digital Formation in the Solar Community as One Prototype to Find the Algorithmic Sustainable Conditions in the Global Environment

Authors: Kunihisa Kakumoto

Abstract:

Purpose: Environmental problems are now raised at a global scale. Sprawl beyond natural limits should be forecast beforehand in an algorithmic way so that the conditions of our social life can be kept within the limits set by nature. Sustainable conditions for the globe must therefore be found that keep the balance between the capacity of nature and the demands of our social lives. The amount of water on the earth is limited, so sustainable conditions depend strongly on the capacity of water. The amount of available water can be considered in relation to the area of green planting, because a certain volume of water is obtained in forests where green planting is preserved; sustainable conditions for water can thus be found in relation to the green planting area. The reduction of CO₂ by green planting is also possible. Possible Measures and Methods: The concept of the solar community as one prototype has been introduced in technical papers at many previous international conferences. An algorithmic trial calculation can be carried out on the basic concept of the solar community, which is based on data collected from the solar model house. From the algorithmic results for the prototype, simulation work at the global scale can be performed as algorithmic conversion results. This algorithmic study can be simulated with respect to the amount of water, also in relation to the green planting area. Additionally, the CO₂ emissions of the solar community and the reduction of CO₂ by green planting can be calculated. On the basis of these calculations for the solar community, sustainable conditions for the globe can be simulated as conversion results in an algorithmic way. The digital formation of the solar community can also be taken into consideration on this occasion. Conclusion: To find sustainable conditions for the globe, the solar community has been taken into consideration as one prototype. The role of water is very important because the capacity of the water supply is very limited, but at present the cycle of the social community is not designed from the point of view of natural mechanisms. The simulative calculation in this study is bounded by the limitation of the total water supply: from this limit, the total capacity of the water supply and the supportable number of residents and areas can be derived by algorithmic calculation. To keep enough water, green planting areas are very important, and the planting area is also very important to keep the CO₂ balance. The simulative calculation can be performed from the relation between the CO₂ emissions and the CO₂ reduction in the solar community. To find this total balance and the sustainable conditions, the green planting area and the total amount of water can be determined by algorithmic simulative calculation. The study of sustainable conditions can thus be performed by simulative calculations on the algorithmic model of the solar community as one prototype. The example of this prototype can be kept in balance: the activity of social life must stay within the capacity of the natural mechanism, and the capacity of the natural environment in our world is very limited.
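
A toy version of the algorithmic balance described above is sketched below in Python: an assumed water yield and CO₂ uptake per hectare of green planting are compared with per-capita demand and emissions to bound the supportable population. Every coefficient is an illustrative assumption, not a value from the solar community data.

    # Toy balance: how many residents can a prototype community support, given a
    # limited water yield from its green area and the need to offset its CO2
    # emissions?  All coefficients are illustrative assumptions.

    green_area_ha        = 400.0      # planted / forested area of the prototype
    water_yield_m3_ha_yr = 2500.0     # usable water yielded per hectare per year
    water_demand_m3_cap  = 120.0      # per-capita annual water demand

    co2_emission_t_cap   = 4.0        # per-capita annual CO2 emissions
    co2_uptake_t_ha_yr   = 10.0       # CO2 fixed per hectare of planting per year

    supportable_by_water = green_area_ha * water_yield_m3_ha_yr / water_demand_m3_cap
    supportable_by_co2   = green_area_ha * co2_uptake_t_ha_yr / co2_emission_t_cap

    population_limit = int(min(supportable_by_water, supportable_by_co2))
    print(f"water-limited population : {supportable_by_water:,.0f}")
    print(f"CO2-limited population   : {supportable_by_co2:,.0f}")
    print(f"sustainable population   : {population_limit:,}")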

Keywords: the solar community, the sustainable condition, the natural limitation, the algorithmic calculation

Procedia PDF Downloads 110
168 Research Project of National Interest (PRIN-PNRR) DIVAS: Developing Methods to Assess Tree Vitality after a Wildfire through Analyses of Cambium Sugar Metabolism

Authors: Claudia Cocozza, Niccolò Frassinelli, Enrico Marchi, Cristiano Foderi, Alessandro Bizzarri, Margherita Paladini, Maria Laura Traversi, Eleftherious Touloupakis, Alessio Giovannelli

Abstract:

The development of tools to quickly identify the fate of injured trees after stress is highly relevant when biodiversity restoration of damaged sites is based on nature-based solutions. In this context, an approach to assess irreversible physiological damages within trees could help to support planning management decisions of perturbed sites to restore biodiversity, for the safety of the environment and understanding functionality adjustments of the ecosystems. Tree vitality can be estimated by a series of physiological proxies like cambium activity, starch, and soluble sugars amount in C-sinks whilst the accumulation of ethanol within the cambial cells and phloem is considered an alert of cell death. However, their determination requires time-consuming laboratory protocols, which makes the approach unfeasible as a practical option in the field. The project aims to develop biosensors to assess the concentration of soluble sugars and ethanol in stem tissues. Soluble sugars and ethanol concentrations will be used to define injured trees to discriminate compromised and recovering trees in the forest directly. To reach this goal, we select study sites subjected to prescribed fires or recent wildfires as experimental set-ups. Indeed, in Mediterranean countries, forest fire is a recurrent event that must be considered as a central component of regional and global strategies in forest management and biodiversity restoration programs. A biosensor will be developed through a multistep process related to target analytes characterization, bioreceptor selection, and, finally, calibration/testing of the sensor. To validate biosensor signals, soluble sugars and ethanol will be quantified by HPLC and GC using synthetic media (in lab) and phloem sap (in field) whilst cambium vitality will be assessed by anatomical observations. On burnt trees, the stem growth will be monitored by dendrometers and/or estimated by tree ring analyses, whilst the tree response to past fire events will be assessed by isotopic discrimination. Moreover, the fire characterization and the visual assessment procedure will be used to assign burnt trees to a vitality class. At the end of the project, a well-defined procedure combining biosensor signal and visual assessment will be produced and applied to a study case. The project outcomes and the results obtained will be properly packaged to reach, engage and address the needs of the final users and widely shared with relevant stakeholders involved in the optimal use of biosensors and in the management of post-fire areas. This project was funded by National Recovery and Resilience Plan (NRRP), Mission 4, Component C2, Investment 1.1 - Call for tender No. 1409 of 14 September 2022 – ‘Progetti di Ricerca di Rilevante interesse Nazionale – PRIN’ of Italian Ministry of University and Research funded by the European Union – NextGenerationEU; Grant N° P2022Z5742, CUP B53D23023780001.

Keywords: phloem, scorched crown, conifers, prescribed burning, biosensors

Procedia PDF Downloads 16
167 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs additional algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The validation of the models is done by comparing the results obtained by the FDTD method, through the electric field values and the currents in the components, with the analytical results using circuit parameters.
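
The Courant limit mentioned above ties the admissible time step directly to the Yee cell size, which is why the parametric analysis of cell size also constrains the temporal discretization. The short Python sketch below evaluates the standard 3-D limit for a few progressively finer cubic cells; the cell sizes are illustrative, not the values used in this work.

    import numpy as np

    # How the Courant limit on the FDTD time step tightens as the Yee cell that
    # discretises a lumped component is refined.

    c0 = 299_792_458.0                     # speed of light in vacuum (m/s)

    def courant_dt(dx, dy, dz, c=c0, safety=0.99):
        """Maximum stable time step for the 3-D Yee scheme (times a safety factor)."""
        return safety / (c * np.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

    for cell in (1e-3, 0.5e-3, 0.25e-3, 0.1e-3):      # uniform cubic cells (m)
        dt = courant_dt(cell, cell, cell)
        print(f"cell = {cell*1e3:5.2f} mm -> dt_max = {dt*1e12:6.3f} ps")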

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 153
166 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted increasing attention recently, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on this self-constructed corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT)-measured bone mineral density (BMD) has been considered more accurate than DXA for BMD measurement in recent years. We develop an automatic phantom-less QCT (PL-QCT) system that is more efficient for BMD measurement since it requires no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese literature items whose titles are related to osteoporosis. The whole process is done automatically, including crawling literature in .pdf format, localizing text/figure/table regions with a layout segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-Rank Adaptation (LoRA, rank=10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to have the model predict the next word in the text, with the loss function defined as the cross-entropy between the predicted and ground-truth word. The experiment is run on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm that localizes a vertebra-parallel cylinder in cancellous bone. Because no phantom is available for BMD calibration, the ROI BMD is calculated from the CT-BMD of the patient's own muscle and fat. Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis', and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may include training a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, GPT models sometimes give unexpected outputs, such as repetitive text or seemingly plausible but wrong answers (so-called 'hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves a mean absolute error (MAE) of 0.1448 mg/cm² (spine) and 0.0002 mg/cm² (hip) and linear correlation coefficients of R²=0.9970 (spine) and R²=0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis.
Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporosis patients.
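
A minimal sketch of the continued pre-training step described above is given below, assuming the Hugging Face transformers and peft libraries; the checkpoint name, target modules, and hyperparameters other than the LoRA rank of 10 are placeholders, and the single training step shown is only meant to make the next-token cross-entropy objective explicit.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "huggyllama/llama-7b"          # placeholder LLaMA-7B checkpoint name
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Rank-10 LoRA adapters (the rank matches the abstract; the other
    # hyperparameters and target modules are assumptions).
    lora_cfg = LoraConfig(r=10, lora_alpha=20, lora_dropout=0.05,
                          target_modules=["q_proj", "v_proj"],
                          task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()

    # One illustrative step of continued pre-training: setting labels equal to
    # input_ids yields the next-token cross-entropy loss described above.
    batch = tok(["骨质疏松症的诊断标准是什么？"], return_tensors="pt")
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])
    out.loss.backward()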

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 71
165 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for 'hydrogen homes' (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, few studies to date have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type, and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from across five distinct groups: (1) fuel-poor householders, (2) technology-engaged householders, (3) environmentally engaged householders, (4) technology- and environmentally engaged householders, and (5) a baseline group (n=~700) that filters out each of the smaller targeted groups (n=~350). This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and with demographic groups affected by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is tested for fit to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology- and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations regarding community benefits, appliance performance, and potential cost savings.
Based on these preliminary findings, policymakers should treat raising public awareness of hydrogen as an urgent priority, in alignment with energy security, fuel poverty, and net-zero agendas.
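
For readers who want to see what a second-order acceptance model looks like in code, the sketch below writes the measurement structure in lavaan-style syntax and estimates it with the open-source semopy package in Python, assuming that package's Model/fit/inspect interface and that it accepts higher-order factors. The item names and data file are placeholders, and this is a stand-in for the authors' Mplus and PLS workflow rather than a reproduction of it.

    import pandas as pd
    from semopy import Model

    # Lavaan-style specification of the second-order acceptance model sketched
    # above.  Item names (att1, sp1, ...) and the CSV file are placeholders.
    desc = """
    Attitudinal    =~ att1 + att2 + att3
    SocioPolitical =~ sp1 + sp2 + sp3
    Community      =~ com1 + com2 + com3
    Market         =~ mkt1 + mkt2 + mkt3
    Behavioral     =~ beh1 + beh2 + beh3
    Acceptance     =~ Attitudinal + SocioPolitical + Community + Market + Behavioral
    """

    data = pd.read_csv("survey_items.csv")   # one column per survey item
    model = Model(desc)
    model.fit(data)                          # maximum-likelihood estimation by default
    print(model.inspect())                   # loadings and second-order paths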

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114
164 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production

Authors: Mohammad Emamjome Kashan, Alan S. Fung

Abstract:

In Canada, more than 32% of total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to meet the building heating load and provide domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be exploited for diurnal solar thermal energy storage. Due to the energy mismatch between the building heating load and solar irradiation availability, relatively large storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy considerable space, which is relatively expensive, especially in big cities. This project investigates the possibility of using a specific building construction material (ICF – Insulated Concrete Form) as diurnal solar thermal energy storage integrated with a heat pump and a microchannel solar thermal (MCST) collector. Little of the literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution to the solar thermal mismatch. By using ICF walls that are integrated into the building envelope instead of large storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the thermal storage benefits of ICF walls against a system without ICF walls. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by the solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process) to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank, and the warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with the tank to deliver the building heating load, and DHW is also provided from the main tank. The results show that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months of the year and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis of the different parameters influencing the solar fraction is discussed in detail.
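
The solar fraction figures quoted above follow from simple energy bookkeeping: the fraction of the combined heating and DHW load not met by auxiliary energy. The Python sketch below shows that calculation on illustrative monthly figures, not on the TRNSYS output of this study.

    # Solar fraction bookkeeping: SF = 1 - Q_auxiliary / (Q_heating + Q_DHW).
    # Monthly figures below are illustrative placeholders, not simulation output.

    monthly_load_kwh = [2100, 1800, 1500, 900, 400, 150, 100, 120, 350, 900, 1400, 1900]
    monthly_aux_kwh  = [ 600,  450,  300, 120,  30,   0,   0,   0,  25, 110,  250,  500]

    annual_load = sum(monthly_load_kwh)
    annual_aux = sum(monthly_aux_kwh)
    solar_fraction = 1.0 - annual_aux / annual_load
    print(f"annual solar fraction: {solar_fraction:.1%}")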

Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector

Procedia PDF Downloads 121