Search results for: transmission error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3767

257 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative

Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.

Abstract:

Mild steel is widely employed as a construction material for pipe work in oil and gas production, such as downhole tubulars, flow lines and transmission pipelines, and in the chemical and allied industries for handling acids, alkalis and salt solutions, owing to its excellent mechanical properties and low cost. Acid solutions are widely used to remove undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling and the acidizing of oil wells. Mild steel exhibits poor corrosion resistance in hydrochloric acid. Its high reactivity in this medium is due to the solubility of the ferrous chloride formed, and the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel in concentrated acids, causing complete destruction of the metal: hydrogen from the acid reacts with the metal surface, embrittles it and causes cracks, which leads to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion is considered the first line of defense against corrosion. Despite the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors that can be conveniently synthesized from relatively cheap raw materials and that provide good inhibition efficiency with minimal risk of environmental pollution.
The aim of the present work is to evaluate the electrochemical parameters of the corrosion inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indol-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid interface is mixed, with predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
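As a rough illustration of how such efficiencies are computed, inhibition efficiency can be derived either from Tafel corrosion current densities or from EIS charge-transfer resistances. The sketch below uses invented numbers, not the paper's measurements:

```python
def inhibition_efficiency_tafel(i_corr_blank, i_corr_inhibited):
    """IE% = (i0 - i) / i0 * 100 from Tafel-extrapolated corrosion
    current densities (blank vs. inhibited solution)."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

def inhibition_efficiency_eis(rct_blank, rct_inhibited):
    """IE% = (Rct_inh - Rct_0) / Rct_inh * 100 from EIS charge-transfer
    resistances; a larger Rct means a better-protected surface."""
    return (rct_inhibited - rct_blank) / rct_inhibited * 100.0

# Illustrative values only (mA/cm² and Ω·cm²), not the study's data:
print(inhibition_efficiency_tafel(1.20, 0.06))  # -> 95.0
print(inhibition_efficiency_eis(25.0, 500.0))   # -> 95.0
```

Both formulas are the standard textbook definitions; the 95% figure here simply mirrors the maximum efficiency the abstract reports.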

Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization

Procedia PDF Downloads 336
256 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has potential applications in assessing and monitoring the biophysical properties of plants using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential of open-data-source satellite images (Sentinel-2 and Landsat 9) for estimating the biophysical properties of a wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and their physicochemical properties (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. Scatter plots with the coefficient of determination (R²) and Root Mean Square Error (RMSE), together with a correlation (r) matrix prepared using the corr and heatmap Python functions, were used to compare the performance of the Sentinel-2 and Landsat 9 vegetation indices against the drone and the SpectraVue 710s spectrophotometer.
The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52) and that its soil texture is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plots and the correlation matrix showed that the Sentinel-2 vegetation indices correlate relatively better with the vegetation indices of the Buteo drone than the Landsat 9 indices do, whereas the Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Overall, Sentinel-2 performed better than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current results.
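The sensor comparison described above reduces to correlating homologous vegetation-index values between platforms. A minimal pure-Python sketch of that Pearson-correlation step, with placeholder NDVI values standing in for the study's five sampling points:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sensors'
    vegetation-index series at the same sampling points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative NDVI values at five points (placeholders, not the
# study's measurements):
sentinel2 = [0.42, 0.55, 0.61, 0.48, 0.66]
uav       = [0.44, 0.57, 0.63, 0.49, 0.68]
landsat9  = [0.40, 0.50, 0.58, 0.47, 0.60]

print(round(pearson_r(sentinel2, uav), 3))
print(round(pearson_r(landsat9, uav), 3))
```

In practice the study presumably assembled such series for all ten indices and rendered the full r matrix as a heatmap (e.g., with pandas `corr` and seaborn `heatmap`, as the abstract's wording suggests).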

Keywords: Landsat 9, leaf spectrometer, Sentinel-2, UAV

Procedia PDF Downloads 107
255 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements target the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because of its good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, the sum of a person's latent trait on the construct statement A measures and statement A's utility, versus the sum of the person's latent trait on the construct statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as strongly prefer statement A, prefer statement A, prefer statement B, and strongly prefer statement B. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate.
In the first control method, items were frozen once their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method outperformed the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one a work survey with dichotomous MPC items, the other a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that among the four methods the last performed best.
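The first control method above (freezing items whose statements have reached an exposure cap, then choosing by maximum information) can be sketched as follows; the item IDs, statement labels and information values are invented for illustration, and real MCAT would recompute Fisher information at each interim trait estimate rather than use fixed values:

```python
def select_item(pool, exposure, max_uses=2):
    """First WPSE-control method: an item is 'frozen' (skipped) once
    either of its two statements has already been shown max_uses times;
    among the remaining items, pick the one with maximum (here:
    precomputed) information, then update statement exposure counts."""
    eligible = [it for it in pool
                if exposure[it["a"]] < max_uses and exposure[it["b"]] < max_uses]
    if not eligible:
        return None  # no administrable item left
    best = max(eligible, key=lambda it: it["info"])
    pool.remove(best)
    exposure[best["a"]] += 1
    exposure[best["b"]] += 1
    return best

# Invented item bank: each MPC item pairs two statements (a, b).
pool = [
    {"id": 0, "a": "s1", "b": "s2", "info": 0.9},
    {"id": 1, "a": "s1", "b": "s3", "info": 0.8},
    {"id": 2, "a": "s2", "b": "s3", "info": 0.5},
    {"id": 3, "a": "s1", "b": "s4", "info": 0.7},
]
exposure = dict.fromkeys(["s1", "s2", "s3", "s4"], 0)
picks = []
while (item := select_item(pool, exposure)) is not None:
    picks.append(item["id"])
# Item 3 is never administered despite its high information:
# statement s1 hits the exposure cap after items 0 and 1.
print(picks)  # -> [0, 1, 2]
```

The trade-off the abstract reports is visible even here: the cap suppresses an otherwise informative item, which is why the authors' second (randomized) method fared better.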

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 246
254 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials and lubricants, based on data on wear processes, in order to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the unit's parts. While the engine is running, oil samples are taken and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the sample, which carry important and detailed information about the wear processes in the engine transmission units. These parameters include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin.
Such morphological analysis of wear particles has been introduced into the routine monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear: the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and subsequently of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the engine drive box over its operating time, the kinetics of the wear processes were studied. From the results of this study, and by comparing series of tribodiagnostic criteria, wear-state ratings and the statistics of the morphological analysis, norms for the normal operating regime were developed. The study allowed the development of wear-state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased-wear mode and, accordingly, for preventing engine failures in flight.
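The size criteria quoted above (spherical particles up to 10 μm at the onset of surface-fatigue wear, flocculent particles of 20-200 μm as it develops) suggest a simple size-based screen for the computer-aided counting step; the thresholds and labels below are illustrative only, not the authors' classification procedure:

```python
def classify_wear_particle(diameter_um):
    """Rough size-based screen following the size ranges quoted in the
    abstract; thresholds and labels are illustrative assumptions."""
    if diameter_um <= 10:
        return "spherical (onset of surface-fatigue wear, microcracking)"
    if 20 <= diameter_um <= 200:
        return "flocculent (developed surface-fatigue wear)"
    return "outside the fatigue-wear size ranges"

print(classify_wear_particle(5))
print(classify_wear_particle(100))
```

A production system would of course also use the morphological features the abstract lists (shape, size ratio, surface state), not size alone.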

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 259
253 Comparison of Gestational Diabetes Influence on the Ultrastructure of Rectus Abdominis Muscle in Women and Rats

Authors: Giovana Vesentini, Fernanda Piculo, Gabriela Marini, Debora Damasceno, Angelica Barbosa, Selma Martheus, Marilza Rudge

Abstract:

Problem statement: Skeletal muscle is highly adaptable; muscle fiber composition and size can respond to a variety of stimuli, both physiological, such as pregnancy, and metabolic, such as diabetes mellitus. This study aimed to analyze the effects of pregnancy-associated diabetes on the rectus abdominis muscle (RA) and to compare these changes between rats and women. Methods: Female Wistar rats were maintained under controlled conditions and distributed into Pregnant (P) and Long-term mild pregnant diabetic (LTMd) groups (n = 3 rats/group). Diabetes was induced in rats by streptozotocin (100 mg/kg, sc) on the first day of life, producing a hyperglycemic state of 120-300 mg/dL in adult life. Female rats were mated overnight and, at day 21 of pregnancy, were anesthetized and killed for harvesting of the maternal RA. Pregnant women who attended the Diabetes Prenatal Care Clinic of Botucatu Medical School were distributed into Pregnant non-diabetic (Pnd) and Gestational Diabetic (GDM) groups (n = 3 women/group). The diagnosis of GDM was established according to the ADA criteria (2016). The RA was harvested during the cesarean section. Transversal cross-sections of the RA of both women and rats were analyzed by transmission electron microscopy. All procedures were approved by the Ethics Committee on Animal Experiments of the Botucatu Medical School (Protocol Number 1003/2013) and by the Botucatu Medical School Ethical Committee for Human Research in Medical Sciences (CAAE: 41570815.0.0000.5411). Results: The photomicrographs of the RA of rats revealed disorganized Z lines, thinning sarcomeres, and a usual quantity of intermyofibrillar mitochondria in the P group. The LTMd group showed swollen sarcoplasmic reticulum, dilated T tubules and areas with sarcomere disruption. The ultrastructural analysis of the RA of Pnd non-diabetic women showed well-organized myofibrils forming intact sarcomeres, organized Z lines and a normal distribution of intermyofibrillar mitochondria.
The GDM group revealed an increase in intermyofibrillar mitochondria, areas with sarcomere disruption and increased lipid droplets. Conclusion: Pregnancy and diabetes induce adaptations in the ultrastructure of the rectus abdominis muscle in both women and rats, changing the architectural design of these tissues. However, in rats these changes are more severe, perhaps because, in addition to high blood glucose levels, the quadrupedal posture subjects the muscle to excessive gravity-induced mechanical tension during pregnancy. These findings suggest that such alterations are a risk factor contributing to the development of muscle dysfunction in women with GDM and may motivate treatment strategies for these patients.

Keywords: gestational diabetes, muscle dysfunction, pregnancy, rectus abdominis

Procedia PDF Downloads 292
252 Transition Dynamic Analysis of the Urban Disparity in Iran (Case Study: Iran Province Centers)

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequality cannot reflect internal changes in a country in terms of displacement between development groups, and inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. This paper therefore examines the dynamics of regional inequality transmission in Iran over the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. First, 25 indicators are selected across five dimensions (macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities), and a two-stage principal component analysis is applied to create a composite inequality index. In the second stage, using a nonparametric approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index for the Iranian province centers is tested, and the long-run equilibrium is shown on the basis of the ergodic density. At this stage, to support accurate regional policies, the distribution dynamics and the convergence or divergence process of the Iranian provinces are also examined for each of the five dimensions. According to the first-stage results, in both 2006 and 2016 the highest level of development belongs to Tehran, while Zahedan is at the lowest level. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces.
Based on the second-stage results, which examine the dynamics of regional inequality transmission during 2006-2016, the distribution in the first year (2006) is essentially unimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the rest of the distribution lying above -0.1. The kernel distribution shows a convergence process, with the graph pointing to a single main peak at about -0.6 and a small secondary peak near 3. In the final year (2016), there is no mobility in the lower-level groups, but at the higher level about 45% of the provinces take a CIRD value of about -0.4. This year clearly shows a twin-peak density pattern, indicating that cities cluster into groups of similar development levels, with a distinct group of cities remaining at a low level of development. According to the distribution-dynamics results, the Iranian provinces follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016 at low and moderate levels of the inequality index; in terms of the development index, the country diverges over the years 2006 to 2016.
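The kernel density estimates underlying these unimodal/bimodal readings can be illustrated with a plain Gaussian KDE; the CIRD-like values below are invented to mimic the described 2006 pattern (a main peak near -0.6 plus one high outlier), not the study's data:

```python
from math import exp, pi, sqrt

def gaussian_kde(sample, h):
    """Gaussian kernel density estimate with bandwidth h over a
    cross-section of index values; returns a callable density f(x)."""
    n = len(sample)
    norm = n * h * sqrt(2 * pi)
    return lambda x: sum(exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / norm

# Invented CIRD-like values: most observations clustered near -0.6,
# one high outlier standing in for the small secondary peak.
cird_2006 = [-1.0, -0.8, -0.7, -0.6, -0.6, -0.5, -0.3, -0.1, 0.4, 3.0]
f = gaussian_kde(cird_2006, h=0.3)
print(round(f(-0.6), 3))  # high density at the main peak
print(round(f(1.5), 3))   # near-zero density between peak and outlier
```

Evaluating f over a grid for each year and comparing peak counts is what distinguishes the single-peak 2006 distribution from the twin-peak 2016 one; the full stochastic-kernel analysis additionally estimates the conditional density of the 2016 index given the 2006 index.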

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density

Procedia PDF Downloads 124
251 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging

Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain

Abstract:

Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM) and Transmission Electron Microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier.
In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4° (111), 8.5° (200), 9.2° (002), 10.8° (002), 12.1° (220), 16.7° (103), and 17.1° (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UiO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.

Keywords: liver cancer cells, metal-organic frameworks, doxorubicin (DOX), drug release

Procedia PDF Downloads 8
250 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf

Authors: Abderrazak Bannari, Ghadeer Kadhem

Abstract:

The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf east of Saudi Arabia and west of Qatar (26° 00' N, 50° 33' E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from ship-track MBES data coverage with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into raster format in five steps: (1) export of the GEBCO BASE surface data to an ASCII file; (2) conversion of the ASCII file to a point shapefile; (3) extraction of the points covering the water boundary of the Kingdom of Bahrain; (4) multiplication of the depth values by -1 to obtain negative values; and (5) simple kriging in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points at different depths over the Bahrain national water boundary were extracted from a medium-scale nautical chart (1:100,000). The nautical chart was scanned, georeferenced and overlaid on the raster bathymetric grid surface generated in step 5, and homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong coefficient of determination (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical chart and the derived MBES-CARIS depths when considering only the shallow areas with depths of less than 10 m (about 800 validation points).
When considering only the deeper areas (> 10 m), the coefficient of determination is 0.73 and the RMSE is ± 2.43 m, while for the totality of the 2,200 validation points across all depths the coefficient of determination is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and a rough seafloor probably affect the acquired raw MBES data, and the interpolation of missing values between MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of those areas. Overall, however, the MBES-CARIS data are very appropriate for bathymetric mapping of shallow-water areas.
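The validation statistics quoted above (R² and RMSE between chart depths and MBES-CARIS depths at homologous points) are straightforward to compute; the depth pairs below are invented placeholders, not the study's 2,200 validation points:

```python
from math import sqrt

def rmse(pred, obs):
    """Root mean square error between predicted and observed depths."""
    n = len(pred)
    return sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)

def r_squared(pred, obs):
    """Coefficient of determination of predictions against observations."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Illustrative homologous depth pairs (m); negative values = below datum,
# as produced by step 4 of the workflow above.
chart = [-2.0, -4.5, -6.1, -8.0, -9.5]   # digitized nautical-chart depths
mbes  = [-2.3, -4.1, -6.6, -7.8, -9.9]   # kriged MBES-CARIS grid depths
print(round(rmse(mbes, chart), 2), round(r_squared(mbes, chart), 3))
```

Running these per depth stratum (< 10 m, > 10 m, all points), as the abstract does, exposes how interpolation error grows in the deeper, sparsely ensonified areas.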

Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water

Procedia PDF Downloads 381
249 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation

Authors: Joanna Gerszon, Aleksandra Rodacka

Abstract:

In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to determine whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand binding does not influence the secondary structure of GAPDH. Detailed molecular docking analysis of the interactions inside the active center suggests that these effects are due to piceatannol's ability to form a covalent bond with the nucleophilic cysteine residue (Cys149) directly involved in the catalytic reaction. Molecular docking also showed that 11 molecules of the ligand can be bound to the dehydrogenase simultaneously. Taking these data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy: transmission electron microscopy and Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation.
Finally, studies in a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents the GAPDH nuclear translocation induced by excessive oxidative stress in hippocampal cells and, in consequence, counteracts cell apoptosis. These studies demonstrate that, by binding GAPDH, piceatannol blocks the cysteine residue and counteracts the oxidative modifications that induce GAPDH oligomerization and aggregation, and that it protects hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).

Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation

Procedia PDF Downloads 167
248 Antimicrobial Resistance of Acinetobacter baumannii in Veterinary Settings: A One Health Perspective from Punjab, Pakistan

Authors: Minhas Alam, Muhammad Hidayat Rasool, Mohsin Khurshid, Bilal Aslam

Abstract:

The genus Acinetobacter has emerged as a significant concern in hospital-acquired infections, particularly because of the versatility of Acinetobacter baumannii in causing nosocomial infections. The organism's remarkable metabolic adaptability allows it to thrive in diverse niches spanning the environment, animals, and humans. However, the extent of antimicrobial resistance in Acinetobacter species from veterinary settings, especially in developing countries like Pakistan, remains unclear. This study aimed to isolate and characterize Acinetobacter spp. from veterinary settings in Punjab, Pakistan. A total of 2,230 specimens were collected, including 1,960 samples from veterinary settings (nasal and rectal swabs from dairy and beef cattle), 200 from the environment, and 70 from human clinical settings. Isolates were identified using routine microbiological procedures and confirmed by polymerase chain reaction (PCR). Antimicrobial susceptibility was determined by the disc diffusion method, and minimum inhibitory concentration (MIC) was measured by the broth microdilution method. Molecular techniques, such as PCR and DNA sequencing, were used to screen for antimicrobial resistance determinants, and genetic diversity was assessed using standard techniques. The results showed that the overall prevalence of A. baumannii in cattle was 6.63% (65/980), with a higher prevalence in dairy cattle, 7.38% (54/731), than in beef cattle, 4.41% (11/249). Of the 65 A. baumannii isolates, 18 (27.7%) were carbapenem-resistant A. baumannii (CRAB). The prevalence of A. baumannii was higher in nasopharyngeal swabs, 87.7% (57/65), than in rectal swabs, 12.3% (8/65). The class D β-lactamase genes blaOXA-23 and blaOXA-51 were present in all CRAB isolates from cattle. Among the carbapenem-resistant isolates, 94.4% (17/18) were positive for the class B β-lactamase gene blaIMP, whereas the blaNDM-1 gene was detected in only one isolate of A. baumannii.
Among the 70 clinical isolates of A. baumannii, 61 (87.1%) were CRAB, 58 (82.9%) were positive for the blaOXA-23-like gene, and all carried the blaOXA-51-like gene; hence, blaOXA-23 and blaOXA-51 co-existed in 82.9% of the clinical isolates. From the environmental settings, a total of 18 A. baumannii isolates were recovered, among which 38.9% (7/18) showed carbapenem resistance; the class D β-lactamase genes blaOXA-51 and blaOXA-23 co-existed in these 38.9% (7/18) of isolates. MLST results showed ten different sequence types (STs) among the clinical isolates, with ST589 being the most common among the carbapenem-resistant isolates, while ST2 was most common among CRAB isolates from cattle. Immediate control measures are needed to prevent the transmission of CRAB among animals, the environment, and humans. Further studies are warranted to understand the mechanisms by which antibiotic resistance spreads and to implement effective disease control programs.

Keywords: Acinetobacter baumannii, carbapenemases, drug resistance, MLST

Procedia PDF Downloads 71
247 In vivo Estimation of Mutation Rate of the Aleutian Mink Disease Virus

Authors: P.P. Rupasinghe, A.H. Farid

Abstract:

The Aleutian mink disease virus (AMDV, Carnivore amdoparvovirus 1) causes persistent infection, plasmacytosis, and the formation and deposition of immune complexes in various organs in adult mink, leading to glomerulonephritis, arteritis and sometimes death. The disease has neither a cure nor an effective vaccine, and identification and culling of mink positive for anti-AMDV antibodies have not been successful in controlling the infection in many countries. The failure to eradicate the virus from infected farms may be caused by keeping false-negative individuals on the farm or by virus transmission from wild animals or neighboring farms. The identification of sources of infection, which can be performed by comparing viral sequences, is important for the success of viral eradication programs. High mutation rates could cause inaccuracies when viral sequences are used to trace an infection back to its origin. There is no published information on the mutation rate of AMDV either in vivo or in vitro. In vivo estimation is the most accurate method, but it is difficult to perform because of inherent technical complexities, namely infecting live animals, the unknown number of viral generations (i.e., infection cycles), the removal of deleterious mutations over time, and genetic drift. The objective of this study was to estimate the in vivo mutation rate of AMDV. A homogenate was prepared from the spleen of one naturally infected American mink (Neovison vison) from Nova Scotia, Canada (parental template). The near full-length genome of this isolate (91.6%, 4,143 bp) was bidirectionally sequenced. A group of black mink was inoculated with this homogenate (descendant mink). Spleen samples were collected from 10 descendant mink at 16 weeks post-inoculation (wpi) and from another 10 mink at 176 wpi, and their near full-length genomes were bidirectionally sequenced. 
Sequences of these mink were compared with each other and with the sequence of the parental template. The number of nucleotide substitutions at 176 wpi was 3.1 times greater than that at 16 wpi (113 vs. 36), whereas the estimated mutation rate at 176 wpi was 3.1 times lower than that at 16 wpi (9.13×10⁻⁴ vs. 2.85×10⁻³ substitutions/site/year), showing a decreasing trend in the mutation rate per unit of time. Although there are no reports of in vivo estimates of the mutation rates of DNA viruses in animals obtained by the method used in the current study, these estimates are at the higher end of the range of values reported for DNA viruses determined by various techniques. Such high estimates are plausible given the wide range of diversity and pathogenicity of AMDV isolates. The results suggest that increases in the number of nucleotide substitutions over time and the subsequent divergence make it difficult to accurately trace AMDV isolates back to their origin when several years have elapsed between two samplings.
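The rate calculation itself is straightforward: substitutions divided by the product of the number of sites compared and the elapsed time. The sketch below uses the genome length and animal counts given above, but the exact denominators used by the authors are not stated in the abstract, so these values are illustrative only:

```python
def mutation_rate(substitutions, sites, years):
    """Substitutions per site per year: k = m / (L * t)."""
    return substitutions / (sites * years)

# Illustrative values patterned on the study: 4,143 bp sequenced,
# 10 descendant mink per time point (assumed denominators).
L = 4143 * 10                                   # total sites compared
rate_16wpi = mutation_rate(36, L, 16 / 52)      # ~2.8e-3, near the reported 2.85e-3
rate_176wpi = mutation_rate(113, L, 176 / 52)   # same order as the reported 9.13e-4
print(f"{rate_16wpi:.2e}, {rate_176wpi:.2e}")
```

The per-unit-time decline reported above falls out directly: the substitution count grows sub-linearly with elapsed time, so dividing by a much larger t at 176 wpi yields a smaller rate.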

Keywords: Aleutian mink disease virus, American mink, mutation rate, nucleotide substitution

Procedia PDF Downloads 124
246 Teen Insights into Drugs, Alcohol, and Nicotine: A National Survey of Adolescent Attitudes toward Addictive Substances

Authors: Linda Richter

Abstract:

Background and Significance: The influence of parents on their children’s attitudes and behaviors is immense, even as children grow out of what one might assume to be their most impressionable years and into teenagers. This study specifically examines the potential that parents have to prevent or reduce the risk of adolescent substance use, even in the face of considerable environmental influences to use nicotine, alcohol, or drugs. Methodology: The findings presented are based on a nationally representative survey of 1,014 teens aged 12-17 living in the United States. Data were collected using an online platform in early 2018. About half the sample was female (51%), 49% was aged 12-14, and 51% was aged 15-17. The margin of error was +/- 3.5%. Demographic data on the teens and their families were available through the survey platform. Survey items explored adolescent respondents’ exposure to addictive substances; the extent to which their sources of information about these substances are reliable or credible; friends’ and peers’ substance use; their own intentions to try substances in the future; and their relationship with their parents. Key Findings: Exposure to nicotine, alcohol, or other drugs and misinformation about these substances were associated with a greater likelihood that adolescents have friends who use drugs and that they have intentions to try substances in the future, which are known to directly predict actual teen substance use. In addition, teens who reported a positive relationship with their parents and having parents who are involved in their lives had a lower likelihood of having friends who use drugs and of having intentions to try substances in the future. This relationship appears to be mediated by parents’ ability to reduce the extent to which their children are exposed to substances in their environment and to misinformation about them. 
Indeed, the findings indicated that teens who reported a good relationship with their parents and those who reported higher levels of parental monitoring had significantly higher odds of reporting a lower number of risk factors than teens with a less positive relationship with parents or less monitoring. There also were significantly greater risk factors associated with substance use among older teens relative to younger teens. This shift appears to coincide directly with the tendency of parents to pull back in their monitoring and their involvement in their adolescent children’s lives. Conclusion: The survey findings underscore the importance of resisting the urge to completely pull back as teens age and demand more independence since that is exactly when the risks for teen substance use spike and young people need their parents and other trusted adults to be involved more than ever. Particularly through the cultivation of a healthy, positive, and open relationship, parents can help teens receive accurate and credible information about substance use and also monitor their whereabouts and exposure to addictive substances. These findings, which come directly from teens themselves, demonstrate the importance of continued parental engagement throughout children’s lives, regardless of their age and the disincentives to remaining involved and connected.

Keywords: adolescent, parental monitoring, prevention, substance use

Procedia PDF Downloads 146
245 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process in which various species of worms break down a mixture of decomposing vegetable or food waste and bedding materials into vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals onto vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the laboratory experimental results. This study also investigated the application of two machine learning models, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), both evaluated using the coefficient of determination (R²) and mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out using the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 g, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH of 7.71. 
Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted results. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
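For reference, the two evaluation metrics used above are simple to compute; this is a generic sketch on hypothetical removal-efficiency data, not the study's dataset:

```python
def mse(y_true, y_pred):
    """Mean square error between observed and predicted values."""
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical removal efficiencies (%), for illustration only:
observed = [29.0, 55.2, 71.8, 90.5, 98.9]
predicted = [31.0, 54.0, 70.0, 91.5, 97.9]
print(round(r_squared(observed, predicted), 4), round(mse(observed, predicted), 4))
```

An R² near 1 (as reported for ANN here) indicates that the model explains almost all of the variance in the observed removal efficiencies, while MSE reports the average squared residual in the same units squared.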

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 111
244 Zeolite 4A-confined Ni-Co Nanocluster: An Efficient and Durable Electrocatalyst for Alkaline Methanol Oxidation Reaction

Authors: Sarmistha Baruah, Akshai Kumar, Nageswara Rao Peela

Abstract:

The global energy crisis, driven by dependence on fossil fuels and their limited reserves, together with environmental pollution, is a key concern for the research community. Alcohol-based fuel cells, such as those fueled by methanol, are anticipated to be a reliable future energy technology due to their high energy density, environmental friendliness, and ease of storage and transportation. To drive the anodic methanol oxidation reaction (MOR) in direct methanol fuel cells (DMFCs), an active and long-lasting catalyst is necessary for efficient energy conversion from methanol. Recently, transition metal-zeolite-based materials have been considered versatile catalysts for a variety of industrial and lab-scale processes. Large specific surface area, well-organized micropores, and adjustable acidity/basicity are characteristics of zeolites that make them excellent supports for immobilizing small-sized and highly dispersed metal species. Significant advancement in the production and characterization of well-defined metal clusters encapsulated within zeolite matrices has substantially expanded the library of available materials and, consequently, their catalytic efficacy. In this context, we developed bimetallic Ni-Co catalysts encapsulated within LTA (also known as 4A) zeolite via in-situ encapsulation of metal species during hydrothermal treatment, followed by a chemical reduction process. The prepared catalyst was characterized using advanced techniques such as X-ray diffraction (XRD), field emission transmission electron microscopy (FETEM), field emission scanning electron microscopy (FESEM), energy dispersive X-ray spectroscopy (EDX), and X-ray photoelectron spectroscopy (XPS). The electrocatalytic activity of the catalyst for MOR was measured in an alkaline medium at room temperature using cyclic voltammetry (CV) and chronoamperometry (CA). 
The resulting catalyst exhibited a catalytic activity of 12.1 mA cm⁻² at 1.12 V vs. Ag/AgCl and retained remarkable stability (~77%) even after a 1000-cycle CV test for the electro-oxidation of methanol in alkaline media, without any significant microstructural changes. The high surface area, good integration of Ni-Co species in the zeolite, and the ample surface hydroxyl groups contribute to highly dispersed active sites and quick analyte diffusion, which provide notable MOR kinetics. Thus, this study opens up new possibilities for developing noble metal-free, zeolite-based electrocatalysts for DMFC applications, owing to the simple synthesis steps, potential for large-scale fabrication, improved stability, and efficient activity.

Keywords: alkaline media, bimetallic, encapsulation, methanol oxidation reaction, LTA zeolite

Procedia PDF Downloads 65
243 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations

Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh

Abstract:

Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes in order to determine an optimum penalty factor, considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from clinical data. Qualitative features of clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV. 
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise is reduced more strongly than SUV at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, using a larger number of iterations, achieves greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
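For context, the relative difference penalty used by BSREM is commonly written in the PET literature as follows (a sketch of the standard formulation; the weights w_jk and the edge-preservation parameter γ are implementation details that may differ in the vendor's software):

```latex
% BSREM maximizes a penalized Poisson log-likelihood:
\Phi(x) = L(x) - \beta\, R(x),
\qquad
R(x) = \sum_{j}\sum_{k \in N_j} w_{jk}\,
       \frac{(x_j - x_k)^2}{x_j + x_k + \gamma\,\lvert x_j - x_k \rvert}
```

Here x_j are voxel values, N_j is the neighborhood of voxel j, and β is the noise-penalization strength varied between 100 and 500 in this study; a larger β penalizes differences between neighboring voxels more strongly, which is consistent with the observed decrease in noise (and, to a lesser degree, in SUV) as β grows.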

Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy

Procedia PDF Downloads 96
242 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given the variability of manual surveys, this method has shown inconsistencies in distress classification and measurement. This can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. 
Variations in types and severity of distresses, different pavement surface textures and colors and presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and to predict pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
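Although the paper excerpt does not fix a formula, crack density is most simply defined as total mapped crack length per unit pavement area; a hypothetical sketch:

```python
def crack_density(crack_lengths_m, section_area_m2):
    """Crack density as total mapped crack length per unit pavement area (m/m²).
    Inputs are hypothetical survey outputs, e.g. from an automated crack map."""
    return sum(crack_lengths_m) / section_area_m2

# Hypothetical crack map for a 5 m x 5 m pavement section:
lengths = [3.2, 1.8, 4.5, 0.5]            # metres of cracking, per crack
density = crack_density(lengths, 25.0)
print(density)  # -> 0.4 m/m²
```

A crack density index along these lines, possibly weighted by crack severity class, could then be tracked per section over time to support the maintenance and rehabilitation planning described above.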

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
241 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness

Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach

Abstract:

The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O'Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, laws can provide a foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote the understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance. 
The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law- and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, in order to be better prepared for potential future public health events.

Keywords: public health law, surveillance, policy, legal, data

Procedia PDF Downloads 141
240 The Sustained Utility of Japan's Human Security Policy

Authors: Maria Thaemar Tana

Abstract:

The paper examines the policy and practice of Japan's human security. Specifically, it asks: How does Japan's shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework. In addition, the paper aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of recent developments in Japan's security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan has faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power, on the one hand, and actual foreign policy, on the other. 
State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders' perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan's sustained utilization of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures acting indirectly (independent variables) and domestic processes acting directly (intervening variables). Two cases of Japan's human security practice in two regions and two time periods are examined: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan's decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels are examined to understand how the interaction among them results in changes and continuities in Japan's human security policy.

Keywords: human security, foreign policy, neoclassical realism, peace-building

Procedia PDF Downloads 133
239 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and thereby providing smaller errors and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being provided by the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. 
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its components used individually was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models can potentially improve data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement.
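The two-layer architecture described above can be illustrated with a deliberately simplified sketch: two weak base learners stand in for the five FFNNs, and a least-squares blend of their outputs stands in for the XGB second layer. This is illustrative scaffolding under those stated substitutions, not the authors' code; their base and meta learners are far more expressive:

```python
def fit_mean(xs, ys):
    """Base learner 1: predicts the training mean (stand-in for one FFNN)."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(xs, ys):
    """Base learner 2: one-feature least-squares line (stand-in for another FFNN)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def fit_stack(models, xs, ys):
    """Second layer: blend the two base predictions with a least-squares weight
    (stand-in for the XGB combiner, which takes layer-1 outputs as input)."""
    pa = [models[0](x) for x in xs]
    pb = [models[1](x) for x in xs]
    d = [p - q for p, q in zip(pa, pb)]
    r = [y - q for y, q in zip(ys, pb)]
    w = sum(ri * di for ri, di in zip(r, d)) / sum(di * di for di in d)
    return lambda x: w * models[0](x) + (1 - w) * models[1](x)

def rmse(model, xs, ys):
    return (sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Toy data standing in for CO₂ flux against a single driver:
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8]
base = [fit_mean(xs, ys), fit_linear(xs, ys)]
ensemble = fit_stack(base, xs, ys)
print([round(rmse(m, xs, ys), 3) for m in base + [ensemble]])
```

Because the blend weight is fit by least squares and the pure base models are special cases (w = 1 or w = 0), the stacked model's training RMSE can never exceed that of its best component, which mirrors the "at least as good as any single algorithm" motivation given above.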

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 139
238 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of SFS dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of SFS dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. 
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
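To make the idea of direct deconvolution concrete, here is a minimal one-dimensional sketch (illustrative only; the paper works with 3-D turbulence fields and a family of invertible filters): a Gaussian filter is applied in Fourier space and then approximately inverted by a truncated van Cittert iteration, one standard route to approximate deconvolution:

```python
import numpy as np

def gaussian_filter(u, delta, dx):
    """Apply a Gaussian filter of width delta to a periodic 1-D field in
    Fourier space, using the common LES transfer function exp(-k^2 delta^2 / 24)."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    G = np.exp(-(k ** 2) * delta ** 2 / 24.0)
    return np.real(np.fft.ifft(np.fft.fft(u) * G))

def van_cittert(u_filtered, delta, dx, iterations=5):
    """Approximate inverse filtering by truncated van Cittert iteration:
    u_{n+1} = u_n + (f - G u_n), starting from u_0 = f."""
    u = u_filtered.copy()
    for _ in range(iterations):
        u = u + (u_filtered - gaussian_filter(u, delta, dx))
    return u

# Toy periodic signal standing in for a turbulent field:
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5 * np.sin(4 * x) + 0.2 * np.sin(12 * x)
f = gaussian_filter(u, delta=4 * dx, dx=dx)        # resolved (filtered) field
u_rec = van_cittert(f, delta=4 * dx, dx=dx)        # approximately deconvolved field
print(np.linalg.norm(u_rec - u) < np.linalg.norm(f - u))
```

Each iteration reduces the per-mode error by a factor (1 - G(k)), so the recovered field is strictly closer to the unfiltered one; in an SFS model the deconvolved field would then be used to reconstruct the subfilter stress, and the FGR discussion above corresponds to how well the small scales are resolved before deconvolution is attempted.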

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 75
237 Composition and Catalytic Behaviour of Biogenic Iron Containing Materials Obtained by Leptothrix Bacteria Cultivation in Different Growth Media

Authors: M. Shopska, D. Paneva, G. Kadinov, Z. Cherkezova-Zheleva, I. Mitov

Abstract:

Iron-containing materials are used as catalysts in different processes. Chemical methods for their synthesis require toxic and expensive chemicals, sophisticated devices, and energy-consuming processes that raise their cost; dangerous waste products are also formed. Such syntheses are now out of date, and waste-free technologies are indispensable. Bio-inspired technologies are consistent with these ecological requirements. Different microorganisms participate in the biomineralization of iron, and some phytochemicals are involved, too. Methods for the biogenic production of iron-containing materials are clean, simple, non-toxic, cheaper, and realized at ambient temperature and pressure. Biogenic iron materials embrace different iron compounds. Due to their origin, these substances are nanosized, amorphous or poorly crystalline, and porous, and they have a number of useful properties, such as superparamagnetism (SPM), high magnetism, low toxicity, biocompatibility, absorption of microwaves, a high surface-area-to-volume ratio, and surface active sites with unusual coordination, which distinguish them from bulk materials. Biogenic iron materials are applied in heterogeneous catalysis in different roles: precursor, active component, support, and immobilizer. The application of biogenic iron oxide materials gives rise to increased catalytic activity in comparison with materials of abiotic origin. In our study, we investigated the catalytic behavior of biomasses obtained by cultivation of Leptothrix bacteria in three nutrition media: Adler, Fedorov, and Lieske. The biomass composition was studied by Moessbauer spectroscopy and transmission IR spectroscopy. Catalytic experiments on CO oxidation were carried out using in situ DRIFTS. Our results showed that: i) the biomasses used contain α-FeOOH, γ-FeOOH, and γ-Fe2O3 in different ratios; ii) the biomass formed in the Adler medium contains γ-FeOOH as the main phase.
The CO conversion was about 50%, as evaluated from the decrease in integrated band intensity in the gas-mixture spectra during the reaction. The main phase in the spent sample is γ-Fe2O3; iii) the biomass formed in the Lieske medium contains α-FeOOH. The CO conversion was about 20%. The main phase in the spent sample is α-Fe2O3; iv) the biomass formed in the Fedorov medium contains γ-Fe2O3 as the main phase. The CO conversion in the test reaction was about 19%. The results showed that the catalytic activity up to 200°C resulted predominantly from α-FeOOH and γ-FeOOH, while the catalytic activity at temperatures higher than 200°C was due to the formation of γ-Fe2O3. The oxyhydroxides, which are the principal compounds in the biomass, have low catalytic activity in the studied reaction; the maghemite has relatively good catalytic activity; the hematite has activity commensurate with that of the oxyhydroxides. Moreover, it can be affirmed that the catalytic activity is inherent to the maghemite obtained by transformation of the biogenic lepidocrocite, i.e., maghemite with a biogenic precursor.

Keywords: nanosized biogenic iron compounds, catalytic behavior in reaction of CO oxidation, in situ DRIFTS, Moessbauer spectroscopy

Procedia PDF Downloads 369
236 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks

Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska

Abstract:

Gaining control over particle shape, size, and crystallinity is an ongoing challenge for many materials. Metal-organic frameworks (MOFs), in particular, are now widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature, since it can affect the material's further application. Seeking new approaches that enable the modulation of MOF morphology is therefore important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for supplying the metal source is to use salts, which usually exhibit high solubility and hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation. Metal oxides can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. However, there are other oxides, for instance Co₃O₄, which is often overlooked due to its structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of their efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to a Co-MOF via a solvothermal approach is proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate) in the presence of urea and hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After receiving the cobalt hydroxide, the calcination process was performed at various temperatures (300–500 °C). The cobalt oxides, as a source of cobalt cations, were then subjected to reaction with trimesic acid in a solvothermal environment at a temperature of 120 °C, leading to Co-MOF fabrication.
The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how the solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials received were characterized with analytical techniques, including X-ray powder diffraction, energy-dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOF particles varied in shape and size. The diffractograms showed a crystalline phase for Co₃O₄ and also for the Co-MOF. The Co₃O₄ obtained from nitrates and with low-temperature calcination resulted in smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of conversion and the morphology of the Co-MOF. The highest conversion was achieved using metal oxides with small crystallites.

Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell

Procedia PDF Downloads 162
235 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One approach commonly implemented is known as 'direct numerical simulation', DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
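The claim that DNS grids become prohibitively large can be made concrete with the standard scaling argument: the Kolmogorov scale η relates to the integral scale L as η/L ~ Re^(-3/4), so a 3D grid resolving η needs on the order of Re^(9/4) points. A back-of-the-envelope sketch (the Reynolds numbers below are illustrative, not taken from the study):

```python
# DNS grid-count estimate: eta/L ~ Re^(-3/4), so resolving the Kolmogorov
# scale in three dimensions needs ~ (L/eta)^3 = Re^(9/4) grid points.
def dns_grid_points(reynolds_number):
    return reynolds_number ** (9.0 / 4.0)

for re_num in (1e4, 1e6, 1e8):
    print(f"Re = {re_num:.0e}: ~{dns_grid_points(re_num):.1e} grid points")
```

Even at a moderate Re of 10^6 the estimate reaches ~10^13.5 points, which illustrates why DNS remains out of reach for industrial problems.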

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 137
234 Machine Learning and Internet of Thing for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of the variables can only be mastered by collecting and transmitting data to decision centers through the Internet of things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at heights close to 5000 m a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances should lie between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the network devices (Internet of things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex microcontrollers (e.g., the Cortex-M series), together with high-performance direct current (DC) to direct current (DC) converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning and the data entered into the big data store converge every second, providing services to each of the applications of the complex system so as to return the best data on the determined flows.

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 160
233 Plant Microbiota of Coastal Halophyte Salicornia Ramossisima

Authors: Isabel N. Sierra-Garcia, Maria J. Ferreira, Sandro Figuereido, Newton Gomes, Helena Silva, Angela Cunha

Abstract:

Plant-associated microbial communities are considered crucial in the adaptation of halophytes to coastal environments. The plant microbiota can be horizontally acquired from the environment or vertically transmitted from generation to generation via seeds. Recruitment of the microbial communities by the plant is affected by geographical location, soil source, host genotype, and cultivation practice. There is limited knowledge of the microbial communities in halophytes and of the influence of biotic and abiotic factors on them. In this work, the microbiota associated with the halophyte Salicornia ramosissima was investigated to determine whether the structure of the bacterial communities is influenced by host genotype or soil source. For this purpose, two contrasting sites where S. ramosissima is established in the estuarine system of the Ria de Aveiro were investigated. One site corresponds to a natural salt marsh where S. ramosissima plants are present (wild plants), and the other site is a former salt pan that is nowadays subjected to intensive crop production of S. ramosissima (crop plants). Bacterial communities from the rhizosphere, seeds, and root endosphere of S. ramosissima from both sites were investigated by sequencing the bacterial 16S rRNA gene using the Illumina MiSeq platform. The analysis of the sequences showed that the three plant-associated compartments, rhizosphere, root endosphere, and seed endosphere, harbor distinct microbiomes. However, bacterial richness and diversity were highest in seeds of wild plants, followed by the rhizosphere at both sites, while seeds at the crop site had the lowest diversity. Beta diversity measures indicated that the bacterial communities in the root endosphere and seeds were similar in wild and crop plants, in contrast to the rhizospheres, which differed by site, indicating that the recruitment of similar bacterial communities by the plant genotype is active regardless of the site.
Moreover, the bacterial communities from the root endosphere and rhizosphere were phylogenetically similar at both sites, but the phylogenetic composition of seeds at the wild and crop sites was distinct. These results indicate that cultivation practices affect the seed microbiome; however, minimal vertical transmission of bacteria from seeds to adult plants is expected. Seeds from the crop site showed higher abundances of the Kushneria and Zunongwangia genera. Bacterial members of the classes Alphaproteobacteria and Bacteroidia were the most ubiquitous across sites and compartments and might encompass members of the core microbiome. These findings indicate that the bacterial communities associated with S. ramosissima are influenced more by host genotype than by local abiotic factors or cultivation practices. This study provides a better understanding of the composition of the plant microbiota in S. ramosissima, which is essential to predict the interactions between the plant and its associated microbial communities and their effects on plant health. This knowledge is useful for the manipulation of these microbial communities to enhance the health and productivity of this commercially important plant.

Keywords: halophytes, plant microbiome, Salicornia ramosissima, agriculture

Procedia PDF Downloads 169
232 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube

Authors: Alak Kumar Patra, Nilanjan Mitra

Abstract:

A higher strength-to-weight ratio is the main advantage of sandwich composite structures. Interfacial delamination between the face sheet and the core is a major problem in these structures. Many research works are devoted to improving the interfacial fracture toughness of composites, the majority of which concern nano- and laminated composites. Work on the influence of a multiwalled carbon nanotube (MWCNT) dispersed resin system on the interface fracture of glass-epoxy PVC-core sandwich composites is extremely limited. In this study, a finite element analysis is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC-core sandwich composites with and without MWCNT. Results demonstrate an improvement in the interface fracture toughness values (Gc) of samples with certain percentages of MWCNT. In addition, the dispersion of MWCNT in epoxy resin through sonication, followed by mixing of the hardener and vacuum resin infusion (VRI), as used in this study, is an easy and cost-effective methodology in comparison to the previously adopted methods, which are limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for the maximum gain in interfacial fracture toughness. The results agree with the finite element study, the high-resolution transmission electron microscope (HRTEM) analysis, and the fracture micrographs of the field emission scanning electron microscope (FESEM) investigation. The interface fracture toughness (Gc) of the DCB sandwich samples is calculated using the compliance calibration (CC) method, considering the modification due to shear. The compliance (C) vs. crack length (a) data of the modified sandwich DCB specimen are fitted to a power function of crack length.
The calculated mean value of the exponent n from the plots of the experimental results is 2.22, which differs from the value (n = 3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis for the modified compliance calibration method). Differentiating C with respect to crack length (a) and substituting it into the expression for Gc provides its value. The research demonstrates an improvement of 14.4% in peak load carrying capacity and 34.34% in interface fracture toughness Gc for samples with 1.5 wt% MWCNT (weight % taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper focuses on the significant improvement in the experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without MWCNT, achieved using the much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM at 1.5 wt% MWCNT addition in comparison to the other percentages of MWCNT. FESEM studies have also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility is also observed to be higher for samples with MWCNT than for samples without.
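The compliance-calibration computation described above can be sketched numerically. The data values below are hypothetical, for illustration only (not the authors' measurements); the relations used are the standard power-law fit C = k·aⁿ and Gc = (P²/2b)·dC/da:

```python
import numpy as np

# Hypothetical compliance-calibration data for a DCB sandwich specimen
a = np.array([30.0, 35.0, 40.0, 45.0, 50.0])        # crack lengths, mm
C = np.array([0.8, 1.15, 1.55, 2.05, 2.6]) * 1e-3   # compliance, mm/N
b = 25.0        # specimen width, mm (illustrative)
P_c = 120.0     # critical load at a = 40 mm, N (illustrative)

# Fit C = k * a**n via linear regression in log-log space
n, log_k = np.polyfit(np.log(a), np.log(C), 1)
k = np.exp(log_k)

# G_c = (P^2 / 2b) * dC/da, with dC/da = n * k * a**(n - 1)
a_eval = 40.0
dCda = n * k * a_eval ** (n - 1)
G_c = P_c ** 2 / (2.0 * b) * dCda   # N/mm

print(f"fitted exponent n = {n:.2f}, G_c = {G_c:.4f} N/mm")
```

Consistent with the abstract, the fitted exponent from the data (rather than the ASTM value n = 3) is used when differentiating the compliance.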

Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite

Procedia PDF Downloads 303
231 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects involved when human factors engineering is applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of the grafts. A high percentage of solid organs arrive at the recipient hospitals in the UK and are considered injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in organ transplantation (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased graft arrives at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for the tele-pathological evaluation (TPE) of the grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of renal graft (RG), liver graft (LG), and pancreatic graft (PG) tissues was analyzed. In effect, this corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture, for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS) similar to an OTE-TS for simulating the integrated VS-based microscopic TPE of RG, LG, and PG. Simulation of DM on TS-based TPE was performed by two specialists on a total of 238 human renal graft (RG), 172 liver graft (LG), and 108 pancreatic graft (PG) digital microscopic tissue images, for inflammatory and neoplastic lesions, on the four electronic spaces of the four TS used.
Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG, and PG tissues on the electronic spaces of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the electronic space of a desktop, followed by the electronic space of the applied Exp.-TS. Tablet and mobile-phone electronic spaces seem significantly risky for the application of DM in OT (p < .001). Conclusion: To achieve the largest reduction in errors and adverse events relating to the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG, and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for the remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 280
230 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels

Authors: Shin Woo Kim, Eui Ju Lee

Abstract:

The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, especially, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by the lack of detailed combustion properties such as the auto-ignition temperature and pollutant emissions such as NOx and soot, which mainly concern vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For the auto-ignition temperature and NOx emission, numerical simulation was performed on a well-stirred reactor (WSR) to simulate a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. Also, the response surface method (RSM) was introduced as a design of experiments (DOE), which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables, i.e., ethanol mole fraction, equivalence ratio, and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but NOx yields decrease with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbon increases dramatically with increasing ethanol content, which results from incomplete combustion and hence requires adjusting the combustion itself rather than relying on an after-treatment system. RSM results analyzed with the three independent variables predict the auto-ignition temperature accurately.
However, for NOx emission there was a large difference between the calculated values and the values predicted using conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the steep variation of the dependent variable, NOx emission was transformed into common logarithms and fitted again with RSM. NOx emission predicted through the logarithm transformation is in fairly good agreement with the experimental results. For a more tangible understanding of the effect of gasoline/ethanol fuel on pollutant emissions, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as a fire source in laboratory-scale experiments. Three measurement methods were introduced to clarify the pollutant emissions, i.e., various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). Soot yield by gravimetric sampling decreased dramatically as ethanol was added, but NOx emission was almost comparable regardless of ethanol mole fraction. The morphology of the soot particles was investigated to address the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was observed clearly in the soot of gasoline containing more ethanol, whereas mature soot was found for the undiluted gasoline fuel.
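The logarithm-transformation step can be illustrated with a small synthetic example (the response surface and variable ranges below are invented stand-ins for the WSR data, not the study's results): fitting a quadratic RSM surface to log10(NOx) rather than to NOx itself lets the polynomial follow a response that varies over orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design points: ethanol mole fraction, equivalence ratio, residence time (s)
x1 = rng.uniform(0.0, 0.85, 40)
x2 = rng.uniform(0.7, 1.3, 40)
x3 = rng.uniform(0.05, 1.0, 40)

# Synthetic NOx response that varies steeply (orders of magnitude)
nox = 10.0 ** (0.5 + 2.0 * (x2 - 1.0) - 0.8 * x1 + 1.2 * x3 - 0.6 * x3 ** 2)

def quadratic_design_matrix(x1, x2, x3):
    """Full second-order RSM model in three factors."""
    return np.column_stack([
        np.ones_like(x1), x1, x2, x3,      # linear terms
        x1 * x2, x1 * x3, x2 * x3,         # interaction terms
        x1 ** 2, x2 ** 2, x3 ** 2,         # pure quadratic terms
    ])

X = quadratic_design_matrix(x1, x2, x3)

# Fit the raw response vs. the log10-transformed response
beta_raw, *_ = np.linalg.lstsq(X, nox, rcond=None)
beta_log, *_ = np.linalg.lstsq(X, np.log10(nox), rcond=None)

# Compare relative errors of the two fits back in NOx space
pred_raw = X @ beta_raw
pred_log = 10.0 ** (X @ beta_log)
err_raw = np.median(np.abs(pred_raw - nox) / nox)
err_log = np.median(np.abs(pred_log - nox) / nox)
print(err_log < err_raw)  # the log-transformed surface follows the steep response better
```

The same quadratic model that fails on the raw values fits the log-transformed response closely, mirroring the improvement reported in the abstract.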

Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)

Procedia PDF Downloads 212
229 Difficulties for Implementation of Telenursing: An Experience Report

Authors: Jacqueline A. G. Sachett, Cláudia S. Nogueira, Diana C. P. Lima, Jessica T. S. Oliveira, Guilherme K. M. Salazar, Lílian K. Aguiar

Abstract:

The Polo Telemedicine Amazon offers several tools for professionals working in primary health care, such as second formative opinions, teleconsulting, and training across the different areas, including medicine, dentistry, nursing, and physiotherapy, among others. These activities follow a monthly schedule and are freely accessible to the registered municipalities of Amazonas. On this premise, and in partnership with the University of the State of Amazonas (UEA), it promotes the teaching-research-extension triad in order to collaborate in the enrichment and acquisition of knowledge through educational practices carried out via teleconferences. Nursing has therefore joined the project as a collaborator, contributing to the education and training of the professionals who are part of the health system across the Amazon. The aim of this study is to report the experience of nursing students of the Amazonas State University (UEA) in the extension project underway at the Polo Telemedicine Amazon. This was a descriptive study of the experience-report type, based on the extension project 'Telenursing: teleconsulting and second formative opinion for FHS professionals in the state of Amazonas', held at the Polo Telemedicine Amazon through an agreement with the UEA and funded by the Amazonas Research Foundation from July 2012 to July 2016. Initially, an active search for Family Health Strategy professionals was carried out in order to train the teams to use the virtual clinic, as well as the virtual learning environment, which is the focus of this tool's design. The election period was an aggravating factor for the implementation of the teleconsulting proposal, due to the change of managers in each municipality, requiring a pause until the new managers assumed their positions. From then on, the need for new training was established.
The first videoconference took place on March 14, 2013, for learning and training in the use of the Virtual Learning Environment and the Virtual Clinic, with the participation of the municipalities of Novo Aripuanã, São Paulo de Olivença, and Manacapuru. Throughout the project, a literature review was carried out on what is being done and produced at the national level on the subject. To date, the telenursing project has received twenty-five (25) consultancy requests, all submitted by nursing professionals, and all have been answered. The lived experience, particularly with videoconferencing, revealed several difficulties, such as fluctuation in the number of participants in the activities, difficulty for participants in reconciling the opening hours of the units with the videoconference schedule, transmission problems, and schedule changes. It was concluded that establishing the connection between the telehealth points is one of the main factors in the implementation of telenursing and that this resource is still new for nursing. However, effective training and continuing updates may provide this professional category with the means to deliver quality health care in the Amazon.

Keywords: Amazon, teleconsulting, telehealth, telenursing

Procedia PDF Downloads 310
228 3D Interactions in Under Water Acoustic Simulations

Authors: Prabu Duplex

Abstract:

Due to stringent emission regulation targets, the large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution vector. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in shallow waters where marine habitats exist. This raises concerns over the impacts of underwater noise on marine species, for example from bridge construction in ocean straits. Environmental organisations say such a bridge would be devastating to aquatic life, since ocean straits are important places of transit for marine mammals, and some of the highest concentrations of biodiversity in the world are found in these areas. The investigation of the ship noise and piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirement for software that can model transmission loss at the high frequencies that may occur during the construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection, or diffraction. In many cases, such models provide sufficient accuracy and can produce three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances the use of 2D models may not be sufficient to accurately model the sound propagation. A possible example is a scenario where an island or land mass is situated between the source and the receiver. The 2D model will produce a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will occur, bending the sound around the land mass.
In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy are environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, and by means of a case study, this work addresses the importance of 3D interactions in underwater acoustics. The methodology used in this study can also be applied to other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modelling for environmental impact assessments. Future work includes inter-model comparison in shallow-water environments considering more of the physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
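The transect-style estimate that 2D (and ultimately 3D) models refine can be sketched in its simplest form: geometric spreading plus frequency-dependent absorption, with the received level obtained as source level minus transmission loss. The source level and absorption coefficient below are illustrative assumptions, and this is in no way a substitute for a full propagation model:

```python
import math

def transmission_loss_db(r_m, alpha_db_per_km=0.0, spreading=20.0):
    """Range-dependent transmission loss: geometric spreading plus absorption.
    spreading = 20 gives spherical (deep-water) spreading; 10 gives
    cylindrical (shallow-water) spreading."""
    return spreading * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0

source_level = 190.0   # dB re 1 uPa @ 1 m, illustrative piling-type source

for r in (100, 1000, 10000):
    rl = source_level - transmission_loss_db(r, alpha_db_per_km=0.5)
    print(f"range {r:>5} m: received level {rl:.1f} dB re 1 uPa")
```

The 2D and 3D models discussed in the abstract replace this range-only curve with transect- and azimuth-resolved fields, which is precisely where horizontal diffraction effects appear.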

Keywords: underwater acoustics, naval, maritime, cetaceans

Procedia PDF Downloads 19