Search results for: generalised gradient approximation
196 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction
Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez
Abstract:
Residual stresses are self-equilibrating stresses in a rigid body that act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the response of the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar spacing. Being very sensitive, the technique is also susceptible to variations in the measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples and the analysis geometry are some of the variables that need to be considered and analyzed in order to approach the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finishing: grinding and polishing. Additionally, iron powder with particle size less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were kept in position and, under the same analysis conditions, seven measurements were carried out at 11 Ψ tilts. To verify sample positioning errors, seven measurements were performed by repositioning the sample before each measurement. To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. To verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
Keywords: residual stress, X-ray diffraction, repeatability, reproducibility, error analysis
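For context, stress evaluation from Ψ-tilt measurements of this kind is usually based on the standard sin²ψ relation of X-ray stress analysis (a textbook form, not quoted from the abstract):

$$\varepsilon_{\phi\psi} = \frac{d_{\phi\psi} - d_0}{d_0} = \frac{1+\nu}{E}\,\sigma_{\phi}\,\sin^2\psi \;-\; \frac{\nu}{E}\,\left(\sigma_{11} + \sigma_{22}\right)$$

where d_{φψ} is the interplanar spacing measured at tilt ψ, d₀ the stress-free spacing, and E and ν the elastic constants of the diffracting planes; the stress σ_φ follows from the slope of d_{φψ} versus sin²ψ.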
Procedia PDF Downloads 181
195 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi
Abstract:
The radial form of the nuclear matter distribution, the charge and the shape of nuclei are essential properties of nuclei and hence are of great interest for several areas of research in nuclear physics. The last three decades have witnessed a range of experimental means employing leptonic probes (such as muons, electrons, etc.) for exploring nuclear charge distributions, whereas hadronic probes (for example, alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data quite nicely as compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.
Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation
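In the optical limit approximation named in the keywords, the elastic scattering amplitude is usually written in the standard eikonal/Glauber form (a generic reference expression, not taken from the paper):

$$F(q) = \frac{ik}{2\pi}\int d^2b\; e^{i\mathbf{q}\cdot\mathbf{b}}\left[1 - e^{i\chi(b)}\right], \qquad \frac{d\sigma}{d\Omega} = |F(q)|^2,$$

where b is the impact parameter and χ(b) the eikonal phase obtained by folding the projectile and target densities with the nucleon-nucleon profile function (with the Coulomb phase added in the Coulomb modified version).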
Procedia PDF Downloads 162
194 DNA Nano Wires: A Charge Transfer Approach
Authors: S. Behnia, S. Fathizadeh, A. Akhshani
Abstract:
In recent decades, DNA has attracted increasing interest for potential technological applications that are not directly related to the coding of functional proteins, i.e., to the expression of genetic information. One of the most interesting applications of DNA is related to the construction of nanostructures of high complexity and the design of functional nanostructures in nanoelectronic devices, nanosensors and nanocircuits. In this field, DNA is of fundamental interest to the development of DNA-based molecular technologies, as it possesses ideal structural and molecular recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Also, the robust, one-dimensional flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor switch, or rectifier depending on its electronic properties. In order to understand the mechanism of charge transport along DNA sequences, numerous studies have been carried out. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach that is intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the non-diagonal matrix elements on the intersite displacements is considered. In this approach, the coupling between the charge and the lattice deformation is along the helix. This model is a tight-binding linear nanoscale chain originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and have high sensitivity to initial conditions. We have therefore moved toward nonlinear dynamics and phase space analysis. Nonlinear dynamics and chaos theory, regardless of any approximation, could open new horizons to understand the conductivity mechanism in DNA. For a detailed study, we have examined the current flowing in DNA and investigated the characteristic I-V diagram. As a result, it is shown that there are (quasi-)ohmic regions in the I-V diagram. On the other hand, regions with a negative differential resistance (NDR) are detectable in the diagram.
Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent
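For reference, the generic SSH-type Hamiltonian with hopping terms that depend on the intersite displacements, as described above, can be written as (notation chosen here for illustration, not copied from the paper):

$$H = \sum_n\left[\varepsilon_n c_n^{\dagger}c_n - t_0\bigl(1-\alpha\,(u_{n+1}-u_n)\bigr)\bigl(c_{n+1}^{\dagger}c_n + c_n^{\dagger}c_{n+1}\bigr)\right] + \sum_n\left[\frac{p_n^2}{2m} + \frac{K}{2}\,(u_{n+1}-u_n)^2\right],$$

where t₀ is the bare hopping integral, α the charge-lattice coupling constant, uₙ and pₙ the displacement and momentum of site n, and K the harmonic spring constant of the lattice.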
Procedia PDF Downloads 425
193 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data can occur in an observation due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel parameters cannot be determined exactly in closed form, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes at each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence, since the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating the derivative approximations at each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. In this method, we need the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. This indicates that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
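A minimal sketch of this estimation procedure, assuming the two-parameter Gumbel (maximum) log-likelihood and using the BFGS routine available in SciPy; the rainfall values below are illustrative placeholders, not the Purworejo data:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative block maxima (e.g., monthly maximum daily rainfall, mm)
rain = np.array([92.0, 110.5, 87.3, 140.2, 101.8, 95.4, 123.9, 78.6, 132.1, 99.7])

def neg_log_likelihood(params, x):
    """Negative log-likelihood of the two-parameter Gumbel distribution."""
    mu, log_beta = params            # optimize log(beta) so the scale stays positive
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

# Moment-based starting values
beta0 = np.sqrt(6.0) * rain.std(ddof=1) / np.pi
mu0 = rain.mean() - 0.5772 * beta0

result = minimize(neg_log_likelihood, x0=[mu0, np.log(beta0)], args=(rain,), method="BFGS")
mu_hat, beta_hat = result.x[0], np.exp(result.x[1])
print(f"location mu = {mu_hat:.2f}, scale beta = {beta_hat:.2f}")
```

BFGS builds up an approximation of the (inverse) Hessian from successive gradients, which corresponds to the "update of the derivative approximations" described above.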
Procedia PDF Downloads 323
192 Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal
Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey
Abstract:
Boiler coals with low fixed carbon and higher ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets by using 'syngas'. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents such as CO, CO2, H2, and CH4 is generated. Iron ore nuggets are reduced by this syngas. For that reason, there is no direct contact between the iron ore nuggets and the coal ash, which helps to minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace designed with a reduction facility is evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to find the volatile fraction present in boiler grade coal, gas chromatography (GC) to find the syngas composition at different temperatures, and furnace temperature gradient measurements to minimize the furnace cost by applying one heating coil. The nuggets are reduced in the devolatisation furnace at three different temperatures and three different times. The pre-reduced nuggets are subjected to analytical weight loss calculations to evaluate the extent of reduction. The phase and surface morphology of the pre-reduced samples are characterized using X-ray diffractometry (XRD), energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer and chemical analysis methods. The degree of metallization of the reduced nuggets is 78.9% using boiler grade coal. The pre-reduced nuggets with lower sulphur content could be used in the blast furnace as raw materials or coolant, which would reduce the high-quality coke rate of the furnace due to their pre-reduced character. They can also be used as coolant in the Basic Oxygen Furnace (BOF).
Keywords: alternative ironmaking, coal gasification, extent of reduction, nugget making, syngas based DRI, solid state reduction
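The two reduction indices used above are commonly defined as follows (standard definitions, stated here for clarity rather than quoted from the paper):

$$\text{Degree of metallization } (\%) = \frac{\mathrm{Fe}_{\mathrm{metallic}}}{\mathrm{Fe}_{\mathrm{total}}}\times 100, \qquad \text{Extent of reduction } (\%) = \frac{\text{mass of oxygen removed from the iron oxides}}{\text{mass of removable oxygen initially combined with iron}}\times 100.$$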
Procedia PDF Downloads 259
191 Therapeutic Drug Monitoring by Dried Blood Spot and LC-MS/MS: Novel Application to Carbamazepine and Its Metabolite in Paediatric Population
Authors: Giancarlo La Marca, Engy Shokry, Fabio Villanelli
Abstract:
Epilepsy is one of the most common neurological disorders, with an estimated prevalence of 50 million people worldwide. Twenty-five percent of the epilepsy population are children under the age of 15 years. For antiepileptic drugs (AED), there is a poor correlation between plasma concentration and dose, especially in children. This has been attributed to greater pharmacokinetic variability than in adults. Hence, therapeutic drug monitoring (TDM) is recommended to control toxicity while drug exposure is maintained. Carbamazepine (CBZ) is a first-line AED and the drug of first choice in trigeminal neuralgia. CBZ is metabolised in the liver into carbamazepine-10,11-epoxide (CBZE), its major metabolite, which is equipotent. This creates the need for an assay able to monitor the levels of both CBZ and CBZE. The aim of the present study was to develop and validate a LC-MS/MS method for simultaneous quantification of CBZ and CBZE in dried blood spots (DBS). The DBS technique overcomes many logistical problems, ethical issues and technical challenges faced by classical plasma sampling. LC-MS/MS has been regarded as superior to immunoassays and HPLC/UV methods owing to its better specificity and sensitivity and the lack of interference or matrix effects. Our method combines the advantages of the DBS technique and LC-MS/MS in clinical practice. The extraction process was done using methanol-water-formic acid (80:20:0.1, v/v/v). The chromatographic elution was achieved by using a linear gradient with a mobile phase consisting of acetonitrile-water-0.1% formic acid at a flow rate of 0.50 mL/min. The method was linear over the ranges 1-40 mg/L and 0.25-20 mg/L for CBZ and CBZE, respectively. The limit of quantification was 1.00 mg/L and 0.25 mg/L for CBZ and CBZE, respectively. Intra-day and inter-day assay precisions were found to be less than 6.5% and 11.8%. An evaluation of the DBS technique was performed, including the effect of the extraction solvent, spot homogeneity and stability in DBS. Results from a comparison with the plasma assay are also presented. The novelty of the present work lies in being the first to quantify CBZ and its metabolite from only one 3.2 mm DBS disc finger-prick sample (3.3-3.4 µl blood) by LC-MS/MS in a 10-minute chromatographic run.
Keywords: carbamazepine, carbamazepine-10,11-epoxide, dried blood spots, LC-MS/MS, therapeutic drug monitoring
Procedia PDF Downloads 417
190 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on the heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters including the specific heat capacity, thermal expansion, and isobaric and isothermal compressibility coefficients in natural seawater systems. The study employs advanced machine learning frameworks (Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost), with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as a decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which a reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, Arabian Sea, thermodynamics, machine learning
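A minimal sketch of such a stacked ensemble, assuming a feature matrix of temperature/salinity conditions and a target vector of measured Cp values; the synthetic data, feature names and model settings below are illustrative placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic (T in K, S in ppt) conditions spanning the ranges quoted in the abstract
X = rng.uniform([293.15, 0.5], [318.15, 35.0], size=(200, 2))
# Toy Cp target with a mild negative temperature dependence (placeholder relationship)
y = 4180.0 - 4.5 * X[:, 1] - 0.5 * (X[:, 0] - 293.15) + rng.normal(0, 2, 200)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("ada", AdaBoostRegressor(random_state=0)),
]
seml = StackingRegressor(estimators=base_learners, final_estimator=Ridge())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
seml.fit(X_tr, y_tr)
print("R2 on held-out data:", r2_score(y_te, seml.predict(X_te)))
```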
Procedia PDF Downloads 3
189 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media
Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia
Abstract:
Ultrasonic degradation is currently being used in sonochemical reactors to degrade pollutant compounds in aqueous media, such as emerging contaminants (e.g., pharmaceuticals, drugs and personal care products), because they can have adverse ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid inside sonoreactors operated at different ultrasonic frequencies. Based on the above, it is necessary to study the hydrodynamic behavior of the liquid generated by the ultrasonic irradiation in order to design efficient sonoreactors and to reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz and 1000 kHz). The performance of the sonoreactors at those frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound speed gradient between the piezoelectric element and the fluid, k-ε models were used. The piezoelectric element was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid inside the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses and turbulence quantities were evaluated by CFD and 2D-PIV measurements. The test results show that there is no necessary correlation between an increase of the ultrasonic frequency and the pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.
Keywords: CFD, reactor, ultrasound, wastewater
Procedia PDF Downloads 190
188 Reconnaissance Investigation of Thermal Springs in the Middle Benue Trough, Nigeria by Remote Sensing
Authors: N. Tochukwu, M. Mukhopadhyay, A. Mohamed
Abstract:
It is not new that Nigeria faces a continual power shortage problem due to the power demand of its vast population and its heavy reliance on nonrenewable forms of energy such as thermal power from fossil fuels. Many researchers have recommended using geothermal energy as an alternative; however, past studies focused on geophysical and geochemical investigations of this energy in the sedimentary and basement complexes, and only a few studies incorporated remote sensing methods. Therefore, in this study, a preliminary examination of geothermal resources in the Middle Benue was carried out using satellite imagery in ArcMap. A Landsat 8 scene (TIR, NIR and red spectral bands) was used to estimate the Land Surface Temperature (LST). The Maximum Likelihood Classification (MLC) technique was used to classify sites with very low, low, moderate, and high LST. The moderate and high classes correspond to possible geothermal zones, and they occupy 49% of the study area (38,077 km²). River lines were superimposed on the LST layer, and the identification tool was used to locate high temperature sites. Streams that overlap the selected sites were regarded as geothermal springs. Surprisingly, the LST results show lower temperatures (<36°C) at the famous thermal springs (Awe and Wukari) than at some unknown rivers/streams found in Kwande (38°C), Ussa (38°C), Gwer East (37°C), and Yola Cross and Ogoja (36°C). Studies have revealed that temperature increases with depth. However, this result shows excellent geothermal resource potential, as it is expected to exceed the minimum geothermal gradient of 25.47 with increasing depth. Therefore, further investigation is required to estimate the depth of the causative body, the geothermal gradients, and the sustainability of the reservoirs by geophysical and field exploration. This method has proven to be cost-effective in locating geothermal resources in the study area. Consequently, the same procedure is recommended for other regions of the Precambrian basement complex and the sedimentary basins in Nigeria to save preliminary field survey costs.
Keywords: ArcMap, geothermal resources, Landsat 8, LST, thermal springs, MLC
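A hedged sketch of one widely used single-channel LST workflow consistent with the bands listed above (band 10 brightness temperature plus an NDVI-based emissivity); the radiometric rescaling and thermal constants must be read from the scene's MTL metadata, and the defaults shown here are placeholders, not values from the study:

```python
import numpy as np

def land_surface_temperature(dn_b10, red, nir,
                             ML=3.342e-4, AL=0.1,          # band-10 radiance rescaling (from MTL)
                             K1=774.8853, K2=1321.0789):   # band-10 thermal constants (from MTL)
    """dn_b10, red, nir: NumPy arrays of band-10 digital numbers and red/NIR reflectances."""
    radiance = ML * dn_b10 + AL                            # TOA spectral radiance
    bt_k = K2 / np.log(K1 / radiance + 1.0)                # brightness temperature (K)
    ndvi = (nir - red) / (nir + red + 1e-12)
    pv = ((ndvi - ndvi.min()) / (ndvi.max() - ndvi.min() + 1e-12)) ** 2   # vegetation proportion
    emissivity = 0.004 * pv + 0.986                        # common empirical emissivity estimate
    lam, rho = 10.895e-6, 1.438e-2                         # band-10 wavelength (m) and h*c/k_B (m K)
    lst_k = bt_k / (1.0 + (lam * bt_k / rho) * np.log(emissivity))
    return lst_k - 273.15                                  # LST in deg C, ready for MLC classing
```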
Procedia PDF Downloads 188
187 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case specific. In this study, biomass production and biodiversity indices (species richness and Fisher’s α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed a higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
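A hedged sketch of this kind of model comparison, assuming a plot-level feature matrix of Sentinel-2 reflectances plus topoedaphic covariates and a biomass target; the synthetic data and the simple index-based split stand in for the real site-based training/validation design:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 12))              # e.g. 10 Sentinel-2 bands + 2 topoedaphic variables
y = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, 150)   # synthetic biomass (t/ha)
train, test = slice(0, 100), slice(100, 150)                  # stand-in for a site-based split

dnn = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000, random_state=1))
rf = RandomForestRegressor(n_estimators=500, random_state=1)

def rrmse(y_true, y_pred):
    """Relative root mean squared error, as reported in the abstract."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

for name, model in [("DNN", dnn), ("RF", rf)]:
    model.fit(X[train], y[train])
    print(name, "relative RMSE:", rrmse(y[test], model.predict(X[test])))
```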
Procedia PDF Downloads 218
186 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator
Authors: Ahmed Abdulrahman, Jalal Foroozesh
Abstract:
CO₂ based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: the contribution of CO₂ to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase CO₂ diffusion from carbonated water (CW) to the oil phase and the CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. The material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Also, longitudinal and transverse dispersion coefficients have been added to account for the CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate the CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on the CO₂ spatial spreading (accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor. A sensitivity analysis has been done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients and that a low injection rate should be implemented during CWI.
Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation
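A deliberately simplified 1D explicit sketch of the coupling described above: advection and dispersion of CO₂ in the injected water, with a first-order term transferring CO₂ from the carbonated water to the oil in each grid cell. The full simulator is 2D and fully implicit; all coefficients here are illustrative assumptions:

```python
import numpy as np

nx, dx, dt, nsteps = 100, 1.0, 0.1, 300
v, DL = 0.5, 0.8          # water velocity (m/day), longitudinal dispersion (m^2/day)
k, c_eq = 0.02, 1.0       # mass-transfer coefficient (1/day), equilibrium CO2 partitioning into oil

c_w = np.zeros(nx)        # CO2 concentration in the injected (water) phase
c_o = np.zeros(nx)        # CO2 concentration transferred to the oil phase
c_w[0] = 1.0              # carbonated-water injection boundary

for _ in range(nsteps):
    adv = -v * (c_w[1:-1] - c_w[:-2]) / dx                       # upwind advection
    disp = DL * (c_w[2:] - 2 * c_w[1:-1] + c_w[:-2]) / dx**2     # dispersion (mechanical mixing)
    transfer = k * (c_eq * c_w[1:-1] - c_o[1:-1])                # slow cross-phase CO2 diffusion
    c_w[1:-1] += dt * (adv + disp - transfer)
    c_o[1:-1] += dt * transfer
    c_w[0], c_w[-1] = 1.0, c_w[-2]                               # inlet and outflow boundaries

print("CO2 front position ~", np.argmax(c_w < 0.5) * dx, "m")
```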
Procedia PDF Downloads 176
185 Amplitude Versus Offset (AVO) Modeling as a Tool for Seismic Reservoir Characterization of the Semliki Basin
Authors: Hillary Mwongyera
Abstract:
The Semliki basin has become a frontier for petroleum exploration in recent years. Exploration efforts have resulted in extensive seismic data acquisition and the drilling of three wells, namely Turaco 1, Turaco 2 and Turaco 3. A petrophysical analysis of the Turaco 1 well was carried out to identify two reservoir zones, on which AVO modeling was performed. A combination of seismic modeling and rock physics modeling was applied during reservoir characterization and monitoring to determine the variation of seismic responses with amplitude characteristics. AVO intercept-gradient analysis applied to synthetic AVO CDP gathers classified the anomalies associated with both reservoir zones as Class 1 AVO anomalies. Fluid replacement modeling was carried out on both reservoir zones using homogeneous mixing and patchy saturation patterns to determine the effects of fluid substitution on rock property interactions. For both homogeneous mixing and patchy saturation patterns, density (ρ) showed an increasing trend with increasing brine substitution, while shear wave velocity (Vs) decreased with increasing brine substitution. The behaviour of compressional wave velocity (Vp) with increasing brine substitution for homogeneous mixing and patchy saturation gave quite interesting results. Under patchy saturation, Vp increased with increasing brine substitution. Under homogeneous mixing, however, Vp showed a slightly decreasing trend with increasing brine substitution but increased tremendously towards and at full brine saturation. A sensitivity analysis showed that density was the rock property most sensitive to brine saturation, except at full brine saturation during homogeneous mixing, where Vp showed greater sensitivity to brine saturation. Rock physics modeling was performed to predict diagnostics of reservoir quality using an inverse deterministic approach, which showed low shale content and a high degree of shale stiffness within the reservoir zones.
Keywords: Amplitude Versus Offset (AVO), fluid replacement modelling, reservoir characterization, AVO attributes, rock physics modelling, reservoir monitoring
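A hedged sketch of how intercept-gradient pairs of this kind are often computed, using the two-term Shuey approximation R(θ) ≈ A + B·sin²θ across an interface; the velocities and densities below are illustrative, not the Turaco 1 log values:

```python
import numpy as np

def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey intercept (A) and gradient (B) for an interface between layer 1 and 2."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)                                        # intercept
    B = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)    # gradient
    return A, B

# Illustrative shale over hard sand case (units: m/s, m/s, kg/m^3)
A, B = shuey_intercept_gradient(3000, 1400, 2400, 3800, 2200, 2300)
theta = np.radians(np.arange(0, 40, 5))
print("A =", round(A, 3), "B =", round(B, 3))
print("R(theta) ~", np.round(A + B * np.sin(theta) ** 2, 3))
```

A strong positive intercept combined with a negative gradient of this kind is the typical Class 1 signature referred to above.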
Procedia PDF Downloads 531
184 Study of the Transport of ²²⁶Ra Colloidal in Mining Context Using a Multi-Disciplinary Approach
Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier
Abstract:
²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and its high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e., 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) that is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction could be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and the innovative Diffusive Gradients in Thin Films (DGT) technique to measure the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy and scanning electron microscopy. All of these data, which were acquired on a rehabilitated former uranium mine, allowed us to build a geochemical model using the geochemical calculation code PhreeqC to describe, as accurately as possible, the colloidal transport of ²²⁶Ra. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
Keywords: colloids, mining context, radium, transport
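With this sampling scheme, the colloidal fraction follows by difference between the conventionally filtered concentration and the DGT-measured dissolved concentration (a simple operational definition, stated here for clarity):

$$[^{226}\mathrm{Ra}]_{\mathrm{colloidal}} \approx [^{226}\mathrm{Ra}]_{<0.2\,\mu\mathrm{m}} \;-\; [^{226}\mathrm{Ra}]_{\mathrm{DGT},\,<10\,\mathrm{nm}}.$$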
Procedia PDF Downloads 156
183 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, x and y. The effect of the vertical direction (z) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from the effect of wave nonlinearities arising from the wave amplitude itself and wave-wave interactions. The numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of the linearised wave theory. A comparison between the GN model numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
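For the solitary-wave benchmark, a commonly used first-order analytical profile (given here for reference; the paper's exact initial condition is not quoted) is

$$\eta(x,t) = a\,\operatorname{sech}^2\!\left[\sqrt{\frac{3a}{4h^3}}\,(x - ct)\right], \qquad c = \sqrt{g\,(h+a)},$$

where a is the wave amplitude, h the still-water depth and c the wave celerity.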
Procedia PDF Downloads 284
182 Application of Shore Protective Structures in Optimum Land Using of Defense Sites Located in Coastal Cities
Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad
Abstract:
Awareness of effective land use issues in coastal areas, including the protection of natural ecosystems and the coastal environment, is of great importance due to the increasing human presence along the coast. There are numerous valuable structures and heritage sites located in defence sites and waterfront areas. Marine structures such as groins, sea walls and detached breakwaters are constructed on the coast to improve coastal stability against bed erosion due to changing wave and climate patterns. Marine mechanisms and their interaction with shore protection structures need to be intensively studied. Groins are among the most prominent structures used in shore protection to create a safe environment for the coastal area by defending the land against progressive coastal erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. This structure can be submerged and provide the necessary beach protection without negative environmental impact. However, for submerged structures adopted for beach protection, the shoreline response to these structures is not well understood at present. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures so as to reduce their environmental impact. The objective of this study is to predict the beach morphology in the vicinity of submerged groins and to compare it with that of non-submerged groins, with focus on a part of the coast located in Dahane sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can be used as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements to determine the shape of the coast. Finally, the results of the present study show that using submerged groins can be efficient in controlling beach erosion without causing severe environmental impact to the coast. The outcomes of this study can be employed in the optimum design of defence sites in coastal cities to improve their efficiency in terms of re-using heritage lands.
Keywords: submerged structures, groin, shore protective structures, coastal cities
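One-line shoreline models of the kind used here are generally written as a sediment-conservation equation (a generic textbook form, not the authors' exact formulation):

$$\frac{\partial y}{\partial t} + \frac{1}{D_B + D_C}\,\frac{\partial Q}{\partial x} = 0,$$

where y(x,t) is the shoreline position, Q the longshore sediment transport rate, and D_B + D_C the active profile height (berm height plus depth of closure); linearizing Q in the breaking wave angle yields the classical diffusion-type (Pelnard-Considère) equation used as a first approximation of shoreline evolution near groins.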
Procedia PDF Downloads 316
181 Discovering Event Outliers for Drug as Commercial Products
Authors: Arunas Burinskas, Aurelija Burinskiene
Abstract:
On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period, which is often too long. A critical issue is therefore that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps to protect the data from being biased downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that high demand variability products had outliers. Among the analyzed drugs, which are commercial products: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is equal to 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is equal to 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though there are some outlier detection methods for drug data cleaning, they have not been used for the minimization of the recovery period once a shortage has occurred. The authors use the Grubbs test, a real-life data cleaning method, for outlier adjustment. In the paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, the Grubbs test has been used to detect outliers for cancer drugs, with positive results reported. The Grubbs test is used to detect outliers that exceed the boundaries of a normal distribution. The result is a probability that indicates the core data of actual sales. The outlier test quantifies the difference between the sample mean and the most extreme data point relative to the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with the Grubbs test method. The suggested framework is applicable during the shortage event and recovery periods. The proposed framework has practical value and could be used for the minimization of the recovery period required after the occurrence of a shortage event.
Keywords: drugs, Grubbs' test, outlier, shortage event
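A minimal sketch of the two-sided Grubbs test at the 99% confidence level described above, written with SciPy; the daily sales series is an illustrative placeholder, not data from the study:

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.01):
    """Return (index, value) of the most extreme point if it is a Grubbs outlier, else None."""
    x = np.asarray(x, dtype=float)
    n = x.size
    idx = np.argmax(np.abs(x - x.mean()))
    G = np.abs(x[idx] - x.mean()) / x.std(ddof=1)       # largest deviation scaled by std dev
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)         # critical t value
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return (idx, x[idx]) if G > G_crit else None

daily_sales = [12, 14, 11, 13, 12, 15, 13, 2, 14, 12]   # a shortage day shows up as 2
print(grubbs_outlier(daily_sales))                      # -> (7, 2.0)
```

The statistic compares the largest deviation from the sample mean, scaled by the standard deviation, with a critical value derived from the t distribution, which matches the description above; removing the detected point and re-running the test handles one outlier at a time.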
Procedia PDF Downloads 132
180 Assessment of Metal Dynamics in Dissolved and Particulate Phase in Human Impacted Hooghly River Estuary, India
Authors: Soumita Mitra, Santosh Kumar Sarkar
Abstract:
The Hooghly river estuary (HRE), situated in the north-eastern part of the Bay of Bengal, has global significance due to its holiness. It is of immense importance to the local population, as it provides a perpetual water supply for various activities such as transportation, fishing, boating and bathing to the people settled on both banks of the estuary. This study was done to assess dissolved and particulate trace metals in the estuary, covering a stretch of about 175 km. Water samples were collected from the surface (0-5 cm) along the salinity gradient, and metal concentrations were studied in both the dissolved and particulate phases using a Graphite Furnace Atomic Absorption Spectrophotometer (GF-AAS), along with some physical characteristics such as water temperature, salinity, pH, turbidity and total dissolved solids. Although significant spatial variation was noticed, only a little enrichment was found towards the downstream part of the estuary. The mean concentrations of the metals in the dissolved and particulate phases followed the same trend, as follows: Fe>Mn>Cr>Zn>Cu>Ni>Pb. The concentrations of the metals in the particulate phase were much greater than those in the dissolved phase, which was also reflected in the values of the partition coefficient Kd (ml mg⁻¹). The Kd values ranged from 1.5×10⁵ (for Pb) to 4.29×10⁶ (for Cr). The high value of Kd for Cr denoted that Cr is mostly bound to the suspended particulate matter, while the lowest value, for Pb, signified its greater presence in the dissolved phase. Moreover, the concentrations of all the studied metals in the dissolved phase were many-fold higher than their respective permissible limits set by WHO (2008, 2009 and 2011). On the other hand, according to the Sediment Quality Guidelines (SQGs), Zn, Cu and Ni in the particulate phase lay between the ERL and ERM values, but Cr exceeded the ERM value at all the stations, confirming that the estuary is mostly contaminated with particulate Cr, which might cause frequent adverse effects on aquatic life. Multivariate cluster analysis was also performed, which separated the stations according to the level of contamination from several point and non-point sources. Thus, it is found that the estuarine system is heavily polluted by toxic metals, and further investigation and toxicological studies should be implemented for a full risk assessment, better management and restoration of the water quality of this globally significant aquatic system.
Keywords: dissolved and particulate phase, Hooghly river estuary, partition coefficient, surface water, toxic metals
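The partition coefficient quoted above is the usual ratio of the particulate-bound to the dissolved concentration, which gives the ml mg⁻¹ units reported:

$$K_d\ (\mathrm{ml\ mg^{-1}}) = \frac{C_{\mathrm{particulate}}\ \left(\mathrm{mg\ of\ metal\ per\ mg\ of\ suspended\ matter}\right)}{C_{\mathrm{dissolved}}\ \left(\mathrm{mg\ of\ metal\ per\ ml\ of\ water}\right)}.$$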
Procedia PDF Downloads 279
179 Simulation of Ester Based Mud Performance through Drilling Genting Timur Field
Authors: Lina Ismail Jassim, Robiah Yunus
Abstract:
To drill an oil or gas well successfully, two main functions of an efficient drilling fluid, among its numerous other tasks, are required: suspending and carrying cuttings from the bottom of the wellbore to the surface, and managing the balance between the pore (formation) pressure and the hydrostatic (mud) pressure. Several factors, such as the mud composition and its rheology, the wellbore design, the drilled cuttings characteristics and the drill string rotation, contribute to drilling a wellbore successfully. A simulation model can provide an appropriate indication of drilling fluid performance in a real field such as the Genting Timur field, located in Pahang, Malaysia, at 4295 m depth, which held the world record with Sempah Muda 1 (vertical). A detailed three-dimensional CFD analysis of vertical, concentric annular two-phase flow was developed to study and assess a Herschel-Bulkley drilling fluid. The effect of hematite, barite and calcium carbonate cutting particle types and sizes on such flow is analyzed. The vertical flows are also associated with a considerable temperature variation along the depth, which causes a considerable change in the viscosity of the fluid, which is non-Newtonian in nature. A good understanding of the nature of such flows is imperative in developing and maintaining successful vertical well systems. A detailed analysis of the flow characteristics due to the drill pipe rotation is carried out in this work. The inner cylinder of the annulus rotates at different speeds, depending upon the operating conditions. This rotation induces a strong swirl on the particles and the primary fluid, which translates into the well-cleaning ability of the ester based drilling fluid and, in turn, determines the energy loss along the pipe. Energy loss is assessed in this work in terms of the wall shear stress and the pressure drop along the pipe. The flow is under an adverse pressure gradient condition, which creates the chance of reversed flow while the rock cuttings are transferred to the surface.
Keywords: concentric annulus, non-Newtonian, two phase, Herschel Bulkley
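The Herschel-Bulkley model named in the keywords relates shear stress to shear rate through a yield stress, a consistency index and a flow behaviour index (standard form, given here for reference):

$$\tau = \tau_0 + K\,\dot{\gamma}^{\,n} \quad (\tau > \tau_0),$$

where τ₀ is the yield stress, K the consistency index and n the flow behaviour index; below the yield stress the fluid does not flow, which is what allows the mud to suspend cuttings when circulation stops.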
Procedia PDF Downloads 308
178 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on the satisfaction of citizens' needs (one of the dimensions of QOL studies) by calculating the distance between the place where they live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that describes the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps a) and b) easily. In particular, step b) is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings coming from our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of the citizens with the centroid of the administrative unit of reference is of a few kilometers (4.9) at the municipality level, while it becomes conspicuous at the other two levels (28.9 and 36.1, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
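A hedged sketch of the kind of query this Geo DataBase enables: for each municipality, the greatest centroid-to-border distance, i.e. the maximum measurement error discussed above. The connection parameters and the assumption that the geometries are stored in a metric (projected) reference system are placeholders, not details from the paper:

```python
import psycopg2

conn = psycopg2.connect(dbname="italy_admin", user="gis", password="***", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT name,
               ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) / 1000.0
                 AS max_error_km
        FROM municipality
        ORDER BY max_error_km DESC
        LIMIT 10;
    """)
    for name, max_error_km in cur.fetchall():
        print(f"{name}: {max_error_km:.1f} km")
```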
Procedia PDF Downloads 371
177 Physiological Responses of Dominant Grassland Species to Different Grazing Intensity in Inner Mongolia, China
Authors: Min Liu, Jirui Gong, Qinpu Luo, Lili Yang, Bo Yang, Zihe Zhang, Yan Pan, Zhanwei Zhai
Abstract:
Grazing disturbance is one of the important land-use types affecting plant growth and ecosystem processes. In order to study the responses of dominant species to grazing in the semiarid temperate grassland of Inner Mongolia, we set up five grazing intensity plots, a control and four levels of grazing (light (LG), moderate (MG), heavy (HG) and extremely heavy grazing (EHG)), to test the morphological and physiological responses of Stipa grandis and Leymus chinensis at the individual level. With increasing grazing intensity, Stipa grandis and Leymus chinensis both exhibited reduced plant height, leaf area, stem length and aboveground biomass, showing a significant dwarfing phenomenon, especially in the HG and EHG plots. The photosynthetic capacity decreased along the grazing gradient. In the MG plot in particular, the two dominant species had the lowest net photosynthetic rate (Pn) and water use efficiency (WUE). However, in the HG and EHG plots, the two species had a high light saturation point (LSP) and a low light compensation point (LCP), indicating high light-use efficiency. They showed a stimulation of compensatory photosynthesis in the remnant leaves as compared with grasses in the MG plot. For Leymus chinensis, the lipid peroxidation level did not increase, with a low malondialdehyde (MDA) content even in the EHG plot. This may be due to the high activity of the enzymes superoxide dismutase (SOD) and peroxidase (POD), which reduces the damage caused by reactive oxygen species. Meanwhile, more carbohydrate was stored in the leaves of Leymus chinensis to provide energy for plant regrowth. In contrast, Stipa grandis showed a high level of lipid peroxidation, especially in the HG and EHG plots, with decreased antioxidant enzyme activity. The soluble protein content did not change significantly between the different plots. Therefore, with increasing grazing intensity, the plants changed their morphological and physiological traits to defend themselves effectively against herbivores. Leymus chinensis is more resistant to grazing than Stipa grandis in terms of tolerance traits, particularly under heavy grazing pressure.
Keywords: antioxidant enzymes activity, grazing density, morphological responses, photosynthesis
Procedia PDF Downloads 365
176 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-Averaged Navier-Stokes equations under the Boussinesq approximation. The temperature stratification dynamics influence the water quality and mixing characteristics. This is mainly due to the atmospheric conditions, including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small turbulence effects of temperature-variation-induced characteristics of shallow flows. Wind shear stress over the water surface influences the flow patterns, heat fluxes and thermodynamics of water bodies as well. Hence it is crucial to couple temperature gradients with the shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system has been introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated from the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shocks, contact waves and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes include the Lax-Friedrichs, Lax-Wendroff and MacCormack schemes and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. Temporal accuracy is achieved by employing the TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods. It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
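A hedged sketch of the simplest of these schemes, Lax-Friedrichs, applied to the one-dimensional Ripa system with conserved variables (h, hu, hθ) and pressure term gθh²/2; the Riemann-type initial data and parameters below are illustrative, not the paper's test cases:

```python
import numpy as np

g, nx, cfl, t_end = 9.81, 400, 0.45, 0.05
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx

# Riemann-type initial data: jumps in depth and temperature at x = 0.5
h = np.where(x < 0.5, 2.0, 1.0)
u = np.zeros(nx)
theta = np.where(x < 0.5, 1.0, 1.5)
U = np.vstack([h, h * u, h * theta])          # conserved variables (h, hu, h*theta)

def flux(U):
    h, hu, hth = U
    u, th = hu / h, hth / h
    return np.vstack([hu, hu * u + 0.5 * g * th * h**2, hu * th])

t = 0.0
while t < t_end:
    h, u, th = U[0], U[1] / U[0], U[2] / U[0]
    c = np.sqrt(g * th * h)                   # gravity-wave speed of the Ripa system
    dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)
    F = flux(U)
    # Lax-Friedrichs update on interior cells, simple outflow boundaries
    U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    U[:, 0], U[:, -1] = U[:, 1], U[:, -2]
    t += dt

print("depth range after t =", t_end, ":", U[0].min(), U[0].max())
```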
Procedia PDF Downloads 203
175 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept
Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua
Abstract:
Rivers are our main source of water; river flow is a form of open channel flow, and flow in an open channel presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of many sediments and specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on a mechanical analysis. The channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with that of the Divided Channel Method (DCM). From an error analysis of this method, it is observed that the relative error is lower for the data sets having smooth floodplains than for rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel
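For context, the Divided Channel Method used as a benchmark here typically sums Manning-type discharges over the subsections separated at the bank-full level (a standard form, not the authors' modified formulation):

$$Q_{\mathrm{DCM}} = \sum_{i}\frac{1}{n_i}\,A_i\,R_i^{2/3}\,S^{1/2},$$

where, for each subsection i, nᵢ is Manning's roughness coefficient, Aᵢ the flow area, Rᵢ the hydraulic radius and S the energy (bed) slope.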
Procedia PDF Downloads 126
174 Mg Doped CuCrO₂ Thin Oxides Films for Thermoelectric Properties
Authors: I. Sinnarasa, Y. Thimont, L. Presmanes, A. Barnabé
Abstract:
Thermoelectricity is a promising technique for recovering waste heat as electricity without using moving parts. The thermoelectric (TE) effect is defined as the conversion of a temperature gradient directly into electricity and vice versa. To optimize TE materials, the power factor (PF = σS², where σ is the electrical conductivity and S the Seebeck coefficient) must be increased by adjusting the carrier concentration, and/or the lattice thermal conductivity Kₜₕ must be reduced by introducing scattering centers with point defects, interfaces, and nanostructuration. The PF alone does not show the advantage of thin films because it does not take the thermal conductivity into account. In general, the thermal conductivity of a thin film is lower than that of the bulk material due to its microstructure and the increasing scattering effects with decreasing thickness. Delafossite-type oxides CuᴵMᴵᴵᴵO₂ have received much attention for their optoelectronic properties as p-type semiconductors; they also exhibit interesting thermoelectric (TE) properties due to their high electrical conductivity and their stability in room atmosphere. As there are few proper studies on the TE properties of Mg-doped CuCrO₂ thin films, we have investigated the influence of the annealing temperature on the electrical conductivity and the Seebeck coefficient of Mg-doped CuCrO₂ thin films and calculated the PF in the temperature range from 40 °C to 220 °C. For this purpose, we have deposited Mg-doped CuCrO₂ thin films on fused silica substrates by RF magnetron sputtering. This study was carried out on 300 nm thin films. The as-deposited Mg-doped CuCrO₂ thin films have been annealed at different temperatures (from 450 to 650 °C) under primary vacuum. The electrical conductivity and Seebeck coefficient of the thin films have been measured from 40 to 220 °C. The highest electrical conductivity of 0.60 S.cm⁻¹, with a Seebeck coefficient of +329 µV.K⁻¹ at 40 °C, has been obtained for the sample annealed at 550 °C. The calculated power factor of the optimized CuCrO₂:Mg thin film was 6 µW.m⁻¹K⁻² at 40 °C. Due to the constant Seebeck coefficient and the electrical conductivity increasing with temperature, it reached 38 µW.m⁻¹K⁻² at 220 °C, which is quite a good result for an oxide thin film. Moreover, the degenerate behavior and the hopping mechanism of the CuCrO₂:Mg thin film were elucidated. Their high and constant Seebeck coefficient over temperature and their stability in room atmosphere could be a great advantage for the application of this material in high-accuracy temperature measurement devices.
Keywords: thermoelectric, oxides, delafossite, thin film, power factor, degenerated semiconductor, hopping mode
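As a quick check, the quoted power factor follows directly from the measured transport coefficients (0.60 S·cm⁻¹ = 60 S·m⁻¹):

$$PF = \sigma S^2 = \left(60\ \mathrm{S\,m^{-1}}\right)\times\left(329\times10^{-6}\ \mathrm{V\,K^{-1}}\right)^2 \approx 6.5\ \mu\mathrm{W\,m^{-1}K^{-2}},$$

consistent with the ~6 µW·m⁻¹K⁻² reported at 40 °C.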
Procedia PDF Downloads 199173 The Effects of Subjective and Objective Indicators of Inequality on Life Satisfaction in a Comparative Perspective Using a Multi-Level Analysis
Authors: Atefeh Bagherianziarat, Dana Hamplova
Abstract:
The inverse social gradient in life satisfaction (LS) is a well-established research finding. To estimate the influence of inequality on LS, most studies have explored the effect of the objective aspects of inequality, or individuals’ socioeconomic status (SES). Relatively fewer studies have recently confirmed a significant effect of the subjective aspect of inequality, or subjective socioeconomic status (SSS), on life satisfaction over and above SES. In other words, some studies confirm that individuals’ perception of their unequal status in society (SSS) can moderate the impact of their absolute unequal status on their life satisfaction. Nevertheless, it has not been established that this newly confirmed moderating link works in the same way in societies with different levels of social inequality, or for people who endorse the value of equality to different degrees. In this study, we compared the moderating influence of subjective inequality on the link between objective inequality and life satisfaction. In particular, we focus on differences across welfare state regimes based on Esping-Andersen's theory. We also explored the moderating role of belief in the value of equality on the link between objective and subjective inequality and LS in the given societies. Since the studied variables were measured at both the individual and country levels, we applied a multilevel analysis to the European Social Survey data (round 9). The results showed that people in different regimes reported statistically meaningful differences in life satisfaction, which are explained to different extents by their household income and their perception of their income inequality. The findings support previous evidence of a moderating influence of perceived inequality on the link between objective inequality and LS; however, this link differs across welfare state regimes. The multilevel modeling showed that country-level subjective equality is a positive predictor of individuals’ life satisfaction, while the Gini coefficient, taken as the indicator of absolute inequality, has a smaller effect on life satisfaction. Country-level subjective equality also moderates the established link between individuals’ income and their life satisfaction. It can be concluded that both individual- and country-level subjective inequality slightly moderate the effect of individuals’ income on their life satisfaction.Keywords: individual values, life satisfaction, multilevel analysis, objective inequality, subjective inequality, welfare regimes status
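For readers unfamiliar with the modeling setup, the sketch below shows the general shape of such a two-level random-intercept model with a cross-level interaction, fitted with statsmodels on synthetic data; the variable names and the data are hypothetical and do not reproduce the ESS round 9 analysis.

```python
# Minimal sketch (not the authors' model) of a two-level random-intercept model:
# individual life satisfaction regressed on household income, with a cross-level
# interaction with country-level subjective equality.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_per_country = 20, 200
country = np.repeat(np.arange(n_countries), n_per_country)
country_subj_eq = np.repeat(rng.normal(0, 1, n_countries), n_per_country)   # country-level predictor
income = rng.normal(0, 1, n_countries * n_per_country)                       # individual-level predictor
country_intercept = np.repeat(rng.normal(0, 0.5, n_countries), n_per_country)
life_sat = (6.5 + 0.4 * income + 0.3 * country_subj_eq
            - 0.1 * income * country_subj_eq                                  # cross-level interaction
            + country_intercept + rng.normal(0, 1, income.size))

df = pd.DataFrame(dict(life_sat=life_sat, income=income,
                       country_subj_eq=country_subj_eq, country=country))

# Random intercept by country; fixed effects include the cross-level interaction
model = smf.mixedlm("life_sat ~ income * country_subj_eq", df, groups=df["country"])
result = model.fit()
print(result.summary())
```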
Procedia PDF Downloads 98172 Latitudinal Impact on Spatial and Temporal Variability of 7Be Activity Concentrations in Surface Air along Europe
Authors: M. A. Hernández-Ceballos, M. Marín-Ferrer, G. Cinelli, L. De Felice, T. Tollefsen, E. Nweke, P. V. Tognoli, S. Vanzo, M. De Cort
Abstract:
This study analyses the latitudinal impact on the spatial and temporal distribution of the cosmogenic isotope 7Be in surface air across Europe. The long-term database of six sampling sites (Ivalo, Helsinki, Berlin, Freiburg, Sevilla and La Laguna) that regularly provide data to the Radioactivity Environmental Monitoring (REM) network managed by the Joint Research Centre (JRC) in Ispra was used. The stations were selected according to two factors: 1) heterogeneity in terms of latitude and altitude, and 2) long database coverage. The combination of these two parameters ensures a high degree of representativeness of the results. Regarding the latter, the temporal coverage varies between stations; the present study uses stations with data more or less continuous from 1984 to 2011. The mean 7Be activity concentrations ranged from 2.0 ± 0.9 mBq/m3 (Ivalo, north) to 4.8 ± 1.5 mBq/m3 (La Laguna, south), and an increasing gradient from north to south (0.06 mBq/m3) was observed. However, there was no correlation with altitude, since all stations are sited within the atmospheric boundary layer. The analyses of the data indicated a dynamic range of 7Be activity with the solar cycle and its phase (maximum or minimum), with a different impact on stations according to their location. The results indicated a significant seasonal behavior, with the maximum concentrations occurring in summer and the minimum in winter, although with differences in the values reached and in the month in which they are registered. Because of the large heterogeneity in the temporal pattern with which the individual radionuclide analyses were performed at each station, a 7Be monthly index was calculated to normalize the measurements and allow a direct comparison of the monthly evolution among stations. Different intensities and evolutions of the mean monthly index were observed. Knowledge of the spatial and temporal distribution of this natural radionuclide in the atmosphere is a key parameter for modeling studies of atmospheric processes, which are important phenomena to be taken into account in the case of a nuclear accident.Keywords: Beryllium-7, latitudinal impact in Europe, seasonal and monthly variability, solar cycle
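One plausible way to build such a monthly index, sketched below on synthetic data, is to normalize monthly means by the station's long-term mean so that stations with different sampling schedules become directly comparable; the exact index definition used in the study is not given in the abstract, so this construction is an assumption.

```python
# Hedged sketch of a monthly index for irregular 7Be records: monthly means
# normalized by the station's long-term mean. The data below are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("1984-01-01", "2011-12-31", freq="7D")        # hypothetical weekly sampling
activity = (3.0
            + 1.2 * np.sin(2 * np.pi * (dates.dayofyear / 365.25 - 0.25))  # synthetic seasonal signal
            + rng.normal(0, 0.4, dates.size))                              # mBq/m^3

series = pd.Series(activity, index=dates)
monthly_mean = series.groupby(series.index.month).mean()
monthly_index = monthly_mean / series.mean()                         # dimensionless monthly index

print(monthly_index.round(2))
```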
Procedia PDF Downloads 337171 Delineation of Subsurface Tectonic Structures Using Gravity, Magnetic and Geological Data, in the Sarir-Hameimat Arm of the Sirt Basin, NE Libya
Authors: Mohamed Abdalla Saleem, Hana Ellafi
Abstract:
The study area is located in the eastern part of the Sirt Basin, in the Sarir-Hameimat arm of the basin, south of the Amal High. The area covers the northern part of the Hameimat Trough and the Rakb High. All of these tectonic elements are part of the major structures created when the old Sirt Arch collapsed, and most of them trend NW-SE. This study was conducted to investigate the subsurface structures and the sedimentological characterization of the area and to attempt to define its tectonic and stratigraphic development. About 7600 land gravity measurements, 22,500 gridded magnetic data points, and petrographic core data from several wells were used to investigate the subsurface structural features both vertically and laterally. A third-order separation of the regional trends from the original Bouguer gravity data was chosen. The residual gravity map reveals a significant number of high anomalies distributed across the area, separated by a group of thick sediment centers. The reduced-to-the-pole magnetic map shows nearly the same major trends and anomalies. Applying further interpretation filters reveals that these high anomalies are sourced from different depth levels; some are deep-rooted, and others are igneous bodies intruded within the sedimentary layers. The petrographic sedimentology study of some wells in the area confirmed the presence of these igneous bodies and identified their composition as most likely gabbro hosted by marine shale layers. Depth investigation of these anomalies by the average depth spectrum shows that the average basement depth is about 7.7 km, while the top of the intrusions lies at about 2.65 km and some near-surface magnetic sources at about 1.86 km. The depths and locations of the magnetic anomalies were estimated specifically using the 3D Euler deconvolution technique, and the obtained results suggest that the maximum source depth is about 4938 m. The total horizontal gradient of the magnetic data shows that the trends mostly extend NW-SE, others NE-SW, and a third group N-S. This variety in trend direction shows that the area experienced different tectonic regimes throughout its geological history.Keywords: sirt basin, tectonics, gravity, magnetic
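The total horizontal gradient filter mentioned above is straightforward to reproduce on a gridded field, as in the hedged Python sketch below; the grid, spacing, and anomaly are synthetic and illustrative only, and the Euler deconvolution and spectral depth steps are not reproduced.

```python
# Minimal sketch of the total horizontal gradient (THG) of a gridded magnetic anomaly:
# THG = sqrt((dT/dx)^2 + (dT/dy)^2); its maxima outline source edges.
import numpy as np

nx, ny, spacing = 200, 200, 500.0                      # hypothetical 500 m grid spacing
x = np.arange(nx) * spacing
y = np.arange(ny) * spacing
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic anomaly: an elongated body represented by a rotated Gaussian (nT)
xc, yc = x.mean(), y.mean()
u = (X - xc) * np.cos(np.pi / 4) + (Y - yc) * np.sin(np.pi / 4)
v = -(X - xc) * np.sin(np.pi / 4) + (Y - yc) * np.cos(np.pi / 4)
T = 300.0 * np.exp(-(u / 15000.0) ** 2 - (v / 4000.0) ** 2)

dT_dx, dT_dy = np.gradient(T, spacing, spacing)        # derivatives along the two grid axes
thg = np.hypot(dT_dx, dT_dy)                           # nT/m
print(f"max THG: {thg.max():.4f} nT/m")
```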
Procedia PDF Downloads 65170 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique and, second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, both Q and E were determined by applying the first derivative, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in the defined range, was used to validate the proposed network. No chemical separation, preparation stage or mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and requires no analyst intervention for its application. Although the ANN technique needs a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
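The zero-crossing first-derivative idea can be illustrated numerically as below: differentiate the mixture spectrum and read the D1 amplitude at the wavelength where the other component's derivative crosses zero (285 nm for Q and 235 nm for E in this work); the Gaussian band shapes in the sketch are hypothetical stand-ins, not measured Q/E spectra.

```python
# Illustrative sketch of the zero-crossing first-derivative reading on synthetic spectra.
import numpy as np

wavelengths = np.arange(230.0, 300.5, 0.5)              # nm

def gaussian_band(wl, center, width, absorbance):
    return absorbance * np.exp(-((wl - center) / width) ** 2)

# Hypothetical band shapes standing in for coenzyme Q10 (Q) and vitamin E (E)
spectrum_q = gaussian_band(wavelengths, 275.0, 12.0, 0.60)
spectrum_e = gaussian_band(wavelengths, 255.0, 15.0, 0.45)
mixture = spectrum_q + spectrum_e

d1 = np.gradient(mixture, wavelengths)                   # first-derivative spectrum of the mixture

def amplitude_at(wl_target):
    """D1 amplitude of the mixture at the requested wavelength."""
    return d1[np.argmin(np.abs(wavelengths - wl_target))]

print(f"D1 amplitude read for Q at 285 nm: {amplitude_at(285.0):+.4f}")
print(f"D1 amplitude read for E at 235 nm: {amplitude_at(235.0):+.4f}")
```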
Procedia PDF Downloads 446169 Magnetic Biomaterials for Removing Organic Pollutants from Wastewater
Authors: L. Obeid, A. Bee, D. Talbot, S. Abramson, M. Welschbillig
Abstract:
The adsorption process is one of the most efficient methods for removing pollutants from wastewater, provided that suitable adsorbents are used. In order to produce environmentally safe adsorbents, natural polymers have received increasing attention in recent years; thus, alginate and chitosan are extensively used as inexpensive, non-toxic and efficient biosorbents. Alginate is an anionic polysaccharide extracted from brown seaweeds. Chitosan is an amino-polysaccharide; this cationic polymer is obtained by deacetylation of chitin, the major constituent of crustacean shells. Furthermore, it has been shown that the encapsulation of magnetic materials in alginate and chitosan beads facilitates their recovery from wastewater after the adsorption step through the use of an external magnetic field gradient obtained with a magnet or an electromagnet. In the present work, we have studied the adsorption affinity of magnetic alginate beads and magnetic chitosan beads (called magsorbents) for methyl orange (MO, an anionic dye), methylene blue (MB, a cationic dye) and p-nitrophenol (PNP, a hydrophobic pollutant). The effect of different parameters (solution pH, contact time, initial pollutant concentration, etc.) on the adsorption of the pollutants on the magnetic beads was investigated. The adsorption of anionic and cationic pollutants is mainly due to electrostatic interactions; consequently, methyl orange is highly adsorbed by chitosan beads in acidic medium and methylene blue by alginate beads in basic medium. In the case of a hydrophobic pollutant, which is weakly adsorbed, we have shown that adsorption is enhanced by adding a surfactant. Cetylpyridinium chloride (CPC), a cationic surfactant, was used to increase the adsorption of PNP by magnetic alginate beads. Adsorption of CPC by alginate beads occurs through two mechanisms: (i) electrostatic attraction between the cationic head groups of CPC and the negative carboxylate functions of alginate; (ii) interaction between the hydrocarbon chains of CPC. The hydrophobic pollutant is adsolubilized within the surface aggregated structures of the surfactant, and PNP adsorption can reach up to 95% in the presence of CPC. At the highest CPC concentrations, desorption occurs due to the formation of micelles in the solution. Our magsorbents thus appear to efficiently remove ionic and hydrophobic pollutants, and we hope that this fundamental research will be helpful for the future development of magnetically assisted processes in water treatment plants.Keywords: adsorption, alginate, chitosan, magsorbent, magnetic, organic pollutant
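Behind figures such as the 95% PNP removal lies the standard batch-adsorption bookkeeping sketched below: percent removal and adsorption capacity q_e computed from initial and equilibrium concentrations. All concentrations, volumes, and masses in the sketch are hypothetical.

```python
# Standard batch-adsorption quantities from initial (C0) and equilibrium (Ce) concentrations.
def removal_percent(c0_mg_l, ce_mg_l):
    """Percent of pollutant removed from solution."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, adsorbent_mass_g):
    """q_e in mg of pollutant per g of magsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / adsorbent_mass_g

c0, ce = 50.0, 2.5          # mg/L before and at equilibrium (illustrative)
volume, mass = 0.05, 0.10   # 50 mL of solution, 100 mg of beads (illustrative)

print(f"removal: {removal_percent(c0, ce):.1f} %")          # 95.0 %
print(f"q_e: {adsorption_capacity(c0, ce, volume, mass):.2f} mg/g")
```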
Procedia PDF Downloads 257168 Understanding the Processwise Entropy Framework in a Heat-powered Cooling Cycle
Authors: P. R. Chauhan, S. K. Tyagi
Abstract:
Adsorption refrigeration technology offers a sustainable and energy-efficient cooling alternative to traditional refrigeration technologies for meeting fast-growing cooling demands. With its ability to use natural refrigerants, low-grade heat sources, and modular configurations, it has the potential to revolutionize the cooling industry. Despite these benefits, the commercial viability of this technology is hampered by several fundamental constraints, including its large size, low uptake capacity, and poor performance resulting from deficient heat and mass transfer characteristics. The primary causes of these deficient heat and mass transfer characteristics and the magnitude of the exergy loss in the various real processes of an adsorption cooling system can be assessed by entropy generation rate analysis, i.e., the second law of thermodynamics. Therefore, this article presents a second-law-based investigation in terms of the entropy generation rate (EGR) to identify the energy losses in the various processes of the HPCC-based adsorption system, using MATLAB R2021b software. The adsorption-technology-based cooling system consists of two beds made of silica gel arranged in a single stage, while water is employed as refrigerant, coolant, and hot fluid. The variation in process-wise EGR is examined over the cycle time, and a comparative analysis is presented. Moreover, the EGR is also evaluated in the external units, i.e., the heat source and heat sink units used for regeneration and heat rejection, respectively. The research findings reveal that the combination of adsorber and desorber, which operates across heat reservoirs with a higher temperature gradient, accounts for more than half of the total EGR. The EGR caused by the heat transfer process is found to be the highest, followed by the heat sink, heat source, and mass transfer, respectively. Within the heat transfer process, the operation of the valve is responsible for more than half (54.9%) of the overall EGR. The combined contribution of the external units, the source (18.03%) and the sink (21.55%), to the total EGR is 35.59%. The analysis and findings of the present research are expected to pinpoint the sources of energy waste in HPCC-based adsorption cooling systems.Keywords: adsorption cooling cycle, heat transfer, mass transfer, entropy generation, silica gel-water
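The process-wise EGR bookkeeping can be illustrated with the textbook expression S_gen = Q(1/T_cold - 1/T_hot) for steady heat transfer between reservoirs, as in the sketch below; the heat duties and temperatures are hypothetical and only illustrate why the adsorber/desorber pair, spanning the largest temperature gap, tends to dominate the total EGR. They are not the MATLAB model results.

```python
# Entropy generation rate for steady heat transfer between two reservoirs.
def entropy_generation_rate(q_watts, t_hot_k, t_cold_k):
    """S_gen (W/K) for heat duty Q flowing from t_hot to t_cold."""
    return q_watts * (1.0 / t_cold_k - 1.0 / t_hot_k)

processes = {
    # name: (heat duty W, hot-side K, cold-side K) -- illustrative values only
    "desorber (heat source -> bed)": (1500.0, 363.0, 338.0),
    "adsorber (bed -> coolant)":     (1400.0, 318.0, 303.0),
    "condenser":                     (900.0, 313.0, 303.0),
    "evaporator":                    (800.0, 288.0, 283.0),
}

total = 0.0
for name, (q, t_hot, t_cold) in processes.items():
    s_gen = entropy_generation_rate(q, t_hot, t_cold)
    total += s_gen
    print(f"{name:32s} S_gen = {s_gen:.3f} W/K")
print(f"{'total':32s} S_gen = {total:.3f} W/K")
```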
Procedia PDF Downloads 107167 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique that employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches that exploit well-assessed physical models and build neural networks upon them, integrating the available data. In particular, since regularization procedures are mandatory in this context to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form $q^* = \arg\min_q D(y, \tilde{y})$ (1), where D is a loss function that typically contains a discrepancy measure (or data-fidelity) term plus other possible ad-hoc terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q given the set of observations y. Here $\tilde{y}$ is the computable approximation of y, which may be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder-type networks; this procedure forms a prior on the solution (Fig. 1); ii) we use regularization procedures of the type $\hat{q}^* = \arg\min_q D(y, \tilde{y}) + R(q)$, where $R(q)$ is a regularization functional depending on regularization parameters that can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
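As a point of reference for the optimization structure in (1) and its regularized form, the sketch below solves a purely classical toy instance: a fixed linear operator A stands in for the PDE-based forward model, D is a least-squares data-fidelity term and R a Tikhonov penalty, minimized by gradient descent; the neural-network ingredients of the actual framework (auto-decoder priors, learned regularization, physics penalties) are not reproduced.

```python
# Toy regularized inverse problem: q_hat = argmin_q 0.5*||A q - y||^2 + 0.5*lam*||q||^2,
# solved by plain gradient descent. A stands in for the forward (PDE) model.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_coeff = 80, 120                            # underdetermined, like DOT
A = rng.normal(0, 1, (n_obs, n_coeff)) / np.sqrt(n_obs)
q_true = np.zeros(n_coeff)
q_true[40:60] = 1.0                                 # piecewise "inclusion" in the optical coefficients
y = A @ q_true + 0.01 * rng.normal(0, 1, n_obs)     # noisy measurements

lam, step, n_iter = 1e-2, 0.1, 2000
q = np.zeros(n_coeff)
for _ in range(n_iter):
    grad = A.T @ (A @ q - y) + lam * q              # gradient of the regularized objective
    q -= step * grad

rel_err = np.linalg.norm(q - q_true) / np.linalg.norm(q_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```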
Procedia PDF Downloads 74