Search results for: Large Eddy Simulation
8476 Environmental Radioactivity Analysis by a Sequential Approach
Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab
Abstract:
Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and to assess the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples could help to get around this difficulty but, unfortunately, raises new issues of gamma ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, in a short time and without quantification, gamma rays from a low-count source. This method does not require the pulse height spectrum acquisition usually adopted in gamma spectrometry measurements. It is based on a chronological record of each detected photon obtained by simultaneously measuring its energy ε and its arrival time τ at the detector, the parameter pair [ε, τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main objective of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for appropriate health oversight of the public and of the concerned workers, the analysis has been extended to obtain a reliable quantification of the radionuclides present in environmental samples. As an illustration, we consider the problem of detecting and quantifying 238U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by a Ge(HP) semiconductor detector, of the 63 keV gamma rays emitted by 234Th (progeny of 238U). The generated EMS series are analyzed by Bayesian inference.
The application of the sequential Bayesian approach to environmental radioactivity analysis offers the possibility of reducing the measurement time without requiring large environmental samples, and consequently avoids the associated inconveniences. The work is still in progress.
Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method
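The sequential analysis of EMS data can be illustrated with a Wald-type sequential probability-ratio test over the recorded [ε, τ] pairs. This is only a minimal sketch of the idea, not the authors' actual inference scheme; the rates, energy window, and threshold below are illustrative assumptions.

```python
import math

def sequential_test(events, rate_bg, rate_src, window, ln_upper=math.log(99.0)):
    """Sequential probability-ratio test over an event mode sequence.

    events   : list of (energy_keV, arrival_time_s) pairs, time-ordered
    rate_bg  : assumed background count rate in the energy window (counts/s)
    rate_src : additional count rate if the source is present (counts/s)
    window   : (lo, hi) energy window around the line of interest, in keV
    Returns (decision, n_events): decision becomes True as soon as the
    log-likelihood ratio for 'source present' exceeds ln_upper.
    """
    lo, hi = window
    llr = 0.0          # running log-likelihood ratio
    prev_t = 0.0
    for n, (energy, t) in enumerate(events, start=1):
        if not (lo <= energy <= hi):
            continue                     # photon outside the line of interest
        dt = t - prev_t
        prev_t = t
        # Likelihood ratio of one inter-arrival time dt under two homogeneous
        # Poisson models: rate_bg + rate_src (source) versus rate_bg (no source).
        r1, r0 = rate_bg + rate_src, rate_bg
        llr += math.log(r1 / r0) - (r1 - r0) * dt
        if llr >= ln_upper:
            return True, n               # enough evidence: stop early
    return False, len(events)
```

With a strong source the test stops after a handful of photons, which is the measurement-time saving the abstract points to.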
Procedia PDF Downloads 501
8475 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels increase abruptly, especially in winter, one can expect massive extra impact loads on a textile protective structure from floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact provides the motivation for the numerical analysis of a fluid-structure interaction application comprising a flexible slender-shaped membrane and free-surface water flow, where an accelerated heavy flotsam approaches the membrane. In this context, the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the finite-element-based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. The response of the free-surface water flow to moving structures, in turn, has been investigated using the finite volume solver of Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be interconnected robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criteria in the fluid domain where the two structures approach each other. This contribution presents the approaches used to establish a convergent numerical solution and compares the results with experimental findings.
Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
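The implicit partitioned coupling described above can be sketched, in scalar form, as an under-relaxed fixed-point iteration between a fluid solver and a structural solver that exchange interface data once per coupling iteration. This is a schematic of the coupling idea only; it does not use the CSE API, and the toy solver callbacks in the usage example are invented for illustration.

```python
def implicit_coupling_step(fluid_solve, solid_solve, d0,
                           tol=1e-8, max_iter=50, relax=0.5):
    """One time step of an implicit partitioned FSI coupling, written as a
    Gauss-Seidel fixed-point iteration with constant under-relaxation.

    fluid_solve(d) -> interface load for a given interface displacement d
    solid_solve(f) -> interface displacement for a given interface load f
    The step is converged when the displacement update becomes negligible.
    """
    d = d0
    for _ in range(max_iter):
        f = fluid_solve(d)            # fluid domain sees the current interface
        d_new = solid_solve(f)        # structure responds to the fluid load
        if abs(d_new - d) < tol:
            return d_new              # implicit (converged) coupling step
        d = d + relax * (d_new - d)   # under-relaxation stabilises the loop
    return d
```

The under-relaxation factor is what buys stability for strongly coupled problems like this one, at the cost of more coupling iterations per time step.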
Procedia PDF Downloads 390
8474 Orthogonal Basis Extreme Learning Algorithm and Function Approximation
Abstract:
A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can decide both the network parameters and the number of hidden neurons during training while providing extremely fast learning speed, offering a practical way to develop neural networks. Simulation results for function approximation showed that the algorithm is effective and feasible, with good accuracy and adaptability.
Keywords: neural network, orthogonal basis extreme learning, function approximation
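The core mechanics of an extreme-learning-style fit, with the hidden-layer output matrix orthogonalised before the least-squares solve, can be sketched as below. This is a loose illustration of the idea (random, untrained hidden weights plus an orthogonal basis for the output solve), not the paper's OBEL derivation, and it does not grow the hidden layer automatically.

```python
import numpy as np

def obel_like_fit(X, y, n_hidden, seed=0):
    """Fit a single hidden layer feedforward network extreme-learning style:
    hidden weights are random and never trained; the hidden output matrix H
    is orthogonalised by a QR decomposition so the output weights come from
    a well-conditioned triangular solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden-layer outputs
    Q, R = np.linalg.qr(H)                            # orthogonal basis of H
    beta = np.linalg.solve(R, Q.T @ y)                # least-squares output weights
    return W, b, beta

def obel_like_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single least-squares solve replacing gradient descent is what makes this family of algorithms fast.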
Procedia PDF Downloads 540
8473 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes with an efficient compromise between cost and technical considerations. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic sectors, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information generated by the sensors is transferred to a computing device with adequate interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either sense the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion.
A damped response of the virtual platform can be seen in other images, where error analysis shows an underestimation of amplitude at maximum amplitude and an overestimation at minimum amplitude. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility trade-offs optimized. These characteristics, achieved by efficiently using state-of-the-art accessible generic sensor and hardware technology together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but basic communication networks are.
Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
Procedia PDF Downloads 262
8472 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases
Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner
Abstract:
Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions so as to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once these are approved by the competent authority (in the UK, the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This approach adheres to basic safety principles, is very pessimistic, and is the current practice in the demonstration of transport safety accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that the intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages.
This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space, because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations
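Keeping only those two mechanisms, the dynamic aerosol balance for the airborne number concentration N reduces to dN/dt = -K N² - (v_s/H) N, with K a Brownian coagulation coefficient, v_s a settling velocity, and H the fall height. The sketch below integrates this with a forward-Euler step; all coefficient values in the usage example are illustrative assumptions, not figures from the safety case.

```python
def aerosol_number_decay(n0, k_coag, v_settle, height, t_end, dt=1.0):
    """Forward-Euler integration of the reduced dynamic aerosol balance
        dN/dt = -K * N**2 - (v_s / H) * N
    where the quadratic term is Brownian coagulation (K in m^3/s) and the
    linear term is gravitational settling (v_s in m/s) over a fall height
    H (m). Returns the airborne number concentration at time t_end (s)."""
    n = n0
    t = 0.0
    while t < t_end:
        n += dt * (-k_coag * n * n - (v_settle / height) * n)
        t += dt
    return n
```

The airborne (hence releasable) fraction N(t)/N0 falls monotonically, which is the pessimism-preserving reduction argued for above.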
Procedia PDF Downloads 121
8471 A Dynamical Study of Fractional Order Obesity Model by a Combined Legendre Wavelet Method
Authors: Hakiki Kheira, Belhamiti Omar
Abstract:
In this paper, we propose a new compartmental fractional order model for the simulation of epidemic obesity dynamics. Using the Legendre wavelet method combined with the decoupling and quasi-linearization technique, we demonstrate the validity and applicability of our model. We also present some illustrative fractional differential examples to demonstrate the applicability and efficiency of the method. The fractional derivative is described in the Caputo sense.
Keywords: Caputo derivative, epidemiology, Legendre wavelet method, obesity
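For reference, the Caputo fractional derivative of order α (with n - 1 < α < n, n an integer) used here is defined by

```latex
{}^{C}\!D^{\alpha}_{t}\, f(t)
  \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
        \frac{f^{(n)}(\tau)}{(t-\tau)^{\,\alpha-n+1}} \, \mathrm{d}\tau ,
  \qquad n-1 < \alpha < n ,
```

so for 0 < α < 1 it is a weighted integral of f'(τ), which is why initial conditions for the compartmental model keep their usual integer-order form.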
Procedia PDF Downloads 424
8470 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints
Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich
Abstract:
Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction to minimize carbon emissions during transportation for a sustainable life. This study focuses on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN AW-6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80 °C hot water. It was found that processing single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope; at higher temperatures they were larger in size but fewer in number. An induction heating process was developed, which not only successfully reduced or eliminated the voids in PA6 bonded joints but also significantly reduced the processing times for joining. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to increased formation of voids in the bond line. Subsequent single lap shear tests revealed that the increasing void content led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). Also, there was a 17% reduction in lap shear strength when the consolidation temperature was raised from 220 °C to 300 °C during induction heating.
However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.
Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void
Procedia PDF Downloads 126
8469 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches by means of a simulation study based on pertinent data and evaluate the simulation outcomes. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce better performance than GA and SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to the transportation of multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality.
This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while keeping quality above the expected levels at delivery. The scalability of the proposed generic function enables application to alternative practical situations such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
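The "SA with a repair mechanism" benchmark can be sketched generically: every neighbouring candidate is passed through a repair function that restores feasibility before its cost is evaluated. The skeleton below is a minimal illustration under assumed interfaces (the cost, neighbour, and repair callbacks in the usage example are invented), not the paper's tuned procedure.

```python
import math
import random

def simulated_annealing(cost, neighbour, repair, x0,
                        t0=1.0, cooling=0.95, steps=500, seed=0):
    """Simulated annealing with a repair step: each candidate produced by
    `neighbour` is mapped back to the feasible region by `repair` before
    evaluation, so infeasible transport plans are never costed."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = repair(neighbour(x, rng))       # repair keeps candidates feasible
        fc = cost(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                           # geometric cooling schedule
    return best, fbest
```

The repair step is the design choice that distinguishes this variant: it keeps the search inside the feasible region instead of penalising constraint violations in the cost function.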
Procedia PDF Downloads 227
8468 Peak Shaving in Microgrids Using Hybrid Storage
Authors: Juraj Londák, Radoslav Vargic, Pavol Podhradský
Abstract:
In this contribution, we focus on the technical and economic aspects of using hybrid storage in microgrids for peak shaving. We perform a feasibility analysis of hybrid storage consisting of conventional supercapacitors and chemical batteries. We use multiple real-life consumption profiles from various industry-oriented microgrids. The primary purpose is to construct a digital twin model for reserved capacity simulation and prediction. The main objective is to find the equilibrium between technical innovations, acquisition costs, and energy cost savings.
Keywords: microgrid, peak shaving, energy storage, digital twin
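A digital-twin style reserved-capacity simulation can be reduced to a simple dispatch rule: discharge storage when the load exceeds the contracted peak limit, recharge below it. The sketch below uses a single lumped store and ignores losses, power limits, and the supercapacitor/battery split, so it illustrates the peak-shaving logic only; the numbers in the usage example are invented.

```python
def peak_shave(load_kw, limit_kw, capacity_kwh, dt_h=1.0):
    """Greedy peak-shaving dispatch for one lumped storage unit.

    Discharges whenever the load exceeds the contracted limit and recharges
    with the available headroom below it. Returns the power profile actually
    drawn from the grid (kW per interval of length dt_h hours)."""
    soc = capacity_kwh                       # state of charge; start full
    grid = []
    for load in load_kw:
        if load > limit_kw:                  # shave the peak from storage
            discharge = min(load - limit_kw, soc / dt_h)
            soc -= discharge * dt_h
            grid.append(load - discharge)
        else:                                # recharge using the headroom
            charge = min(limit_kw - load, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            grid.append(load + charge)
    return grid
```

In a hybrid system the supercapacitor would absorb the short power spikes and the battery the sustained excess; collapsing both into one store is the simplification made here.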
Procedia PDF Downloads 165
8467 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
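The core of truncated SVD regularisation can be shown in a few lines: the solution is expanded in singular vectors and the expansion is cut off after the k largest singular values, so noise in directions with tiny singular values is never amplified. A bare-bones sketch, with the hard part (choosing k, let alone the paper's triple of parameters) fixed by hand:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of the ill-posed system A x ~ b: keep only
    the k largest singular values in the pseudo-inverse expansion."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros(A.shape[1])
    for i in range(min(k, s.size)):
        x += (U[:, i] @ b / s[i]) * Vt[i]    # filtered spectral component
    return x
```

Dropping the trailing terms trades a (hopefully small) truncation bias for a large reduction in noise amplification, which is exactly the regularisation trade-off described above.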
Procedia PDF Downloads 345
8466 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser
Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt
Abstract:
This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution, and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation which measured brine plume characteristics under similar conditions. The CFD models used an iterative mesh in a domain 400 m long and 200 m wide, with an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column, and the effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with the empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution, and seafloor concentration is significant. The effect of ambient flow structure and its subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.
Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet
Procedia PDF Downloads 216
8465 Investigation of Shear Strength and Dilative Behavior of Coarse-grained Samples Using Laboratory Test and Machine Learning Technique
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and can be applied widely in designing and constructing such earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and the physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles and perform the tests on the remainder, which cannot be counted as an exact estimation of the parameters and behavior of the original soil. This paper describes a new methodology to scale the particle grading distribution of a well-graded gravel sample down to a smaller sample that can be tested in an ordinary direct shear apparatus, in order to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed with 6 different sizes, 3 different confining pressures, and 4 different relative densities. The Multivariate Adaptive Regression Spline (MARS) technique was used to develop an equation to predict shear strength and dilative behavior based on the size distribution of the coarse-grained soil particles. Also, an uncertainty analysis was performed to examine the reliability of the proposed equation.
Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis
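MARS builds its prediction equation from hinge basis functions max(0, ±(x - knot)) and their products, which is the general form the shear-strength equation mentioned above takes. A toy illustration with an invented one-variable model (the coefficients and knot are made up, not the paper's fitted equation):

```python
import numpy as np

def hinge(x, knot, sign=1.0):
    """A MARS basis function: max(0, sign * (x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def toy_mars(x):
    """A toy MARS-style model: a constant plus two mirrored hinges at x = 3,
    giving a piecewise-linear fit with a kink (knot) at x = 3."""
    return 2.0 + 1.5 * hinge(x, 3.0, +1.0) - 0.5 * hinge(x, 3.0, -1.0)
```

The fitting procedure itself (forward knot selection and backward pruning) is what MARS automates; only the resulting functional form is shown here.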
Procedia PDF Downloads 167
8464 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept
Authors: Ahmed El Naggar, Homyan Saleh
Abstract:
Dams are crucial for flood control, water supply, and the generation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety problems. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and to provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at the laboratory scale (fixed camera installation) and at the prototype scale (drone footage), with the drone videos obtained through citizen science. The analyses permitted remote measurement of the free-surface aeration inception point, air-water surface velocities, fluctuations, and residual energy at the chute's downstream end. The prototype observations offered full-scale proof of concept, while the laboratory results were efficiently validated against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways, highlights how citizen science data may help researchers better understand real-world air-water flow dynamics, and offers a framework for a small collection of long-missing prototype data.
Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy
Procedia PDF Downloads 96
8463 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes
Authors: Taylor Harris
Abstract:
Gustilo-Anderson grade III tibial fractures are exquisitely difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by Orthopedic and Plastic surgeons. While utilizing an orthoplastic approach has decreased the rates of adverse outcomes in these injuries, there is large variation in exactly how an orthoplastic team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single stage, but there is a paucity of large-scale studies providing evidence for this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, in the hope of better informing an evidence-based orthoplastic approach to managing open tibial fractures. A systematic review of the literature was performed using Medline and Google Scholar; all studies published since 2000 in English were considered. 103 studies were initially evaluated for inclusion, and the reference lists of all included studies were also examined for potentially eligible studies. Studies of Gustilo grade III tibial shaft fractures in adults managed with a single-stage orthoplastic approach were identified and evaluated with regard to the outcomes of interest. Exclusion criteria were studies with patients under 16 years old, case studies, systematic reviews, and meta-analyses. The primary outcomes of interest were the rates of deep infection and of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection, ranging from 0.98% to 20%, with a pooled rate of 7.34%. 7 studies reported rates of limb salvage, ranging from 96.25% to 100%.
The pooled limb salvage rate across the associated studies was 97.8%. 6 studies reported rates of non-union, ranging from 0% to 14% with a pooled rate of 6.6%; 6 reported time to bone union, ranging from 24 to 40.3 weeks with a pooled average of 34.2 weeks; and 4 reported rates of reoperation, ranging from 7% to 55% with a pooled rate of 31.1%. The few studies that compared single-stage and multi-stage approaches side by side unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with an orthoplastic approach performed specifically in a single stage thus show low rates of adverse outcomes, whereas large-scale studies of orthoplastic collaboration carried out in multiple stages, or not strictly in a single stage, have not reported outcomes as favorable. We recommend that Orthopedic and Plastic surgeons not only collaborate in the management of severe open tibial fractures but also plan to carry out definitive fixation and coverage in a single stage for improved outcomes.
Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review
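The "pooled rates" quoted above behave like simple pooled proportions, i.e. total events divided by total participants across the contributing studies (rather than an inverse-variance meta-analysis); that reading is an assumption on our part. A minimal sketch of that computation, with invented example counts:

```python
def pooled_rate_percent(groups):
    """Pooled proportion across studies, in percent: total events over
    total participants. groups is a list of (events, n) pairs, one per study."""
    events = sum(e for e, _ in groups)
    total = sum(n for _, n in groups)
    return 100.0 * events / total
```

Note that simple pooling weights each study by its sample size, which is why a single large series can dominate the pooled figure.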
Procedia PDF Downloads 89
8462 Temperature Effect on Changing of Electrical Impedance and Permittivity of Ouargla (Algeria) Dunes Sand at Different Frequencies
Authors: Naamane Remita, Mohammed laïd Mechri, Nouredine Zekri, Smaïl Chihi
Abstract:
The goal of this study is the estimation of the real and imaginary components of both the electrical impedance and the permittivity (z', z'' and ε', ε'', respectively) of Ouargla dune sand at different temperatures and different frequencies, with an alternating current (AC) signal of 1 volt, using impedance spectroscopy (IS). This method is simple and non-destructive, and its results can frequently be correlated with a number of physical properties, dielectric properties, and the impact of composition on the electrical conductivity of solids. The experimental results revealed that the real part of the impedance is higher at higher temperatures in the lower frequency region and gradually decreases with increasing frequency. At high frequencies, all values of the real part of the impedance were positive. At low frequency, the values of the imaginary part were positive at all temperatures except 1200 degrees, where they were negative. At medium frequencies, the reactance values were negative at temperatures of 25, 400, 200 and 600 degrees, and became positive at the remaining temperatures. At high frequencies, of the order of MHz, the values of the imaginary part of the electrical impedance were the opposite of what we recorded at the middle frequencies. The results showed that the electrical permittivity decreases with increasing frequency: at low frequency we recorded permittivity values of the order of 10^11, at medium frequencies of the order of 10^7, and at high frequencies of the order of 10^2. The real part of the electrical permittivity took large values at temperatures of 200 and 600 degrees Celsius at the lowest frequency, while the smallest permittivity value was recorded at a temperature of 400 degrees Celsius at the highest frequency.
Large values of the imaginary part of the electrical permittivity were recorded at the lowest frequency, and they decrease as the frequency increases (the higher the frequency, the lower the imaginary part of the permittivity). The character of the electrical impedance variation offers an opportunity to characterise the polarization of Ouargla dune sand and to establish whether this material consumes or produces energy. It is also possible to determine a satisfactory equivalent electrical circuit, whether inductive or capacitive.
Keywords: electrical impedance, electrical permittivity, temperature, impedance spectroscopy, Ouargla dune sand
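The IS readout above rests on converting a measured complex impedance into a complex permittivity. A minimal sketch for a parallel-plate cell follows; the cell geometry and all numerical values are assumptions for illustration, not parameters of the study.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_permittivity(z, freq, area, thickness):
    """Convert a measured complex impedance z (ohms) at frequency freq (Hz) into
    the complex relative permittivity of a parallel-plate sample of the given
    electrode area (m^2) and thickness (m): eps* = 1 / (j * omega * C0 * z)."""
    c0 = EPS0 * area / thickness       # empty-cell capacitance
    omega = 2 * math.pi * freq
    eps = 1.0 / (1j * omega * c0 * z)
    return eps.real, -eps.imag         # (eps', eps'') with the usual sign convention
```

For a purely capacitive sample with C = 5·C0, the function recovers ε' = 5 and ε'' ≈ 0, as expected.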
Procedia PDF Downloads 53
8461 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for measuring ET are point-based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for estimating ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for direct measurement of latent heat flux (evapotranspiration) or sensible heat flux, the model estimates of ET were compared with reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water, estimation of the consumptive use of water for agriculture is very important, since ET represents the main loss of irrigation water and precipitation on cropland. As a large amount of irrigated water is lost through ET, its accurate estimation can support efficient management of irrigation water. Results of this study can be used to analyse surface conditions, i.e. temperature, energy budgets and related characteristics, and thereby to monitor vegetation health and suitable agricultural conditions and to take steps to increase agricultural production.
Keywords: SEBS, remote sensing, evapotranspiration, ETa
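The FAO-56 Penman-Monteith reference evapotranspiration used for comparison above can be computed from standard meteorological inputs. A minimal sketch of the daily formulation follows; the input values in the test are illustrative, not the study's data.

```python
import math

def eto_penman_monteith(t_mean, rh_mean, u2, rn, g=0.0, pressure=101.3):
    """Daily reference evapotranspiration ETo (mm/day) by FAO-56 Penman-Monteith.
    t_mean: mean air temperature (deg C); rh_mean: relative humidity (%);
    u2: wind speed at 2 m (m/s); rn: net radiation (MJ/m2/day);
    g: soil heat flux (MJ/m2/day); pressure: atmospheric pressure (kPa)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure (kPa)
    ea = es * rh_mean / 100.0                                   # actual vapour pressure (kPa)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of the vapour-pressure curve
    gamma = 0.000665 * pressure                                 # psychrometric constant (kPa/degC)
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))
```

Note that saturated air (100% humidity) zeroes the aerodynamic term, so ETo drops, which is the expected behaviour of the formulation.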
Procedia PDF Downloads 335
8460 GIS Mapping of Sheep Population and Distribution Pattern in the Derived Savannah of Nigeria
Authors: Sosina Adedayo O., Babyemi Olaniyi J.
Abstract:
The location, population, and distribution pattern of sheep pose serious challenges to agribusiness investment and policy formulation in the livestock industry. There is a significant disconnect between farmers' needs and the policy framework for ameliorating sheep production constraints, and information on the population, production, and distribution pattern of sheep remains very scanty. A multi-stage sampling technique was used to elicit information from 180 purposively selected respondents in the study area, which comprised Oluyole, Ona-ara, Akinyele, Egbeda, Ido and Ibarapa East LGAs. The Global Positioning System (GPS) coordinates of the farmers' locations (distribution) and the average sheep herd sizes in Total Livestock Units (TLU; population) were recorded, taking the longitude and latitude of the locations in question. The recorded GPS data of the study area were transferred into ArcGIS and processed using the ArcGIS model 10.0. Sheep population (TLU) ranged from 4.1 (Oluyole) to 25.0 (Ibarapa East), with Oluyole, Akinyele, Ona-ara and Egbeda having TLU of 5, 7, 8 and 20, respectively. Herd sizes were classified as fewer than 8 (smallholder), 9-25 (medium), 26-50 (large), and above 50 (commercial); the largest group of farmers (45%) were smallholders. The crude protein content (CP, %) of the feed resources ranged from 5.81±0.26 (cassava leaf) to 24.91±0.91 (Amaranthus spinosus), NDF (%) ranged from 22.38±4.43 (Amaranthus spinosus) to 67.96±2.58 (Althemanthe dedentata), while ME ranged from 7.88±0.24 (Althemanthe dedentata) to 10.68±0.18 (cassava leaf). Smallholder sheep farmers were in the majority, evenly distributed across rural areas thanks to the availability of abundant feed resources (crop residues, tree crops, shrubs, natural pastures, and feed ingredients) coupled with a large expanse of land in the study area. Most of the available feed resources were below the protein requirement level of sheep, hence supplementation is necessary for productivity.
Bio-informatics can provide relevant information on sheep production for policy frameworks and intervention strategies.
Keywords: sheep enterprise, agribusiness investment, policy, bio-informatics, ecological zone
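The herd-size bands reported above map directly onto a classification rule. A trivial sketch (the boundary at 8 animals is interpreted inclusively, an assumption, since the abstract gives "less than 8" and "9-25"):

```python
def herd_class(tlu):
    """Classify a flock by the herd-size bands used in the survey:
    up to 8 smallholder, 9-25 medium, 26-50 large, above 50 commercial."""
    if tlu <= 8:
        return "smallholder"
    if tlu <= 25:
        return "medium"
    if tlu <= 50:
        return "large"
    return "commercial"
```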
Procedia PDF Downloads 90
8459 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing from disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems.
The promising results of this work show that enhanced disk-based systems can help improve hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
Procedia PDF Downloads 422
8458 Tailoring the Parameters of the Quantum MDS Codes Constructed from Constacyclic Codes
Authors: Jaskarn Singh Bhullar, Divya Taneja, Manish Gupta, Rajesh Kumar Narula
Abstract:
The existence conditions of dual-containing constacyclic codes have opened a new path for finding quantum maximum distance separable (MDS) codes. Using these conditions, the parameters of quantum MDS codes of length n=(q²+1)/2 were improved. A class of quantum MDS codes of length n=(q²+q+1)/h, where h>1 is an odd prime, has also been constructed; these codes have large minimum distance and are new in the sense that they are not available in the literature.
Keywords: Hermitian construction, constacyclic codes, cyclotomic cosets, quantum MDS codes, Singleton bound
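Dual-containment conditions for constacyclic codes are typically verified on cyclotomic cosets modulo the code length. A generic coset computation is sketched below; the paper's specific constructions are not reproduced, only the standard combinatorial ingredient they rely on.

```python
def cyclotomic_cosets(q, n):
    """q-ary cyclotomic cosets modulo n: the partition of {0, ..., n-1} into
    orbits under multiplication by q (gcd(q, n) = 1 is assumed, so the orbits
    are disjoint and cover every residue)."""
    seen = set()
    cosets = []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in seen:    # follow the orbit s, sq, sq^2, ... until it closes
            seen.add(x)
            coset.append(x)
            x = (x * q) % n
        cosets.append(coset)
    return cosets
```

For example, the 2-cyclotomic cosets modulo 7 are {0}, {1, 2, 4} and {3, 6, 5}, the familiar partition behind the binary Hamming code.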
Procedia PDF Downloads 394
8457 Effects of Polymer Adsorption and Desorption on Polymer Flooding in Waterflooded Reservoir
Authors: Sukruthai Sapniwat, Falan Srisuriyachai
Abstract:
Polymer flooding is one of the most well-known methods in Enhanced Oil Recovery (EOR) technology. It can be implemented after either primary or secondary recovery, resulting in favorable conditions for the displacement mechanism and a lower residual oil saturation in the reservoir. Polymer substances lower the mobility ratio of the whole process by increasing the viscosity of the injected water; polymer flooding can therefore increase volumetric sweep efficiency, which leads to a better recovery factor. Moreover, polymer adsorption onto the rock surface can help decrease the permeability contrast in reservoirs with high heterogeneity. Due to the reduction of the absolute permeability, the effective permeability to water, representing the flow ability of the injected fluid, is also reduced. Once polymer is adsorbed onto the rock surface, polymer molecules can be desorbed when different fluids are injected. This study evaluates the effects of the adsorption and desorption of polymer solutions on the oil recovery mechanism. A reservoir model was constructed with the STARS® reservoir simulator commercialized by the Computer Modelling Group (CMG). Various polymer concentrations, starting times of the polymer flooding process and polymer injection rates were evaluated with selected polymer desorption degrees of 0, 25, 50, 75 and 100%; the higher the value, the more adsorbed polymer molecules return to the flowing fluid. According to the results, polymer desorption lowers polymer consumption, especially at low concentrations. Furthermore, the starting time of polymer flooding and the injection rate affect oil production: waterflooding followed by earlier polymer flooding increases the oil recovery factor, and a higher injection rate also enhances recovery. Polymer concentration is related to polymer consumption through the two main benefits of polymer flooding described above.
Therefore, polymer slug size should be optimized based on polymer concentration. Polymer desorption allows re-employment of polymer previously adsorbed onto the rock surface, resulting in increased sweep efficiency in the later period of the polymer flooding process. Even though waterflooding supports polymer injectivity, water cut at the producer can prematurely terminate oil production. A higher injection rate decreases polymer adsorption owing to the reduced retention time of the polymer flooding process.
Keywords: enhanced oil recovery technology, polymer adsorption and desorption, polymer flooding, reservoir simulation
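The mobility-ratio mechanism invoked above is a one-line formula. A sketch follows; the relative permeabilities and viscosities are generic illustrative values, not data from the study.

```python
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """Mobility ratio M = (k_rw / mu_w) / (k_ro / mu_o) of the displacing water
    to the displaced oil; M <= 1 indicates a favourable, piston-like displacement."""
    return (k_rw / mu_w) / (k_ro / mu_o)

# Illustrative (assumed) end-point relative permeabilities and viscosities in cP:
m_water = mobility_ratio(k_rw=0.3, mu_w=0.5, k_ro=0.8, mu_o=5.0)     # plain waterflood
m_polymer = mobility_ratio(k_rw=0.3, mu_w=10.0, k_ro=0.8, mu_o=5.0)  # polymer-thickened water
```

Thickening the injected water from 0.5 cP to 10 cP drops M from an unfavourable 3.75 to well below 1, which is exactly the sweep-efficiency benefit the abstract describes.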
Procedia PDF Downloads 336
8456 The Impact of Heat Waves on Human Health: State of Art in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio
Abstract:
The Earth system is subject to a wide range of human activities that have changed ecosystems more rapidly and extensively in the last five decades than ever before. These global changes have a large impact on human health. The relationship between extreme weather events and mortality is widely documented in different studies. In particular, a number of studies have investigated the relationship between climatological variations and the cardiovascular and respiratory systems. Researchers have become interested in evaluating the effect of environmental variations on the occurrence of different diseases (such as infarction, ischemic heart disease, asthma, respiratory problems, etc.) and on mortality. Among changes in weather conditions, heat waves have been used to investigate the association between weather conditions and cardiovascular and cerebrovascular events, using thermal indices that combine air temperature, relative humidity, and wind speed. The effects of heat waves on human health are mainly found in urban areas, where they are aggravated by atmospheric pollution. The consequences of these changes for human health are of growing concern. Meteorological conditions are a particularly relevant environmental aspect because cardiovascular diseases are more common among the elderly, who are more sensitive to weather changes. In addition, heat waves, or extreme heat events, are predicted to increase in frequency, intensity, and duration with climate change. In this context, the connections between public health and climate change, increasingly recognized by medical research, are very important because they can help inform the public at large. Policy experts claim that a growing awareness of the relationship between public health and climate change could be key to breaking through the political logjams impeding action on mitigation and adaptation.
The aims of this study are to investigate the importance of interactions between weather variables and their effects on human health, focusing on Italy, and to highlight the need to define strategies and practical actions for monitoring, adaptation and mitigation of the phenomenon.
Keywords: climate change, illness, Italy, temperature, weather
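One widely used thermal index of the kind mentioned above, combining temperature, humidity and wind speed, is Steadman's apparent temperature. The sketch below uses the Australian Bureau of Meteorology's non-radiation form as a representative assumption; the study's own index choice is not specified.

```python
import math

def apparent_temperature(t_air, rh, wind):
    """Steadman apparent temperature (deg C), non-radiation form:
    t_air: air temperature (deg C); rh: relative humidity (%);
    wind: wind speed at 10 m (m/s)."""
    # Water vapour pressure in hPa from temperature and relative humidity:
    e = (rh / 100.0) * 6.105 * math.exp(17.27 * t_air / (237.7 + t_air))
    return t_air + 0.33 * e - 0.70 * wind - 4.00
```

A hot, humid, still day scores well above the same air temperature on a dry, windy day, which is the physiological point of such indices in heat-wave epidemiology.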
Procedia PDF Downloads 250
8455 Self-Energy Sufficiency Assessment of the Biorefinery Annexed to a Typical South African Sugar Mill
Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens
Abstract:
Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans depend indirectly on it. The industry is struggling economically and should reinvent itself in order to ensure long-term sustainability. A second-generation biorefinery is defined as a process that uses fibrous waste for the production of biofuel, chemicals, animal feed, and electricity. Bioethanol is by far the most widely used biofuel for transportation worldwide, and many of the challenges facing bioethanol production have been solved. A biorefinery annexed to an existing sugar mill for production of bioethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flowsheet development is the key element of the bioethanol process, in this work a biorefinery (bioethanol and electricity production) annexed to a typical South African sugar mill, considering 65 t/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated. Aspen Plus™ V8.6 was used as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other researchers on pretreatment, hydrolysis, fermentation, enzyme production, bioethanol production and supplementary units such as evaporation, water treatment, the boiler, and steam/electricity generation were adopted to establish a comprehensive biorefinery simulation. Steam explosion with SO2 was selected for pretreatment because of minimal inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for enzymatic hydrolysis of the cellulose and fermentation of the hydrolysate. Bioethanol purification was simulated by two distillation columns with a side stream, and fuel-grade bioethanol (99.5%) was achieved using molecular sieves in order to minimize capital and operating costs.
The boiler and steam/power generation units were modelled using industrial design data. Results indicate that the annexed biorefinery can be self-energy sufficient when 35% of the feedstock (tops/trash) bypasses the biorefinery process and is loaded directly into the boiler to produce sufficient steam and power for the sugar mill and the biorefinery plant.
Keywords: biorefinery, self-energy sufficiency, tops/trash, bioethanol, electricity
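The self-sufficiency condition above reduces to a one-line energy balance. A back-of-envelope sketch follows; the heating value, boiler efficiency and steam demand figures in the test are placeholders, not values from the paper.

```python
def bypass_fraction(feed_tph, lhv, boiler_eff, steam_demand):
    """Fraction of the fibrous feedstock that must bypass the biorefinery and be
    burned directly so that the boiler covers the plant's steam/power demand.
    feed_tph: feed rate (t/h); lhv: lower heating value (MJ/t);
    boiler_eff: boiler efficiency (0-1); steam_demand: process demand (MJ/h)."""
    available = feed_tph * lhv * boiler_eff   # energy if the whole feed were burned
    return min(1.0, steam_demand / available)
```

If the demand exceeds what burning the entire feed could supply, the function saturates at 1.0, i.e. no feed is left for the biorefinery.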
Procedia PDF Downloads 543
8454 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation
Authors: Suparat Niwitpong, Sa-aat Niwitpong
Abstract:
Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs commonly in environmental and agricultural experiments, where the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm our theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.
Keywords: confidence interval, coverage probability, expected length, known coefficient of variation
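The Monte Carlo assessment of coverage probability described above can be sketched generically. The interval procedure below (plugging sigma ≈ cv·x̄ into a z-interval) is an assumed stand-in for illustration, not one of the intervals proposed in the abstract.

```python
import random
import statistics

def coverage_probability(interval_fn, mu, sigma, n, reps=2000, seed=1):
    """Monte Carlo estimate of coverage: the fraction of simulated normal
    samples whose confidence interval contains the true mean mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        lo, hi = interval_fn(sample)
        hits += lo <= mu <= hi
    return hits / reps

def z_interval_known_cv(sample, cv=0.1, z=1.96):
    """Illustrative 95% interval when the coefficient of variation sigma/mu = cv
    is known: substitute sigma ~= cv * xbar into the usual z-interval."""
    xbar = statistics.mean(sample)
    half = z * cv * xbar / len(sample) ** 0.5
    return xbar - half, xbar + half
```

With the true coefficient of variation matching the assumed one (mu=10, sigma=1, cv=0.1), the estimated coverage lands near the nominal 95%.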
Procedia PDF Downloads 399
8453 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach
Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi
Abstract:
Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual water distribution systems versus singular water distribution systems under various stressors, such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that vary between dual and singular distribution infrastructure according to system attributes (pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, and operational expenditures) was considered as input to the simulation model. The model was then used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures, including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Classification and Regression Tree (CART) analysis was also adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios.
The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) identification of renewal strategies and decision factors that enhance the long-term resilience and robustness of dual and singular water distribution systems under different stressors.
Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems
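The degradation-and-renewal dynamics at the heart of such simulations can be caricatured in a few lines. The condition scale, decay rate and renewal budget below are invented for illustration and bear no relation to the study's calibrated model.

```python
def simulate_network(years, n_pipes, degrade, renew_budget):
    """Toy long-term renewal dynamics: every pipe's condition index (100 = new)
    decays linearly each year, and the worst pipes are renewed up to a fixed
    annual budget.  Returns the yearly network-average condition."""
    cond = [100.0] * n_pipes
    history = []
    for _ in range(years):
        cond = [c - degrade for c in cond]
        # Renew the renew_budget pipes currently in the worst condition:
        for i in sorted(range(n_pipes), key=lambda j: cond[j])[:renew_budget]:
            cond[i] = 100.0
        history.append(sum(cond) / n_pipes)
    return history
```

Even this caricature exhibits the qualitative point of the abstract: without renewal the average condition drifts steadily downward, while a modest budget holds it near a steady state.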
Procedia PDF Downloads 295
8452 Design and Implementation of Grid-Connected Photovoltaic Inverter
Authors: B. H. Lee
Abstract:
Nowadays, grid-connected photovoltaic (PV) inverters are adopted in various settings, such as homes and factories, because a grid-connected PV inverter can reduce total power consumption by supplying electricity from the PV array. In this paper, the design and implementation of a 300 W grid-connected PV inverter are described. It is implemented with a TI Piccolo DSP core and operated at a 100 kHz switching frequency in order to reduce harmonic content. The maximum operating input voltage is up to 45 V. The characteristics of the designed system, including maximum power point tracking (MPPT), single operation and battery charging, are verified by simulation and experimental results.
Keywords: design, grid-connected, implementation, photovoltaic
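The abstract does not state which MPPT scheme the inverter uses; as one common possibility, the classic perturb-and-observe loop is sketched below with an assumed toy power curve in place of real measurements.

```python
def perturb_and_observe(measure_pv, v_start, dv=0.5, steps=50):
    """Minimal perturb-and-observe MPPT loop: nudge the operating voltage and
    keep moving in whichever direction last increased the measured power.
    measure_pv(v) stands in for a real power measurement at voltage v."""
    v, direction = v_start, 1.0
    p_prev = measure_pv(v)
    for _ in range(steps):
        v += direction * dv
        p = measure_pv(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

With a toy power curve peaking at 36 V, the loop climbs to the maximum power point and then oscillates within one step of it, which is the characteristic steady-state behaviour of P&O.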
Procedia PDF Downloads 424
8451 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model constructed offline speeds up the FE analysis during the online stage. The constructed RB model showed high accuracy in predicting damage severity, while deep learning algorithms were found useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
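The offline reduced-basis construction can be illustrated by orthogonalising solution snapshots into a low-dimensional space. Real RB/POD pipelines use a snapshot SVD with greedy sampling and error estimators, so the Gram-Schmidt version below is only a minimal stand-in.

```python
def pod_basis(snapshots, tol=1e-10):
    """Build an orthonormal reduced basis from full-order solution snapshots by
    modified Gram-Schmidt (a cheap stand-in for the snapshot SVD used in
    practice); linearly dependent snapshots are discarded."""
    basis = []
    for snap in snapshots:
        v = [float(x) for x in snap]
        for b in basis:
            coeff = sum(x * y for x, y in zip(v, b))
            v = [x - coeff * y for x, y in zip(v, b)]
        norm = sum(x * x for x in v) ** 0.5
        if norm > tol:                       # skip snapshots already spanned
            basis.append([x / norm for x in v])
    return basis

def rb_coords(vector, basis):
    """Project a full-order vector onto the reduced space (its RB coordinates)."""
    return [sum(x * y for x, y in zip(vector, b)) for b in basis]
```

Three snapshots spanning a 2-D subspace yield a basis of size 2, and online quantities are then manipulated in those 2 coordinates instead of the full dimension, which is the speed-up exploited during the online stage.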
Procedia PDF Downloads 103
8450 Mycotoxin Bioavailability in Sparus Aurata Muscle After Human Digestion and Intestinal Transport (Caco-2/HT-29 Cells) Simulation
Authors: Cheila Pereira, Sara C. Cunha, Miguel A. Faria, José O. Fernandes
Abstract:
The increasing world population brings several concerns, one of which is food security and sustainability. To meet this challenge, aquaculture, the farming of aquatic animals and plants (including fish, mollusks, bivalves, and algae), has experienced sustained growth and development in recent years. Recent advances in this industry have focused on reducing its economic and environmental costs, for example through the substitution of protein sources in fish feed. Plant-based proteins are now a common approach, and while they are a greener alternative to animal-based proteins, there are some disadvantages, such as their possible content of contaminants like mycotoxins. These naturally occurring plant contaminants can, upon exposure, cause health problems, stunted growth or even death in fish, resulting in economic losses for producers and health concerns for consumers. Different works have demonstrated the presence of both AFB1 (aflatoxin B1) and ENNB1 (enniatin B1) in fish feed and their capacity to be absorbed and to bioaccumulate in the fish organism after digestion, further reaching humans through fish ingestion. The aim of this work was to evaluate the bioaccessibility of both mycotoxins in samples of Sparus aurata muscle using a static digestion model based on the INFOGEST protocol. The samples were subjected to different cooking procedures (raw, grilled and fried) and different seasonings (none, thyme and ginger) in order to evaluate their potential for reducing mycotoxin bioaccessibility. This was followed by evaluation of the intestinal transport of both compounds with an in vitro cell model composed of Caco-2/HT-29 co-culture monolayers, simulating the human intestinal epithelium. The bioaccessible fractions obtained in the digestion studies were used in the transport studies for a more realistic approach to bioavailability evaluation.
Results demonstrated the effect of the different cooking procedures and seasonings on the toxins' bioavailability. Sparus aurata was chosen for this study because of its large aquaculture production and high consumption in Europe. With the continued evolution of fish farming practices and the increasingly common use of novel plant-based feed ingredients, there is growing concern about less-studied contaminants in aquaculture and their consequences for human health. In step with greener advances in this industry, there is a convergence towards alternative research methods, such as in vitro applications. For bioavailability studies, both in vitro digestion protocols and intestinal transport assessment are excellent alternatives to in vivo studies: these methods provide fast, reliable and comparable results without ethical constraints.
Keywords: AFB1, aquaculture, bioaccessibility, ENNB1, intestinal transport
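The quantities behind such digestion and Caco-2/HT-29 transport studies are conventionally reported as a bioaccessible fraction and an apparent permeability. A sketch with assumed units follows; the study's actual assay parameters are not given in the abstract.

```python
def apparent_permeability(dq_dt, area, c0):
    """Apparent permeability Papp = (dQ/dt) / (A * C0), the standard readout of
    a Caco-2-type transport assay.
    dq_dt: transport rate to the basolateral side (ug/s);
    area: insert membrane area (cm^2); c0: initial apical concentration (ug/cm^3)."""
    return dq_dt / (area * c0)

def bioaccessibility(released, total):
    """Bioaccessible fraction (%): amount of toxin released into the digesta
    relative to the amount initially present in the cooked sample."""
    return 100.0 * released / total
```

For instance, a toxin half-released during simulated digestion has a bioaccessibility of 50%, and only that bioaccessible fraction is then carried into the transport assay, mirroring the workflow described above.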
Procedia PDF Downloads 70
8449 Study on the Impact of Default Converter on the Quality of Energy Produced by DFIG Based Wind Turbine
Authors: N. Zerzouri, N. Benalia, N. Bensiali
Abstract:
This work is devoted to an analysis of the operation of a doubly fed induction generator (DFIG) integrated into a wind system. Power transfer between the stator and the network is carried out by acting on the rotor via a bidirectional converter. The analysis focuses on a fault in the converter caused by an interruption in the control of one semiconductor. Simulation results obtained with MATLAB/Simulink illustrate the quality of the power generated under the fault condition.
Keywords: doubly fed induction generator (DFIG), wind energy, PWM inverter, modeling
Procedia PDF Downloads 3218448 Seamless Mobility in Heterogeneous Mobile Networks
Authors: Mohab Magdy Mostafa Mohamed
Abstract:
The objective of this paper is to introduce a vertical handover (VHO) algorithm between wireless LANs (WLANs) and LTE mobile networks. The proposed algorithm is based on fuzzy control theory and takes into consideration power level, subscriber velocity, and target cell load, instead of only power level as in traditional algorithms. Simulation results show that network performance, in terms of number of handovers and handover occurrence distance, is improved.
Keywords: vertical handover, fuzzy control theory, power level, speed, target cell load
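A fuzzy handover decision over the three inputs named above can be sketched with triangular memberships and a single rule. The membership shapes, thresholds and rule below are illustrative assumptions, not the paper's actual controller design.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(rss_dbm, speed_kmh, load_pct):
    """Toy fuzzy handover decision combining the abstract's three inputs
    (power level, subscriber velocity, target-cell load); a higher score
    favours handing over to the target cell."""
    weak = tri(rss_dbm, -100, -90, -70)   # current signal is weak
    slow = tri(speed_kmh, -1, 0, 60)      # subscriber slow enough for a small cell
    light = tri(load_pct, -1, 0, 80)      # target cell is lightly loaded
    # Single illustrative rule: hand over when the signal is weak AND the target
    # is lightly loaded, attenuated by speed so fast users are penalised.
    return min(weak, light) * slow
```

A weak-signal subscriber scores higher than one with a strong signal, and a fully loaded target cell drives the score to zero, which is the multi-criteria behaviour that distinguishes this class of algorithm from power-only handover.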
Procedia PDF Downloads 3588447 Reliability-Simulation of Composite Tubular Structure under Pressure by Finite Elements Methods
Authors: Abdelkader Hocine, Abdelhakim Maizia
Abstract:
The exponential growth in the use of fiber-reinforced composite materials has prompted researchers to step up their work on the prediction of their reliability. Owing to differences between the properties of the materials used in the composite, the manufacturing processes, the load combinations and the types of environment, predicting the reliability of composite materials has become a primary task. Using the Tsai-Wu and maximum stress failure criteria, the reliability of multilayer tubular structures under pressure is the subject of this paper, where the failure probability is estimated by the Monte Carlo method.
Keywords: composite, design, Monte Carlo, tubular structure, reliability
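The Monte Carlo estimation of failure probability under the Tsai-Wu criterion can be sketched for a single ply in plane stress. The interaction term uses the common default F12 = -0.5·sqrt(F11·F22), and all strengths and load distributions are generic illustrative values, not the paper's data.

```python
import random

def tsai_wu_index(s1, s2, t12, xt, xc, yt, yc, s):
    """Tsai-Wu failure index for a plane-stress ply (failure when index >= 1).
    xt/xc: longitudinal tensile/compressive strengths; yt/yc: transverse;
    s: in-plane shear strength (all positive magnitudes, e.g. MPa)."""
    f1, f2 = 1.0 / xt - 1.0 / xc, 1.0 / yt - 1.0 / yc
    f11, f22, f66 = 1.0 / (xt * xc), 1.0 / (yt * yc), 1.0 / s ** 2
    f12 = -0.5 * (f11 * f22) ** 0.5          # common default interaction term
    return (f1 * s1 + f2 * s2 + f11 * s1 ** 2 + f22 * s2 ** 2
            + f66 * t12 ** 2 + 2.0 * f12 * s1 * s2)

def failure_probability(load_sampler, strengths, trials=5000, seed=7):
    """Monte Carlo estimate of the probability that a sampled stress state
    (s1, s2, t12) from load_sampler(rng) violates the Tsai-Wu criterion."""
    rng = random.Random(seed)
    fails = sum(tsai_wu_index(*load_sampler(rng), **strengths) >= 1.0
                for _ in range(trials))
    return fails / trials
```

In practice the sampler would draw internal ply stresses from the pressure-load distribution computed by the FE model; scatter in the strength values can be handled the same way by sampling the strengths dictionary per trial.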
Procedia PDF Downloads 470