Search results for: gamma conditional distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1269

939 Efficiency and Equity in Italian Secondary School

Authors: Giorgia Zotti

Abstract:

This research investigates the interplay among school performance, individual backgrounds, and regional disparities within Italian secondary education. Leveraging data from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing on final-year secondary school students in Italy. The study first employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in both subjects). The DEA approach is applied in both of its versions, traditional and conditional; the latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. The analysis then examines regional disparities using the Theil Index, providing insight into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements, applying the Parametric Approach in its ex-ante version and considering circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to quantify how much each circumstance affects the outcomes. The results identify pivotal determinants of school performance, notably the influence of school type (Liceo) and socioeconomic status.
The research reveals regional disparities, identifying instances where specific schools perform better on official grades than on Invalsi scores, shedding light on the intricate nature of regional educational inequalities. It also finds greater inequality of opportunity in the distribution of Invalsi test scores than in official grades, underscoring pronounced disparities at the student level. The analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities of Italian secondary education.
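The within/between-region decomposition delivered by the Theil index can be sketched in a few lines. The grouping and score values below are hypothetical illustrations of the index itself, not the authors' computation:

```python
# Within/between-group decomposition of the Theil T index, as used in the
# abstract to separate educational inequality within and between regions.
from math import log

def theil(scores):
    """Theil T index of a list of positive values."""
    n = len(scores)
    mu = sum(scores) / n
    return sum((y / mu) * log(y / mu) for y in scores) / n

def theil_decomposition(groups):
    """Decompose total Theil into within- and between-group components.

    groups: dict mapping region name -> list of positive scores.
    Returns (total, within, between); total == within + between.
    """
    all_scores = [y for g in groups.values() for y in g]
    n = len(all_scores)
    mu = sum(all_scores) / n
    within = between = 0.0
    for g in groups.values():
        share = len(g) / n           # population share of the region
        mu_g = sum(g) / len(g)       # regional mean score
        within += share * (mu_g / mu) * theil(g)
        between += share * (mu_g / mu) * log(mu_g / mu)
    return theil(all_scores), within, between
```

The identity total = within + between holds exactly for the Theil T index, which is what makes it attractive for regional decompositions of this kind.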

Keywords: inequality, education, efficiency, DEA approach

Procedia PDF Downloads 54
938 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises either model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the speed and accuracy of previous models. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to run on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real- or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 152
937 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model expresses the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors to the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are nonlinear, the individual parameters cannot be sampled explicitly, and a Metropolis step must be performed, leading to a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model is presented for the latter, in which the surface area of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor need the observation times be the same across individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of the model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
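The hybrid Gibbs-Metropolis scheme described above can be sketched for a toy hierarchical model: individual parameters theta_i ~ N(mu, tau^2), a nonlinear logistic observation curve standing in for the growth model, and a conjugate Gibbs update for the population mean mu. The toy model, priors, and all numerical values are illustrative assumptions (and the sketch is in Python rather than the paper's Julia platform):

```python
# Hybrid Gibbs-Metropolis sampler for a toy nonlinear hierarchical model.
# Individual parameters get a Metropolis step (no closed-form conditional);
# the population mean gets a conjugate Gibbs step.
import math, random

random.seed(0)

def loglik(theta, data):
    # Nonlinear observation model: y ~ N(logistic(theta * t), sigma^2)
    sigma = 0.1
    return sum(-((y - 1.0 / (1.0 + math.exp(-theta * t))) ** 2) / (2 * sigma ** 2)
               for t, y in data)

def sampler(datasets, n_iter=500, tau=1.0, step=0.3):
    thetas = [0.0] * len(datasets)
    mu = 0.0
    mu_trace = []
    for _ in range(n_iter):
        # Metropolis step for each individual parameter.
        for i, data in enumerate(datasets):
            prop = thetas[i] + random.gauss(0, step)
            logpost = lambda th: loglik(th, data) - (th - mu) ** 2 / (2 * tau ** 2)
            if math.log(random.random()) < logpost(prop) - logpost(thetas[i]):
                thetas[i] = prop
        # Gibbs step for mu: normal prior N(0, 10^2) is conjugate given thetas.
        n, prior_var = len(thetas), 100.0
        post_var = 1.0 / (n / tau ** 2 + 1.0 / prior_var)
        post_mean = post_var * sum(thetas) / tau ** 2
        mu = random.gauss(post_mean, math.sqrt(post_var))
        mu_trace.append(mu)
    return mu_trace
```

In the paper's setting the Metropolis target would be the Greenlab model likelihood with multiplicative noise; the alternation of exact conditional draws and Metropolis steps is the same.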

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 283
936 Surface Pressure Distributions for a Forebody Using Pressure Sensitive Paint

Authors: Yi-Xuan Huang, Kung-Ming Chung, Ping-Han Chung

Abstract:

Pressure-sensitive paint (PSP), which relies on the oxygen quenching of a luminescent molecule, is an optical technique used on wind-tunnel models. It yields a full-field pressure pattern with low aerodynamic interference and is becoming an alternative to pressure measurements using pressure taps. In this study, a polymer-ceramic PSP was used, with toluene as the solvent. The porous particles and polymer were silica gel (SiO₂) and RTV-118 (3 g : 7 g), respectively, and the compound was sprayed onto the model surface using a spray gun. The absorption and emission spectra of the Ru(dpp) luminophore were 441-467 nm and 597 nm, respectively. A Revox SLG-55 light source with a short-pass filter (550 nm) and a 14-bit CCD camera with a long-pass filter (600 nm) were used to illuminate the PSP and to capture images. This study determines surface pressure patterns for the forebody of an AGARD-B model in compressible flow. Since no experimental data for the surface pressure distributions are available, a numerical simulation is conducted using ANSYS Fluent, and the computed lift and drag coefficients are compared with data in the open literature. The experiments were conducted in the transonic wind tunnel at the Aerospace Science and Research Center, National Cheng Kung University. The freestream Mach number was 0.83, and the angle of attack ranged from -4 to 8 degrees. The deviation between PSP and the numerical simulation is within 5%; however, the effect of the light-source setup should be taken into account to address the relative error.

Keywords: pressure sensitive paint, forebody, surface pressure, compressible flow

Procedia PDF Downloads 106
935 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand at a reasonable cost in times of low wind speed and low solar radiation. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA-derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to capture the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand and ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for expected power output from RE technologies, rather than assuming known future power output. The second level is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10% – important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output – rather than extrapolating historic data – paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid – and assigning it different thresholds – reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
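The Conditional Value at Risk constraint used above penalises the expected shortfall in the worst alpha-fraction of scenarios. A minimal empirical-CVaR sketch over scenario losses (the scenario data are illustrative, not the paper's model) shows the quantity the 1%, 5% and 10% thresholds are applied to:

```python
# Empirical Value at Risk and Conditional Value at Risk from scenario losses.
# CVaR at level alpha is the mean loss over the worst alpha-fraction of
# scenarios; a portfolio constraint would bound this quantity.

def var_cvar(losses, alpha):
    """Empirical VaR and CVaR at level alpha (e.g. 0.05 for the worst 5%)."""
    ordered = sorted(losses, reverse=True)        # worst losses first
    k = max(1, int(round(alpha * len(ordered))))  # number of tail scenarios
    tail = ordered[:k]
    var = tail[-1]                # loss threshold exceeded with prob ~alpha
    cvar = sum(tail) / len(tail)  # expected loss given we are in the tail
    return var, cvar
```

In the optimization itself, "loss" would be unserved demand (or its cost) per weather scenario, and tightening alpha from 10% to 1% trades portfolio cost against grid-balancing risk.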

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 148
934 Establishment of Reference Interval for Serum Protein Electrophoresis of Apparently Healthy Adults in Addis Ababa, Ethiopia

Authors: Demiraw Bikila, Tadesse Lejisa, Yosef Tolcha, Chala Bashea, Mehari Meles, Tigist Getahun, Genet Ashebir, Wossene Habtu, Feyissa Challa, Ousman Mohammed, Melkitu Kassaw, Adisu Kebede, Letebrhan G. Egzeabher, Endalkachew Befekadu, Mistire Wolde, Aster Tsegaye

Abstract:

Background: Even though several factors affect reference intervals (RIs), company-derived values are currently in use in many laboratories worldwide. However, little or no data is available regarding serum protein RIs, particularly in resource-limited countries like Ethiopia. Objective: To establish a reference interval for serum protein electrophoresis of apparently healthy adults in Addis Ababa, Ethiopia. Method: A cross-sectional study was conducted on a total of 297 apparently healthy adults from April to October 2019 in four selected sub-cities (Akaki, Kirkos, Arada, Yeka) of Addis Ababa, Ethiopia. Laboratory analysis of collected samples was performed using a Capillarys 2 Flex Piercing analyzer, while statistical analysis was done using SPSS version 23 and MedCalc software. The Mann-Whitney test was used to check partitions. The non-parametric method of reference interval establishment was performed as per CLSI guideline EP28-A3c. Result: The established RIs were: Albumin 53.83-64.59%, 52.24-63.55%; Alpha-1 globulin 3.04-5.40%, 3.44-5.60%; Alpha-2 globulin 8.0-12.67%, 8.44-12.87%; and Beta-1 globulin 5.01-7.38%, 5.14-7.86%, for males and females, respectively. Moreover, the albumin-to-globulin ratio was 1.16-1.8 for males and 1.09-1.74 for females. The combined RIs for Beta-2 globulin and Gamma globulin were 2.54-4.90% and 12.40-21.66%, respectively. Conclusion: The established reference intervals for serum protein fractions revealed gender-specific differences except for Beta-2 globulin and Gamma globulin.
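The nonparametric reference-interval computation prescribed by the CLSI guideline amounts to reading off the 2.5th and 97.5th percentiles of the sorted reference sample. A minimal sketch (the rank r = p(n+1) interpolation convention is an assumption here; implementations differ in detail):

```python
# Nonparametric 95% reference interval from a sorted reference sample,
# using the rank convention r = p * (n + 1) with linear interpolation
# between neighbouring order statistics.

def reference_interval(values):
    data = sorted(values)
    n = len(data)
    def percentile(p):
        r = p * (n + 1)
        lo = min(max(int(r), 1), n)  # clamp rank to valid range
        hi = min(lo + 1, n)
        frac = r - int(r)
        return data[lo - 1] + frac * (data[hi - 1] - data[lo - 1])
    return percentile(0.025), percentile(0.975)
```

With n = 297 as in the study, the lower limit falls near the 7th-8th ordered observation and the upper limit near the 290th-291st, which is why the guideline recommends at least 120 reference subjects.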

Keywords: serum protein electrophoresis, reference interval, Addis Ababa, Ethiopia

Procedia PDF Downloads 208
933 Simulation Studies of High-Intensity, Nanosecond Pulsed Electric Fields Induced Dynamic Membrane Electroporation

Authors: Jiahui Song

Abstract:

The application of an electric field can cause poration of cell membranes. This includes the outer plasma membrane as well as the membranes of intracellular organelles. To analyze and predict such electroporation effects, it is necessary first to evaluate the electric fields and the transmembrane voltages. This information can then be used to assess changes in the pore formation energy, which finally yields the pore distributions and their radii based on the Smoluchowski equation. A dynamic pore model is achieved by including a dynamic aspect and a dependence on the pore population density in the pore formation energy equation. These changes make the pore formation energy E(r) self-adjusting in response to pore formation without causing uncontrolled growth and expansion. Using dynamic membrane tension, membrane electroporation in response to a 180 kV/cm trapezoidal pulse with a 10 ns on-time and 1.5 ns rise and fall times is discussed. Poration is predicted to occur at times beyond the peak, at around 9.2 ns. Modeling also yields time-dependent distributions of the membrane pore population after multiple pulses, showing that the pore distribution shifts to larger radii with multiple pulsing. Molecular dynamics (MD) simulations are also carried out for a fixed field of 0.5 V/nm to demonstrate nanopore formation from a microscopic point of view. The resulting pore is predicted to be about 0.9 nm in diameter and somewhat narrower at the central point.

Keywords: high-intensity, nanosecond, dynamics, electroporation

Procedia PDF Downloads 138
932 Interaction Effects of Dietary Ginger, Zingiber Officinale, on Plasma Protein Fractions in Rainbow Trout, Oncorhynchus Mykiss

Authors: Ali Taheri Mirghaed, Sara Ahani, Ashkan Zargar, Seyyed Morteza Hoseini

Abstract:

Disease is a major challenge in intensive aquaculture, causing significant annual losses. Antibiotic therapy is a common way to control bacterial disease in fish, and oxytetracycline (OTC) is the only FDA-approved oral antibiotic in aquaculture. OTC has been found to have negative effects on fish, such as oxidative stress and immunosuppression; it is therefore necessary to mitigate such effects. Medicinal herbs have various benefits for fish, including antioxidant, immunostimulant, and anti-microbial effects. We therefore tested whether dietary ginger meal (GM) interacts with dietary OTC by monitoring plasma protein fractions in rainbow trout. The study was conducted as a 2 × 2 factorial design, including diets containing 0 and 1% GM and 0 and 1.66% OTC (corresponding to 100 mg/kg fish biomass per day). After treating the fish (60 g individual weight) with these feeds for ten days, blood samples were taken from all treatments (n = 3). Plasma was separated by centrifugation, and protein fractions were determined by electrophoresis. The results showed that OTC and GM had interaction effects on total protein (P<0.001), albumin (P<0.001), the alpha-1 fraction (P=0.010), the alpha-2 fraction (P=0.001), the beta-2 fraction (P=0.014), and the gamma fraction (P<0.001). The beta-1 fraction was significantly (P=0.030) affected by dietary GM. GM decreased plasma total protein, albumin, and the beta-2 fraction but increased the beta-1 fraction. OTC significantly decreased total protein (P<0.001), albumin (P=0.001), the alpha-2 fraction (P<0.001), the beta-2 fraction (P=0.004), and the gamma fraction (P<0.001) but had no significant effect on the alpha-1 and beta-1 fractions. Dietary GM suppressed the effects of dietary OTC on plasma total protein and protein fractions. In conclusion, adding 1% GM to the diet can mitigate the negative effects of dietary OTC on plasma proteins; thus, GM may support the health of rainbow trout during the period of medication with OTC.

Keywords: ginger, plasma protein electrophoresis, dietary additive, rainbow trout

Procedia PDF Downloads 66
931 [Keynote Speech]: Determination of Naturally Occurring and Artificial Radionuclide Activity Concentrations in Marine Sediments in Western Marmara, Turkey

Authors: Erol Kam, Z. U. Yümün

Abstract:

Natural and artificial radionuclides cause radioactive contamination of environments; like other non-biodegradable pollutants (heavy metals, etc.), they sink to the sea floor and accumulate in sediments. The habitats of benthic foraminifera living on or in the sediments of the seafloor are especially affected by radioactive pollution in the marine environment; determining the radionuclides is therefore important for pollution analysis. Radioactive pollution accumulates at the lowest level of the food chain and reaches humans at the highest level; the greater the accumulation, the more the environment is endangered. This study used gamma spectrometry to investigate the natural and artificial radionuclide distribution of sediment samples taken from living benthic foraminifera habitats in the Western Marmara Sea. The radionuclides K-40, Cs-137, Ra-226, Mn-54, Zr-95+ and Th-232 were identified in the sediment samples. For this purpose, 18 core samples were taken from depths of about 25-30 meters in the Marmara Sea in 2016. The locations of the core samples were selected specifically from discharge points of domestic and industrial areas, port locations, and so forth, to represent pollution in the study area. Gamma spectrometric analysis was used to determine the radioactive properties of the sediments. The radionuclide activity concentration values in the sediment samples were Cs-137 = 0.9-9.4 Bq/kg, Th-232 = 18.9-86 Bq/kg, Ra-226 = 10-50 Bq/kg, K-40 = 24.4-670 Bq/kg, Mn-54 = 0.71-0.9 Bq/kg and Zr-95+ = 0.18-0.19 Bq/kg. These values were compared with United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) data, and an environmental analysis was carried out. The Ra-226 series, the Th-232 series, and the K-40 radionuclides accumulate naturally and are increasing every day due to anthropogenic pollution.
Although the Ra-226 values obtained in the study areas remained within normal limits according to the UNSCEAR values, the K-40 and Th-232 series values were found to be high at almost all locations.

Keywords: Ra-226, Th-232, K-40, Cs-137, Mn-54, Zr-95+, radionuclides, Western Marmara Sea

Procedia PDF Downloads 397
930 Analysis of a Discrete-time Geo/G/1 Queue Integrated with (s, Q) Inventory Policy at a Service Facility

Authors: Akash Verma, Sujit Kumar Samanta

Abstract:

This study examines a discrete-time Geo/G/1 queueing-inventory system operating under an (s, Q) inventory policy. Customers arrive according to a Bernoulli process, and each customer demands a single item with an arbitrarily distributed service time. The inventory is replenished by an outside supplier, and the replenishment lead time follows a geometric distribution. The facility has a single server and infinite waiting space. Demands arriving during a stock-out period must wait in the designated waiting area, and customers are served on a first-come-first-served basis. Using the embedded Markov chain technique, we determine the joint probability distribution of the number of customers in the system and the number of items in stock at the post-departure epoch via the matrix-analytic approach. We relate the system-length distributions at the post-departure and outside observer's epochs to determine the joint probability distribution at the outside observer's epoch, and we use the probability distributions at random epochs to determine the waiting-time distribution. We obtain the performance measures needed to construct the cost function. The optimum values of the order quantity and reorder point are found numerically for a variety of model parameters.
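The queueing-inventory dynamics described above (Bernoulli arrivals, geometric replenishment lead time, (s, Q) reordering, demand backlogged during stock-outs) can be illustrated with a short discrete-time simulation. This is a sanity-check sketch with made-up parameter values, not the paper's embedded-Markov-chain analysis:

```python
# Discrete-time simulation of an (s, Q) inventory policy with Bernoulli
# demand and geometric lead time; demand during stock-outs is backlogged.
import random

def simulate_sQ(p_arrival=0.3, p_lead=0.2, s=5, Q=20, horizon=10_000, seed=1):
    random.seed(seed)
    stock, backlog, on_order = s + Q, 0, False
    stockout_slots = 0
    for _ in range(horizon):
        # An outstanding order arrives with prob p_lead each slot
        # (geometric lead time).
        if on_order and random.random() < p_lead:
            stock += Q
            on_order = False
        # Bernoulli arrival demands one item; it waits if stock is out.
        if random.random() < p_arrival:
            if stock > 0:
                stock -= 1
            else:
                backlog += 1
        # Serve backlogged demand as stock becomes available.
        while backlog and stock:
            stock -= 1
            backlog -= 1
        # (s, Q) rule: order Q units when stock falls to s or below.
        if stock <= s and not on_order:
            on_order = True
        if stock == 0:
            stockout_slots += 1
    return stock, backlog, stockout_slots / horizon
```

The fraction of stock-out slots returned here is one ingredient of the cost function whose order quantity Q and reorder point s the paper optimizes.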

Keywords: discrete-time queueing inventory model, matrix analytic method, waiting-time analysis, cost optimization

Procedia PDF Downloads 12
929 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network over a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) as a product of factors, each of which is easy to compute. More importantly, it is very useful to give an interpretation of each factor in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is particularly interesting and relevant in the case of large Bayesian networks.
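The factor-by-factor computation of a marginal P(S) can be illustrated on a toy chain network A → B → C: summing out A and then B produces one factor per elimination step, and here each factor is itself a distribution, which is exactly the kind of semantic interpretation the paper seeks. The numbers are assumed for illustration:

```python
# Variable elimination on a chain A -> B -> C with binary variables.
# P(C) = sum_b [ sum_a P(a) P(b|a) ] P(c|b); each bracketed factor is a
# genuine (conditional or marginal) distribution.

def marginal_chain(p_a, p_b_given_a, p_c_given_b):
    # Sum out A: the resulting factor is the marginal P(B).
    p_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}
    # Sum out B: the result is the marginal P(C).
    return {c: sum(p_b[b] * p_c_given_b[b][c] for b in (0, 1)) for c in (0, 1)}
```

In general networks the intermediate factors of variable elimination need not be normalized distributions, which is what makes a principled interpretation of them non-trivial.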

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 503
928 Biophysical Characterization of the Inhibition of cGAS-DNA Sensing by KicGAS, Kaposi's Sarcoma-Associated Herpesvirus Inhibitor of cGAS

Authors: D. Bhowmik, Y. Tian, Q. Yin, F. Zhu

Abstract:

Cyclic GMP-AMP synthase (cGAS) recognises cytoplasmic double-stranded DNA (dsDNA), indicative of bacterial and viral infection as well as leakage of self-DNA through cellular dysfunction and stress, and elicits the host's immune responses. Viruses have developed numerous strategies to antagonize the cGAS-STING pathway. Kaposi's sarcoma-associated herpesvirus (KSHV) is a human DNA tumor virus that is the causative agent of Kaposi’s sarcoma and several other malignancies. To persist in the host, and consequently cause disease, KSHV must overcome the host's innate immune responses, including the cGAS-STING DNA sensing pathway. We previously found that ORF52, or KicGAS (KSHV inhibitor of cGAS), an abundant, basic, gammaherpesvirus-conserved tegument protein, directly inhibits cGAS enzymatic activity. To better understand the mechanism, we performed biochemical and structural characterization of full-length KicGAS and various mutants with regard to DNA binding. We observed that KicGAS is capable of self-association and identified the critical residues involved in the oligomerization process. We also characterized the DNA binding of KicGAS and found that KicGAS cooperatively oligomerizes along the length of double-stranded DNA, and that the highly conserved basic residues in the C-terminal disordered region are crucial for DNA recognition. Deficiency in oligomerization also impairs DNA binding. Thus, DNA binding by KicGAS sequesters DNA and prevents it from being detected by cGAS, thereby inhibiting cGAS activation. KicGAS homologues also inhibit cGAS efficiently, suggesting that inhibition of cGAS is an evolutionarily conserved mechanism among gammaherpesviruses. These results highlight an important viral strategy for evading this innate immune sensor.

Keywords: Kaposi's sarcoma-associated herpesvirus, KSHV, cGAS, DNA binding, inhibition

Procedia PDF Downloads 112
927 Application of Particle Image Velocimetry in the Analysis of Scale Effects in Granular Soil

Authors: Zuhair Kadhim Jahanger, S. Joseph Antony

Abstract:

Systematic studies in the literature dealing with the scale effects of strip footings on different sand packings remain scarce. In this research, the variation of the ultimate bearing capacity and the deformation pattern of soil beneath strip footings of different widths under plane-strain conditions, on the surface of loose, medium-dense and dense sand, has been systematically studied using experimental and non-invasive methods for measuring microscopic deformations. The presented analyses are based on model-scale compression tests analysed using the Particle Image Velocimetry (PIV) technique. Upper-bound analysis in the current study shows that the maximum vertical displacement of the sand under the ultimate load increases with the width of the footing, but at a rate that decreases with the relative density of the sand, whereas the relative vertical displacement in the sand decreases for an increase in the width of the footing. Good agreement is observed among experimental results for different footing widths and relative densities. The experimental analyses have shown that there exists a pronounced scale effect for strip surface footings. The bearing capacity factors rapidly decrease up to footing widths B = 0.25 m, 0.35 m, and 0.65 m for loose, medium-dense and dense sand, respectively; beyond these widths there is no significant further decrease. The deformation modes of the soil, as well as the ultimate bearing capacity values, are affected by the footing widths. The obtained results could be used to improve settlement calculations for foundations interacting with granular soil.

Keywords: DPIV, granular mechanics, scale effect, upper bound analysis

Procedia PDF Downloads 129
926 Simulation of a Renal Phantom Using MAG3

Authors: Ati Moncef

Abstract:

We describe in this paper the results of a dynamic renal phantom study with MAG3. Our phantom consisted of two kidney-shaped components and one liver component. These phantoms were scanned with static and dynamic protocols and compared with clinical data. Under normal conditions, the phantoms can be used to acquire renal images that can be compared with clinical scintigraphy. In conclusion, the renal phantom can also be used in the quality control of renal scintigraphy.

Keywords: renal scintigraphy, MAG3, nuclear medicine, gamma camera

Procedia PDF Downloads 382
925 Study of Natural Radioactive and Radiation Hazard Index of Soil from Sembrong Catchment Area, Johor, Malaysia

Authors: M. I. A. Adziz, J. Sharib Sarip, M. T. Ishak, D. N. A. Tugi

Abstract:

Radiation exposure of humans and the environment is caused by natural radioactive material sources. Given that exposure of people and communities can occur through several pathways, it is necessary to pay attention to increases in naturally radioactive material, particularly in soil. Continuous research on, and monitoring of, the distribution and activity of these natural radionuclides is beneficial as a guide and reference, especially in the event of accidental exposure. Surface soil/sediment samples were taken from several locations identified around the Sembrong catchment area. After 30 days of secular equilibrium with their daughters, the activity concentrations of the naturally occurring radioactive material (NORM) members, i.e. ²²⁶Ra, ²²⁸Ra, ²³⁸U, ²³²Th, and ⁴⁰K, were measured using a high-purity germanium (HPGe) gamma spectrometer. The results showed that the radioactivity concentration of ²³⁸U ranged between 17.13 and 30.13 Bq/kg, ²³²Th between 22.90 and 40.05 Bq/kg, ²²⁶Ra between 19.19 and 32.10 Bq/kg, ²²⁸Ra between 21.08 and 39.11 Bq/kg, and ⁴⁰K between 9.22 and 51.07 Bq/kg, with average values of 20.98 Bq/kg, 27.39 Bq/kg, 23.55 Bq/kg, 26.93 Bq/kg and 23.55 Bq/kg, respectively. The values obtained in this study were low or equivalent compared with those reported in previous studies. The mean values obtained for the four Radiation Hazard Index parameters, namely radium equivalent activity (Raeq), external dose rate (D), annual effective dose and external hazard index (Hₑₓ), were 65.40 Bq/kg, 29.33 nGy/h, 19.18 × 10⁻⁶ Sv and 0.19, respectively. These values are low compared with world average values and globally applied standards. Comparison with previous studies (dry season) also found that the values for all four parameters were low and equivalent. This indicates that the radiation hazard level in the study area is safe for the public.
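The four hazard parameters reported above are conventionally computed from the soil activity concentrations with UNSCEAR-style formulas. The coefficients and the outdoor-occupancy factor below are standard literature values assumed for illustration; the paper does not state its exact expressions, so these will not reproduce its figures exactly:

```python
# Radiation hazard indices from soil activity concentrations (Bq/kg) of
# Ra-226, Th-232 and K-40, using widely used literature coefficients.

def hazard_indices(c_ra, c_th, c_k, outdoor_occupancy=0.2):
    ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k        # Bq/kg (limit: 370)
    d = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k  # absorbed dose rate, nGy/h
    # Annual effective outdoor dose in Sv/y: dose rate x hours per year x
    # occupancy factor x conversion coefficient 0.7 Sv/Gy.
    e_annual = d * 1e-9 * 8760 * outdoor_occupancy * 0.7
    h_ex = c_ra / 370 + c_th / 259 + c_k / 4810     # external hazard index
    return ra_eq, d, e_annual, h_ex
```

Feeding in the study's mean concentrations (Ra-226 = 23.55, Th-232 = 27.39, K-40 = 23.55 Bq/kg) gives values of the same order as those reported, with Hₑₓ well below the safety limit of 1.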

Keywords: catchment area, gamma spectrometry, naturally occurring radioactive material (NORM), soil

Procedia PDF Downloads 78
924 Linear Decoding Applied to V5/MT Neuronal Activity on Past Trials Predicts Current Sensory Choices

Authors: Ben Hadj Hassen Sameh, Gaillard Corentin, Andrew Parker, Kristine Krug

Abstract:

Perceptual decisions about sequences of sensory stimuli often show serial dependence: the behavioural choice on one trial is often affected by the choices on previous trials. We investigated whether neuronal signals in extrastriate visual area V5/MT on preceding trials might influence the choice on the current trial and thereby reveal the neuronal mechanisms of sequential choice effects. We analysed data from 30 single neurons recorded from V5/MT in three rhesus monkeys making sequential choices about the direction of rotation of a three-dimensional cylinder. We focused exclusively on the responses of neurons that showed significant choice-related firing (mean choice probability = 0.73) while the monkey viewed perceptually ambiguous stimuli. Application of a wavelet transform to the choice-related firing revealed differences in the frequency band of neuronal activity that depended on whether the previous trial resulted in a correct choice for an unambiguous stimulus in the neuron’s preferred direction (low alpha, high beta and gamma) or non-preferred direction (high alpha, low beta and gamma). To probe this in further detail, we applied a regularized linear decoder to predict the choice on an ambiguous trial from the neuronal activity of the preceding unambiguous trial. Neuronal activity on a previous trial provided a significant prediction of the current choice (61% correct, 95% CI ~52%), even when the analysis was limited to preceding trials that were correct and rewarded. These findings provide a potential neuronal signature of sequential choice effects in the primate visual cortex.
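A regularized linear decoder of the kind used above can be sketched as L2-penalised logistic regression trained by gradient descent, predicting a binary choice from firing-rate features (e.g. the wavelet-band power of the preceding trial). Synthetic data stand in for the recordings; this is a sketch of the method, not the authors' pipeline:

```python
# L2-regularized logistic regression decoder: binary choice from features.
import math, random

def train_decoder(X, y, lam=0.1, lr=0.1, epochs=200):
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw, gb = [lam * wi for wi in w], 0.0   # start with the ridge gradient
        for xi, yi in zip(X, y):
            # Sigmoid of the linear score gives the choice probability.
            p = 1.0 / (1.0 + math.exp(-(sum(wi * f for wi, f in zip(w, xi)) + b)))
            err = p - yi
            gw = [g + err * f / n for g, f in zip(gw, xi)]
            gb += err / n
        w = [wi - lr * g for wi, g in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * f for wi, f in zip(w, x)) + b > 0 else 0
```

Cross-validated accuracy of such a decoder against the 50% chance level (here, against the reported ~52% confidence bound) is what establishes that past-trial activity carries predictive information.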

Keywords: perception, decision making, attention, decoding, visual system

Procedia PDF Downloads 105
923 Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones

Authors: Li Ji, Kaiwei Chu, Shibo Kuang, Aibing Yu

Abstract:

Hydrocyclones are widely used to classify particles by size in industries such as mineral and chemical processing. The particles to be handled usually have a broad range of size distributions, and sometimes density distributions, which must be properly considered, posing challenges for the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and the Discrete Element Method (DEM) offers a convenient way to model particle size/density distributions. However, its direct application to hydrocyclones is computationally prohibitive because billions of particles are involved. In this work, a CFD-DEM model based on the coarse-grained (CG) concept is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton’s laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the locally averaged Navier-Stokes equations, facilitated with the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
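The bookkeeping behind the CG concept can be sketched as follows; the linear scaling ratio, particle diameter, and density below are illustrative values, not taken from the paper:

```python
import math

def coarse_grain(d_real, rho, f):
    """Map a real particle of diameter d_real (m) and density rho (kg/m^3)
    onto a coarse-grained (CG) parcel with linear scaling ratio f.

    One CG particle of diameter f*d_real represents f**3 real particles,
    so total mass and volume are conserved by construction.
    """
    d_cg = f * d_real
    n_represented = f ** 3
    m_real = rho * math.pi / 6 * d_real ** 3
    m_cg = rho * math.pi / 6 * d_cg ** 3
    return {"d_cg": d_cg, "n_represented": n_represented,
            "mass_ratio": m_cg / m_real}

# A 100-micron particle coarse-grained with f = 10: one CG particle stands
# for 1000 real particles, cutting the DEM particle count 1000-fold.
info = coarse_grain(100e-6, 2650.0, 10)
print(info)
```

This is why CG makes billion-particle systems tractable: the DEM time-stepping cost scales with the number of CG parcels, while the particle-fluid coupling forces are scaled back to represent the full assembly.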

Keywords: computational fluid dynamics, discrete element method, hydrocyclone, multiphase flow

Procedia PDF Downloads 385
922 Distributed Real-Time Range Query Approximation in a Streaming Environment

Authors: Simon Keller, Rainer Mueller

Abstract:

Continuous range queries are a common means of handling mobile clients in high-density areas. Most existing approaches focus on settings in which the range queries for location-based services are more or less static, while the mobile clients within the ranges move. We focus on a category called dynamic real-time range queries (DRRQ), assuming that both the clients requested by the query and the inquirers are mobile. In consequence, the query parameters and the query results change continuously. This leads to two requirements: the ability to deal with an arbitrarily high number of mobile nodes (scalability) and the real-time delivery of range query results. In this paper, we present adaptive quad streaming (AQS), a highly decentralized solution for the requirements of DRRQs. AQS approximates the query results in favor of controlled real-time delivery and guaranteed scalability. While prior works commonly optimize data structures on the involved servers, AQS relies on a highly distributed cell structure that, without additional server-side data structures, automatically adapts to changing client distributions. Instead of the commonly used request-response approach, we apply a lightweight streaming method in which no bidirectional communication and no storage or maintenance of queries are required at all.
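The adaptive cell structure can be illustrated with a minimal quadtree sketch, in which a cell splits into four children once it holds too many clients, so cell granularity follows the client distribution. The class name, capacity threshold, and split rule below are illustrative assumptions; the actual AQS cell-adaptation scheme may differ.

```python
# Minimal adaptive quad-cell sketch: cells subdivide where clients cluster.
class QuadCell:
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size, self.capacity = x, y, size, capacity
        self.clients = []          # (cx, cy) client positions
        self.children = None       # four sub-cells after a split

    def insert(self, cx, cy):
        if self.children is not None:
            self._child_for(cx, cy).insert(cx, cy)
            return
        self.clients.append((cx, cy))
        if len(self.clients) > self.capacity:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [QuadCell(self.x + dx * h, self.y + dy * h, h,
                                  self.capacity)
                         for dy in (0, 1) for dx in (0, 1)]
        for cx, cy in self.clients:        # redistribute to children
            self._child_for(cx, cy).insert(cx, cy)
        self.clients = []

    def _child_for(self, cx, cy):
        h = self.size / 2
        i = (1 if cx >= self.x + h else 0) + (2 if cy >= self.y + h else 0)
        return self.children[i]

    def leaf_count(self):
        if self.children is None:
            return 1
        return sum(c.leaf_count() for c in self.children)

root = QuadCell(0.0, 0.0, 100.0)
for i in range(20):                        # cluster of clients in one corner
    root.insert(1.0 + 0.1 * i, 1.0)
print("leaf cells:", root.leaf_count())
```

The cluster forces repeated splits near the corner while the rest of the domain stays coarse, which is the adaptivity the streaming scheme exploits.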

Keywords: approximation of client distributions, continuous spatial range queries, mobile objects, streaming-based decentralization in spatial mobile environments

Procedia PDF Downloads 120
921 Accentuation Moods of Blaming Utterances in Egyptian Arabic: A Pragmatic Study of Prosodic Focus

Authors: Reda A. H. Mahmoud

Abstract:

This paper investigates the pragmatic meaning of prosodic focus through four accentuation moods of blaming utterances in Egyptian Arabic. Prosodic focus results in various pragmatic meanings when the speaker utters the same blaming expression in different emotional moods: the angry, the mocking, the frustrated, and the informative mood. The main objective of this study is to interpret the meanings of these four accentuation moods in relation to their illocutionary forces and perlocutionary effects, the integrated features of prosodic focus (e.g., tone movement distributions, pitch accents, lengthening of vowels, deaccentuation of certain syllables/words, and tempo), and the consonance between these prosodic features and certain lexico-grammatical components in communicating the intentions of the speaker. The data on blaming utterances were collected via elicitation and pre-recorded material, and the selection of blaming utterances was based on the criteria of lexical and prosodic regularity, to be processed and verified by three computer programs: Praat, Speech Analyzer, and Spectrogram Freeware. A dual pragmatic approach is established to interpret expressive blaming utterances and their lexico-grammatical distributions in terms of intonational focus structure units. The pragmatic component of this approach explains the variable psychological attitudes expressed through blaming and their effects, whereas the analysis of prosodic focus structure is used to describe the intonational contours of blaming utterances and other prosodic features. The study concludes that each accentuation mood has its own prosodic configuration, which influences the listener’s interpretation of the pragmatic meanings of blaming utterances.

Keywords: pragmatics, pragmatic interpretation, prosody, prosodic focus

Procedia PDF Downloads 132
920 Stabilization of γ-Sterilized Food Packaging Materials by Synergistic Mixtures of Food-Contact-Approved Stabilizers

Authors: Sameh A. S. Thabit Alariqi

Abstract:

Food is widely packaged in plastic materials to prevent microbial contamination and spoilage. Ionizing radiation is widely used to sterilize food-packaging materials. Sterilization by γ-radiation causes degradation of the plastic packaging materials, such as embrittlement, stiffening, softening, discoloration, odour generation, and a decrease in molecular weight. Many antioxidants can prevent γ-degradation, but most of them are toxic, and the migration of antioxidants into the packaged contents gives rise to major concerns in the case of food-packaging plastics. In this work, we aimed to utilize synergistic mixtures of stabilizers that are approved for food-contact applications. Ethylene-propylene-diene terpolymer (EPDM) was melt-mixed with hindered amine stabilizers (HAS), phenolic antioxidants, and organo-phosphites (hydroperoxide decomposers). Results were discussed by comparing the stabilizing efficiency of mixtures with and without the phenol system. Among the phenol-containing systems, where we mostly observed discoloration due to the oxidation of the hindered phenol, the combination of secondary HAS, tertiary HAS, organo-phosphite, and hindered phenol exhibited better stabilization efficiency than single or binary additive systems. The mixture of secondary HAS and tertiary HAS showed an antagonistic stabilization effect. However, the combination of organo-phosphite with secondary HAS, tertiary HAS, and phenol antioxidants was found to be synergistic even at higher doses of γ-sterilization. These effects have been explained through the interaction between the stabilizers. After γ-irradiation, the consumption of the oligomeric stabilizer depends significantly on the components of the stabilization mixture. The effect of the organo-phosphite antioxidant on the overall stability has been discussed.

Keywords: ethylene-propylene-diene terpolymer, synergistic mixtures, gamma sterilization, gamma stabilization

Procedia PDF Downloads 416
919 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies

Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker

Abstract:

Numerical simulations of selected subchannel tracer (potassium nitrate) experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing, predicting turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocities, and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) was selected as the primary turbulence model, as it has previously been found reasonably accurate in predicting flows inside rod bundles. As a comparison, the case was also simulated using the standard k-ε turbulence model that is widely used in industry. Despite being an isotropic turbulence model, it has also been used in modeling flow in rod bundles and reproduces lateral velocities after thorough mixing of the coolant fairly well. Both models were solved numerically to obtain the fully developed isothermal turbulent flow in a 30° segment of a 54-rod bundle. The numerical simulations studied the natural mixing of a tracer (passive scalar) to characterize the growth of turbulent diffusion in the injected subchannel and, afterwards, the cross-mixing between adjacent subchannels. The mixing with water was studied numerically by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through mass inflows at the three subchannel faces. A turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used at the inlet. The passive scalar (potassium nitrate) is injected at a mass fraction of 5.536 ppm at subchannel 2 (upstream of the mixing section). Flow exits the domain through a pressure outlet boundary (0 Pa), with a reference pressure of 1 atm.
Simulation results have been extracted at different locations in the mixing zone and the downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane, where the distributions look almost identical and the flow is fully developed. On the other hand, the dimensionless mixing scalar distributions change noticeably in quantitative terms, which is visible in the different scales of the colour bars.

Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis

Procedia PDF Downloads 193
918 Risk Measure from Investment in Finance by Value at Risk

Authors: Mohammed El-Arbi Khalfallah, Mohamed Lakhdar Hadji

Abstract:

Managing and controlling risk is a central research topic in the world of finance. When facing a risky situation, stakeholders need to compare positions and actions, and financial institutions must take particular measures of market risk and credit risk. In this work, we study a model of risk measurement in finance: Value at Risk (VaR), a tool for measuring an entity's risk exposure. We explain the concept of value at risk and its average and tail variants, and describe the three methods for computing it: the parametric method, the historical method, and the Monte Carlo numerical method. Finally, we briefly describe the advantages and disadvantages of the three methods for computing value at risk.
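The three computation methods can be sketched minimally as follows; the return parameters (zero mean, 2% daily volatility, 95% confidence) are illustrative choices, and normally distributed returns are assumed for the parametric and Monte Carlo variants:

```python
import random
from statistics import NormalDist

def parametric_var(mu, sigma, alpha=0.95):
    """Parametric (variance-covariance) VaR: the loss level exceeded with
    probability 1 - alpha, assuming normally distributed returns."""
    z = NormalDist().inv_cdf(alpha)
    return -(mu - z * sigma)

def historical_var(returns, alpha=0.95):
    """Historical VaR: the empirical alpha-quantile of the loss sample."""
    losses = sorted(-r for r in returns)
    return losses[int(alpha * len(losses)) - 1]

def monte_carlo_var(mu, sigma, alpha=0.95, n=100_000, seed=42):
    """Monte Carlo VaR: simulate returns, then read off the loss quantile."""
    rng = random.Random(seed)
    return historical_var([rng.gauss(mu, sigma) for _ in range(n)], alpha)

# Daily returns with zero mean and 2% volatility at 95% confidence:
print(parametric_var(0.0, 0.02))   # ≈ 0.0329
print(monte_carlo_var(0.0, 0.02))  # close to the parametric value
```

Under the shared normality assumption the three estimates agree closely; they diverge precisely when the empirical return distribution has the fat tails that motivate the tail and conditional VaR variants listed in the keywords.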

Keywords: average value at risk, conditional value at risk, tail value at risk, value at risk

Procedia PDF Downloads 418
917 Nondestructive Acoustic Microcharacterisation of Gamma Irradiation Effects on Sodium Oxide Borate Glass X2Na2O-X2B2O3 by Acoustic Signature

Authors: Ibrahim Al-Suraihy, Abdellaziz Doghmane, Zahia Hadjoub

Abstract:

In this work, we discuss the elastic properties of non-irradiated and irradiated sodium borate glasses X2Na2O-X2B2O3 with 0 ≤ x ≤ 27 (mol %), using acoustic microscopy to measure Rayleigh and longitudinal wave velocities at microscopic resolution. The acoustic material signatures were first measured, from which the characteristic surface velocities were determined. Longitudinal and shear ultrasonic velocities were measured in sodium borate glass samples of different compositions before and after irradiation with γ-rays. The results showed that the effect of increasing sodium oxide content on the ultrasonic velocity appeared more clearly than that of γ-radiation. It was found that as the Na2O content increases, longitudinal velocities vary from 3832 to 5636 m/s in the irradiated sample and from 4010 to 5836 m/s in the sample irradiated at the highest dose (dose 10), whereas shear velocities vary from 2223 to 3269 m/s in the irradiated sample and from 2326 m/s at the lowest dose to 3385 m/s at the highest dose (dose 10). The effect of increasing sodium oxide content on ultrasonic velocity was very clear. The increase in velocity was attributed to the gradual increase in the rigidity of the glass, and hence the strengthening of the network, due to the gradual change of boron atoms from three-fold to four-fold coordination with oxygen atoms. The ultrasonic velocity data of the glass samples have been used to determine the elastic moduli. It was found that ultrasonic velocity, elastic modulus, and microhardness increase with increasing sodium oxide content and increasing γ-radiation dose.
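The step from measured velocities to elastic moduli uses the standard isotropic relations. A sketch, with a hypothetical density (the abstract does not quote one) and velocities within the ranges reported above:

```python
# Elastic moduli from ultrasonic velocities via the standard isotropic
# relations. The density value is hypothetical (illustrative only).
def elastic_moduli(v_l, v_s, rho):
    """Shear modulus G, Young's modulus E, and Poisson's ratio nu (SI units)
    for an isotropic solid with longitudinal velocity v_l (m/s), shear
    velocity v_s (m/s), and density rho (kg/m^3)."""
    G = rho * v_s ** 2
    E = rho * v_s ** 2 * (3 * v_l ** 2 - 4 * v_s ** 2) / (v_l ** 2 - v_s ** 2)
    nu = (v_l ** 2 - 2 * v_s ** 2) / (2 * (v_l ** 2 - v_s ** 2))
    return G, E, nu

G, E, nu = elastic_moduli(5000.0, 3000.0, 2500.0)  # rho = 2500 is assumed
print(f"G = {G/1e9:.1f} GPa, E = {E/1e9:.1f} GPa, nu = {nu:.3f}")
```

As a consistency check, the results always satisfy E = 2G(1 + ν), so any measured increase in velocity translates directly into the modulus increase the abstract reports.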

Keywords: mechanical properties, X2Na2O-X2B2O3, acoustic signature, SAW velocities, additives, gamma-radiation dose

Procedia PDF Downloads 383
916 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhoea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives results reasonably similar to those of the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Martingale residuals, which have been used to judge the goodness of fit of the additive model, are also shown to be useful for judging the goodness of fit of the logistic model.
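The point about constrained probabilities can be seen in a tiny logistic fit. The daily indicator data below are simulated stand-ins (not the Salvador data), and the fit uses plain Newton/IRLS iterations, a standard way to estimate a logistic regression:

```python
import numpy as np

# Logistic regression fitted by iteratively reweighted least squares (IRLS)
# on hypothetical daily indicator data (1 = episode on that day). The point
# of the logistic link: fitted probabilities always lie in (0, 1), unlike
# the linear-intensity estimates of an additive model.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([-2.0, 1.0])
y = rng.random(n) < 1 / (1 + np.exp(-(X @ beta_true)))

beta = np.zeros(2)
for _ in range(25):                      # Newton/IRLS iterations
    p = 1 / (1 + np.exp(-(X @ beta)))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta = beta + np.linalg.solve(hess, grad)

p_hat = 1 / (1 + np.exp(-(X @ beta)))
print(beta, p_hat.min(), p_hat.max())
```

Whatever the covariate values, every fitted probability stays strictly inside (0, 1), which is exactly the constraint the additive model cannot guarantee.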

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 613
915 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance, and giving them a scientific interpretation without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable to students while involving them in its solution. Furthermore, the activity should involve a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareability and reusability, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to move from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows for the modification of the interventions and the MEA, in such a way that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
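The kind of sample-based decision computation the MEA asks for can be sketched with the binomial distribution directly. The lot size, sample size, defect rate, and acceptance rule below are illustrative, not the activity's actual figures (the students used RStudio; Python is used here only for consistency with the other sketches in this listing):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_cdf(k, n, p):
    """P(X <= k)."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Inspect n = 20 concrete specimens from a batch where 10% are expected to
# fail the strength test; accept the batch if at most c = 2 specimens fail.
n, p, c = 20, 0.10, 2
print(f"P(accept) = {binom_cdf(c, n, p):.4f}")  # ≈ 0.6769
```

Varying n and c here makes visible exactly the obstacle the study identifies: the sample is one draw from a distribution, and the decision rule's reliability can only be judged through that distribution, not through the sample alone.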

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 100
914 Evaluation of the Gamma-H2AX Expression as a Biomarker of DNA Damage after X-Ray Radiation in Angiography Patients

Authors: Reza Fardid, Aliyeh Alipour

Abstract:

Introduction: Coronary heart disease (CHD) is one of the most common and deadliest diseases. Coronary angiography is an important tool for its diagnosis and treatment. Because angiography involves exposure to ionizing radiation, it can lead to harmful effects. Ionizing radiation induces double-strand breaks in DNA, a potentially life-threatening injury. The purpose of the present study is to investigate the phosphorylation of histone H2AX at the location of the double-strand break in peripheral blood lymphocytes as an indicator of the biological effects of radiation on angiography patients. Materials and Methods: This method is based on measuring the level of phosphorylated histone H2AX (gamma-H2AX, γH2AX) on serine 139 after the formation of a DNA double-strand break. 5 cc of blood was sampled from each of 24 angiography patients before and after irradiation. Blood lymphocytes were isolated, fixed, and stained with specific γH2AX antibodies. Finally, the γH2AX signal, as an indicator of double-strand breaks, was measured with the flow cytometry technique. Results and discussion: In all patients, an increase was observed in the number of DNA double-strand breaks after irradiation (20.15 ± 14.18) compared to before exposure (1.52 ± 0.34). The mean number of DNA double-strand breaks also showed a linear correlation with the dose-area product (DAP). However, although the induction of DNA double-strand breaks is associated with the radiation dose to patients, the effect of individual factors such as radiosensitivity and repair capacity should not be ignored. If, in the future, the DNA damage response can be measured in every angiography patient and used as a biomarker of patient dose, the impact at the public health level could be considerable. Conclusion: Using automated flow cytometry readings, it is possible to detect γH2AX in large numbers of blood cells. Therefore, this technique could play a significant role in monitoring patients.

Keywords: coronary angiography, DSB of DNA, γH2AX, ionizing radiation

Procedia PDF Downloads 164
913 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow

Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen

Abstract:

Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures using empirical and experimental information. Monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, with the action of the SGS model included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source OpenFOAM CFD libraries. The role of the flux limiter schemes, namely Gamma, superBee, van Albada, and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer, and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superBee scheme reproduces velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space, to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
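Three of the limiters studied have simple closed forms as functions of the successive-gradient ratio r; a sketch of the textbook definitions (the Gamma limiter is OpenFOAM-specific and depends on a blending parameter, so it is omitted here, and van Albada is clamped to zero for negative r):

```python
# Classic TVD flux limiter functions psi(r). Diffusiveness differences between
# them are what drive the velocity-profile differences reported above:
# superbee hugs the upper TVD bound (compressive), van Leer and van Albada
# sit lower (more diffusive).
def superbee(r):
    return max(0.0, min(2 * r, 1.0), min(r, 2.0))

def van_leer(r):
    return (r + abs(r)) / (1 + abs(r))

def van_albada(r):
    return max(0.0, (r * r + r) / (r * r + 1))

# All TVD limiters satisfy psi(1) = 1 (second-order accuracy in smooth
# regions) and psi(r) = 0 for r <= 0 (first-order upwinding at extrema).
for f in (superbee, van_leer, van_albada):
    print(f.__name__, f(1.0), f(-1.0), f(4.0))
```

Evaluating at large r shows the spread: superbee reaches the TVD ceiling of 2, while van Leer and van Albada saturate lower, consistent with the extra diffusion the simulations observe for the latter.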

Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics

Procedia PDF Downloads 165
912 On the Quantum Behavior of Nanoparticles: Quantum Theory and Nano-Pharmacology

Authors: Kurudzirayi Robson Musikavanhu

Abstract:

Nanophase particles exhibit quantum behavior by virtue of their small size, being particles of gamma to x-ray wavelength [atomic range]. Such particles exhibit high frequencies, high energy/photon, high penetration power, high ionization power [atomic behavior] and are stable at low energy levels as opposed to bulk phase matter [macro particles] which exhibit higher wavelength [radio wave end] properties, hence lower frequency, lower energy/photon, lower penetration power, lower ionizing power and are less stable at low temperatures. The ‘unique’ behavioral motion of Nano systems will remain a mystery as long as quantum theory remains a mystery, and for pharmacology, pharmacovigilance profiling of Nano systems becomes virtually impossible. Quantum theory is the 4 – 3 – 5 electromagnetic law of life and life motion systems on planet earth. Electromagnetic [wave-particle] properties of all particulate matter changes as mass [bulkiness] changes from one phase to the next [Nano-phase to micro-phase to milli-phase to meter-phase to kilometer phase etc.] and the subsequent electromagnetic effect of one phase particle on bulk matter [different phase] changes from one phase to another. All matter exhibit electromagnetic properties [wave-particle duality] in behavior and the lower the wavelength [and the lesser the bulkiness] the higher the gamma ray end properties exhibited and the higher the wavelength [and the greater the bulkiness], the more the radio-wave end properties are exhibited. Quantum theory is the 4 [moon] – 3 [sun] – 5 [earth] law of the Electromagnetic spectrum [solar system]. 4 + 3 = 7; 4 + 3 + 5 = 12; 4 × 3 × 5 = 60; 4² + 3² = 5²; 4³ + 3³ + 5³ = 6³. Quantum age is overdue.

Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory

Procedia PDF Downloads 419
911 Film Dosimetry – An Asset for Collaboration Between Cancer Radiotherapy Centers at Established Institutions and Those Located in Low- and Middle-Income Countries

Authors: A. Fomujong, P. Mobit, A. Ndlovu, R. Teboh

Abstract:

Purpose: Film’s unique qualities, such as tissue equivalence, high spatial resolution, near energy independence, and comparatively low cost, ought to make it the preferred and most widely used dosimeter in radiotherapy centers in low- and middle-income countries (LMICs). This, however, is not always the case, as other factors that are often taken for granted in advanced radiotherapy centers remain a challenge in LMICs. We explored the unique qualities of film dosimetry that make it possible for one institution to benefit from another’s protocols via collaboration. Methods: For simplicity, two institutions were considered in this work. We used a single batch of films (EBT-XD) and established a calibration protocol, including scan protocols and calibration curves, using the radiotherapy delivery system at Institution A. We then performed patient-specific QA for patients treated on system A (PSQA-A-A). Films from the same batch were then sent to a remote center for PSQA on radiotherapy delivery system B. Irradiations were done at Institution B, and the films were returned to Institution A for processing and analysis (PSQA-B-A). The following points were taken into consideration throughout the process: (a) a reference film was irradiated to a known dose on the same system irradiating the PSQA film; (b) for calibration, we utilized the one-scan protocol and maintained the same scan orientation for the calibration, PSQA, and reference films. Results: Gamma index analysis using a dose threshold of 10% and 3%/2mm criteria showed a gamma passing rate of 99.8% and 100% for PSQA-A-A and PSQA-B-A, respectively. Conclusion: This work demonstrates that established film dosimetry protocols at one institution, e.g., an advanced radiotherapy center, can deliver similar accuracy for irradiations performed at another institution, e.g., a center located in an LMIC, which encourages collaboration between the two for worldwide patient benefit.
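The gamma-index criterion behind the quoted passing rates can be sketched in one dimension. Real PSQA uses 2-D film scans and optimized search, but the acceptance rule is the same: a point passes if gamma ≤ 1 for the chosen dose-difference / distance-to-agreement (DTA) tolerances, e.g. 3%/2 mm with a 10% dose threshold as in the abstract. The profiles below are made-up illustrations.

```python
import math

def gamma_index(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=2.0):
    """Per-point gamma of `meas` against `ref` (equal-length 1-D dose
    profiles, global normalization to the reference maximum)."""
    d_max = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dd = (dm - dr) / (dose_tol * d_max)       # dose-difference term
            dx = (i - j) * spacing_mm / dta_mm        # distance (DTA) term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

def passing_rate(gammas, doses, lower_cut):
    """Percent of points with gamma <= 1, ignoring doses below the cut."""
    kept = [g for g, d in zip(gammas, doses) if d >= lower_cut]
    return 100.0 * sum(g <= 1.0 for g in kept) / len(kept)

ref = [0.0, 0.2, 1.0, 2.0, 1.9, 1.0, 0.2, 0.0]
meas = [0.0, 0.21, 1.02, 1.98, 1.93, 0.98, 0.19, 0.0]
g = gamma_index(ref, meas, spacing_mm=1.0)
print(f"pass rate: {passing_rate(g, meas, 0.1 * max(ref)):.1f}%")
```

Because the criterion blends a relative dose tolerance with a spatial tolerance, it is robust to the small registration shifts inevitable when films are irradiated at one institution and scanned at another.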

Keywords: collaboration, film dosimetry, LMIC, radiotherapy, calibration

Procedia PDF Downloads 53
910 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. The Generalised Pareto distribution is widely used to approximate the tail of the empirical distribution mathematically, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the Exponential distribution, the latter a special case of the Generalised Pareto, are frequently used as alternatives. The Generalised Pareto, despite the practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a given threshold u; references that treat it as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of which statistical models best characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast was used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, in the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for efficient estimation of very-low-occurrence events.
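The POT workflow can be sketched with the simplest of the three candidate tail models, the Exponential special case of the Generalised Pareto, whose maximum-likelihood scale is just the mean excess. The H_s values below are made up for illustration; they are not the Irish buoy data.

```python
import math

def pot_exceedances(data, u):
    """Excesses of the data over the threshold u."""
    return [x - u for x in data if x > u]

def fit_exponential(excesses):
    """MLE scale of an exponential tail model: the mean excess."""
    return sum(excesses) / len(excesses)

def return_level(u, scale, n_obs, n_exc, m):
    """m-observation return level under the fitted exponential tail:
    the level exceeded on average once every m observations."""
    zeta = n_exc / n_obs                  # empirical exceedance probability
    return u + scale * math.log(m * zeta)

hs = [1.2, 2.8, 3.5, 1.9, 4.1, 2.2, 3.9, 5.0, 1.5, 3.2, 2.6, 4.4]  # Hs (m)
u = 3.0
exc = pot_exceedances(hs, u)
scale = fit_exponential(exc)
print(f"{len(exc)} exceedances, scale = {scale:.2f} m, "
      f"100-obs return level = {return_level(u, scale, len(hs), len(exc), 100):.2f} m")
```

Re-running the same pipeline with a Weibull or full Generalised Pareto fit, and sweeping u, reproduces in miniature the sensitivity to the threshold and to the chosen tail model that the study reports.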

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 250