Search results for: gamma function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5266

4486 A Study of the Alumina Distribution in the Lab-Scale Cell during Aluminum Electrolysis

Authors: Olga Tkacheva, Pavel Arkhipov, Alexey Rudenko, Yurii Zaikov

Abstract:

The aluminum electrolysis process in a conventional cryolite-alumina electrolyte with a cryolite ratio of 2.7 was carried out at an initial temperature of 970 °C and an anode current density of 0.5 A/cm² in a 15 A lab-scale cell in order to study the formation of the side ledge during electrolysis and the alumina distribution between the electrolyte and the side ledge. The alumina contained 35.97% α-phase and 64.03% γ-phase, with a particle size in the range of 10-120 μm. The cryolite ratio and the alumina concentration were determined in the molten electrolyte during electrolysis and in the frozen bath after electrolysis. The side ledge in the electrolysis cell had formed only by the 13th hour of electrolysis. With a slight temperature decrease, a significant increase in the side ledge thickness was observed. The basic components of the side ledge, obtained by XRD phase analysis, were Na3AlF6, Na5Al3F14, Al2O3, and NaF·5CaF2·AlF3. As in industrial cells, an increased alumina concentration was found in the side ledge formed on the cell walls and at the ledge-electrolyte-aluminum three-phase boundary during aluminum electrolysis in the lab cell (FTP No 05.604.21.0239, IN RFMEFI60419X0239).

Keywords: alumina distribution, aluminum electrolyzer, cryolite-alumina electrolyte, side ledge

Procedia PDF Downloads 263
4485 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India

Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn

Abstract:

Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG fuels and to relate it to the indoor emissions, measured using a structured questionnaire, a spirometer, and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function in terms of forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/Forced Vital Capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation, and educational status were considered. Results: Preliminary results showed that the women using biomass fuels (FEV1/FVC = 85% ± 5.13) had lower lung function than the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens. Black carbon levels were higher for the biomass users (46.71 ± 46.59 µg/m³) than for the LPG users (11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG still showed a minimum amount of PM2.5, which might be due to background pollution and cross-ventilation from houses using biomass fuels. Conclusions: There is an urgent need to adopt strategies to improve indoor air quality. Data on the current state of climate-active pollutant emissions from different stove designs are lacking, and the major deficiencies that need to be tackled remain to be identified. Moreover, advancement in research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and address growing climate change and public health concerns.
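The COPD criterion used in the study (post-bronchodilator FEV1/FVC below 70%, per the GOLD definition) can be sketched in a few lines; the spirometry readings below are hypothetical, for illustration only.

```python
def fev1_fvc_ratio(fev1_l, fvc_l):
    """Return the FEV1/FVC ratio as a percentage."""
    return 100.0 * fev1_l / fvc_l

def gold_copd(fev1_l, fvc_l):
    """GOLD criterion used in the study: post-bronchodilator FEV1/FVC < 70%."""
    return fev1_fvc_ratio(fev1_l, fvc_l) < 70.0

# Illustrative (hypothetical) spirometry readings in litres
print(fev1_fvc_ratio(2.1, 2.5))  # 84.0 -> within the normal range
print(gold_copd(1.2, 2.0))       # True: 60% < 70% indicates obstruction
```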

Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter

Procedia PDF Downloads 163
4484 Application of Generalized Autoregressive Score Model to Stock Returns

Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke

Abstract:

The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence in the time-varying parameters, as the location parameter is high and the skewness parameter implies a departure of the scale parameter from normality, with an unconditional parameter of 1.5. The results also revealed persistent leptokurtic behaviour in the stock returns, which implies that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measures associated with the returns.

Keywords: generalized autoregressive score model, South Africa, stock returns, time-varying

Procedia PDF Downloads 488
4483 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single step, semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature (to the best of our knowledge, accurate numerical methods for risk allocation are scarce) but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate with examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to MC simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as is the case for Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method have a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
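The core idea of recovering a distribution semi-analytically from its Fourier transform can be illustrated with a minimal COS-method sketch; here the standard normal density is recovered from its characteristic function. The truncation range and number of terms are illustrative, not those used in the paper.

```python
import cmath, math

def cos_density(phi, x, a=-10.0, b=10.0, N=64):
    """Recover a density from its characteristic function with the COS method.

    phi : characteristic function of the distribution.
    The Fourier-cosine coefficients are read off from phi semi-analytically,
    which is the inversion step the abstract refers to.
    """
    total = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        Fk = (2.0 / (b - a)) * (phi(u) * cmath.exp(-1j * u * a)).real
        w = 0.5 if k == 0 else 1.0            # first coefficient is halved
        total += w * Fk * math.cos(k * math.pi * (x - a) / (b - a))
    return total

phi_normal = lambda u: math.exp(-0.5 * u * u)  # characteristic fn of N(0, 1)
print(round(cos_density(phi_normal, 0.0), 4))  # ~0.3989 = 1/sqrt(2*pi)
```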

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 150
4482 Approximation of Convex Set by Compactly Semidefinite Representable Set

Authors: Anusuya Ghosh, Vishnu Narayanan

Abstract:

The approximation of convex sets by semidefinite representable sets plays an important role in semidefinite programming, especially in modern convex optimization. Optimizing a linear function over an arbitrary convex set is a hard problem, but optimizing it over a semidefinite representable set that approximates the convex set is easy, as there exist numerous efficient algorithms for solving semidefinite programming problems. Our approximation technique is therefore significant in optimization. We develop a technique to approximate any closed convex set K by a compactly semidefinite representable set. Further, we prove that there exists a sequence of compactly semidefinite representable sets that gives progressively tighter approximations of the closed convex set K, and we discuss the convergence of this sequence to K. The recession cone of K and the recession cone of the compactly semidefinite representable set are equal, so we say that the sequence of compactly semidefinite representable sets converges strongly to the closed convex set. Thus, this approximation technique is a very useful development in semidefinite programming.

Keywords: semidefinite programming, semidefinite representable set, compactly semidefinite representable set, approximation

Procedia PDF Downloads 370
4481 Long Non-Coding RNAs Mediated Regulation of Diabetes in Humanized Mouse

Authors: Md. M. Hossain, Regan Roat, Jenica Christopherson, Colette Free, Zhiguang Guo

Abstract:

Long noncoding RNA (lncRNA)-mediated post-transcriptional gene regulation and the associated epigenetic landscapes have been shown to be involved in many human diseases. However, how lncRNAs regulate diabetes by governing islet β-cell function and survival remains to be elucidated. Due to technical and ethical constraints, it is difficult to study their role in β-cell function and survival in humans under in vivo conditions. In this study, humanized mice were developed by transplanting human pancreatic islets under the kidney capsule of NOD.scid mice, and β-cell death leading to a diabetic condition was induced in order to study lncRNA-mediated regulation. For this, human islets from 3 donors (3000 IEQ, purity > 80%) were transplanted under the kidney capsule of STZ-induced diabetic NOD.scid mice. After at least 2 weeks of normoglycemia, lymphocytes from diabetic NOD mice were adoptively transferred, and islet grafts were collected once blood glucose reached > 200 mg/dl. RNA from human donor islets and from islet grafts of humanized mice with either adoptive lymphocyte transfer (ALT) or PBS control (CTL) was ribodepleted; barcoded fragment libraries were constructed and sequenced on the Ion Proton sequencer. lncRNA expression in isolated human islets and in islet grafts from humanized mice with and without induced β-cell death, and its role in human islet function in vitro under glucose challenge, cytokine-mediated inflammation, and induced apoptosis, were investigated. Out of 3155 detected lncRNAs, 299 that are highly expressed in islets were found to be significantly downregulated and 224 upregulated in ALT compared to CTL. Most of these are collocated within 5 kb upstream and 1 kb downstream of 788 up- and 624 down-regulated mRNAs. Genomic Regions Enrichment of Annotations analysis revealed that the deregulated and collocated genes are related to pancreatic endocrine development; insulin synthesis, processing, and secretion; pancreatitis; and diabetes. Many of them, located within enhancer domains for islet-specific gene activity, are associated with the deregulation of known islet/β-cell specific transcription factors and genes that are important for β-cell differentiation, identity, and function. RNA sequencing analysis revealed aberrant lncRNA expression associated with the deregulated mRNAs in β-cell function as well as with molecular pathways related to diabetes. A distinct set of candidate lncRNA isoforms was identified as highly enriched in and specific to human islets; these are deregulated in human islets from donors with different BMIs and with type 2 diabetes. These RNAs show an interesting regulation in cultured human islets under glucose stimulation and with cytokine-induced β-cell death. Aberrant expression of these lncRNAs was also detected in exosomes from the media of islets cultured with cytokines. The results of this study suggest that islet-specific lncRNAs are deregulated in human islets with β-cell death and hence are important in diabetes. These lncRNAs might be important for human β-cell function and survival and thus could serve as biomarkers and novel therapeutic targets for diabetes.

Keywords: β-cell, humanized mouse, pancreatic islet, LncRNAs

Procedia PDF Downloads 152
4480 Interaction between Trapezoidal Hill and Subsurface Cavity under SH Wave Incidence

Authors: Yuanrui Xu, Zailin Yang, Yunqiu Song, Guanxixi Jiang

Abstract:

The influence of local topography on ground motion during earthquakes is an important subject in seismology. In mountainous areas with complex terrain, tunnel construction is often the most effective transportation scheme. In such projects, the local terrain can be simplified into hills of different shapes, and the underground tunnel structure can be regarded as a subsurface cavity. The presence of the subsurface cavity affects the strength of the rock mass and changes its deformation and failure characteristics. Moreover, the scattering of elastic waves by underground structures usually interacts with the local terrain, which significantly influences the surface displacement of the terrain. Therefore, it is of great practical significance in earthquake engineering and seismology to study the surface displacement of local terrains with underground tunnels. In this work, the domain is divided into three regions by the method of region matching. Using fractional-order Bessel and Hankel functions, the complex function method, and the wave function expansion method, the wavefield expressions for SH waves are introduced. With the help of the constitutive relation between the displacement and the stress components, the hoop and radial stresses are then obtained. Next, using the continuity conditions at the region boundaries, the undetermined coefficients in the wave fields are solved for by Fourier series expansion and truncation to a finite number of terms. Finally, the validity of the method is verified, and the surface displacement amplitude is calculated; the surface displacement amplitude curves are discussed in the numerical results. The results show that parameters such as the radius and buried depth of the tunnel, the wave number, and the incident angle of the SH wave have a significant influence on the amplitude of the surface displacement. For the underground tunnel, increasing the buried depth makes the surface displacement amplitude increase at first and then decrease, whereas increasing the radius produces the opposite trend. Increasing the SH wave number enlarges the amplitude of the surface displacement, and changing the incident angle markedly affects the amplitude fluctuation.

Keywords: method of region matching, scattering of SH wave, subsurface cavity, trapezoidal hill

Procedia PDF Downloads 123
4479 Study on Constitutive Model of Particle Filling Material Considering Volume Expansion

Authors: Xu Jinsheng, Tong Xin, Zheng Jian, Zhou Changsheng

Abstract:

The NEPE (nitrate ester plasticized polyether) propellant is a particle-filled material with a relatively high filling fraction. Experimental results show that microcracks, microvoids, and dewetting can cause stress softening of the material. In this paper, a series of mechanical tests combined with a CCD technique was conducted to analyze the evolution of the internal defects of the propellant. The volume expansion function of the particle-filled material was established by measuring longitudinal and transverse strains with an optical deformation measurement system. By analyzing the defects and internal damage of the material, a visco-hyperelastic constitutive model based on free energy theory and incorporating a damage function was proposed. The proposed constitutive model accurately predicts the mechanical properties observed in uniaxial tensile tests and tensile-relaxation tests.

Keywords: dewetting, constitutive model, uniaxial tensile tests, visco-hyperelastic, nonlinear

Procedia PDF Downloads 284
4478 Utility Assessment Model for Wireless Technology in Construction

Authors: Yassir AbdelRazig, Amine Ghanem

Abstract:

Construction projects are information-intensive in nature and involve many interrelated activities. Wireless technologies can be used to improve the accuracy and timeliness of data collected from construction sites and to share them with the appropriate parties. Nonetheless, the construction industry tends to be conservative and hesitant to adopt new technologies. A main concern for owners, contractors, or anyone in charge of a job site is the cost of the technology in question. Wireless technologies are not cheap: there are many expenses to be taken into consideration, and a study should be completed to make sure that the benefits and savings resulting from the use of the technology are worth the expense. This research attempts to assess the effectiveness of using appropriate wireless technologies based on criteria such as performance, reliability, and risk. The assessment is based on a utility function model that breaks the selection issue down into the alternatives' attributes. The attributes are assigned weights and the single attributes are measured; finally, the single-attribute utilities are combined into one aggregate utility index for each alternative.
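The aggregation step described above, weighted single-attribute utilities combined into one index per alternative, can be sketched as follows; the alternatives, scores, and weights are hypothetical.

```python
def aggregate_utility(scores, weights):
    """Combine single-attribute utilities into one aggregate index (weighted sum).

    scores  : attribute -> utility on [0, 1]
    weights : attribute -> importance weight; normalised so the index stays on [0, 1]
    """
    total_w = sum(weights.values())
    return sum(weights[a] * scores[a] for a in scores) / total_w

# Hypothetical wireless alternatives scored on the abstract's criteria
wifi = {"performance": 0.8, "reliability": 0.7, "risk": 0.6}
rfid = {"performance": 0.6, "reliability": 0.9, "risk": 0.9}
weights = {"performance": 0.5, "reliability": 0.3, "risk": 0.2}

print(round(aggregate_utility(wifi, weights), 3))  # 0.73
print(round(aggregate_utility(rfid, weights), 3))  # 0.75 -> ranks first here
```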

Keywords: analytic hierarchy process, decision theory, utility function, wireless technologies

Procedia PDF Downloads 327
4477 Application of the Concept of Comonotonicity in Option Pricing

Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke

Abstract:

Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in high-dimensional settings, such as estimating the Tail Value-at-Risk for large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula by the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper we introduce Comonotonic Monte Carlo (CoMC) simulation, which uses the comonotonic approximation as a control variate. CoMC is broadly applicable, and numerical results show a remarkable speed improvement. We illustrate the method for estimating Tail Value-at-Risk and for pricing basket options and Asian options when the log-returns follow a Black-Scholes model or a variance gamma model.
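The control-variate mechanism can be sketched on a toy problem: estimating E[max(Z1 + Z2 - 1, 0)] for independent standard normals, with the comonotonic counterpart (both components driven by the same normal) as the control, whose mean is known in closed form. This illustrates the idea only; it is not the paper's CoMC implementation.

```python
import math, random

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cv_estimate(xs, cs, control_mean):
    """Control-variate estimator: subtract the control's sampling error."""
    n = len(xs)
    mx, mc = sum(xs) / n, sum(cs) / n
    cov = sum((x - mx) * (c - mc) for x, c in zip(xs, cs)) / n
    var = sum((c - mc) ** 2 for c in cs) / n
    return mx - (cov / var) * (mc - control_mean)

random.seed(1)
xs, cs = [], []
for _ in range(20000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(max(z1 + z2 - 1.0, 0.0))   # target payoff
    cs.append(max(2.0 * z1 - 1.0, 0.0))  # comonotonic counterpart (Z2 := Z1)

# Closed-form mean of the control: E[max(2Z - 1, 0)] = 2*(phi(0.5) - 0.5*(1 - Phi(0.5)))
pdf = math.exp(-0.125) / math.sqrt(2.0 * math.pi)   # phi(0.5)
control_mean = 2.0 * (pdf - 0.5 * (1.0 - norm_cdf(0.5)))
print(round(cv_estimate(xs, cs, control_mean), 3))  # close to the true value ~0.1996
```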

Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing

Procedia PDF Downloads 502
4476 Climate Changes Impact on Artificial Wetlands

Authors: Carla Idely Palencia-Aguilar

Abstract:

Artificial wetlands play an important role in the Guasca Municipality in Colombia, not only because they are used for agroindustry, but also because more than 45 bird species were found there, some of them endemic or migratory. Remote sensing (Aster and Modis images for different time periods) was used to determine changes in the water-covered area of the artificial wetlands. Evapotranspiration was also determined, by three methods: the Surface Energy Balance System (SEBS, Su) algorithm, the Surface Energy Balance (SEBAL, Bastiaanssen) algorithm, and FAO potential evapotranspiration. Empirical equations were developed to relate the Normalized Difference Vegetation Index (NDVI) to net radiation, ambient temperature, and rain, with an R² of 0.83. Daily groundwater level fluctuations were studied as well: data from a piezometer placed next to the wetland were fitted to rain changes (from two weather stations located near the wetlands) by means of multiple regression and time series analysis, and the R² between calculated and measured values was higher than 0.98. Information from nearby weather stations was used for ordinary kriging, together with a Digital Elevation Model (DEM) developed with PCI software. Standard models for describing spatial variation (exponential, spherical, circular, Gaussian, linear) were tested. Ordinary cokriging between the height and rain variables was also tested, to determine whether the accuracy of the interpolation would increase. The results showed no significant differences: the spherical function for the rain samples after ordinary kriging gave a mean of 58.06 and a standard deviation of 18.06, while cokriging (a spherical function for rain, a power function for height, and a spherical function for the cross variable) gave a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the neighbours' lack of awareness and governmental neglect. Water quality was monitored over the years, and different parameters were studied to determine the chemical characteristics of the water; in addition, 600 pesticides were screened by gas and liquid chromatography. The results showed that coliforms, nitrogen, phosphorus, and prochloraz were the most significant contaminants.
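A minimal ordinary-kriging sketch with the spherical variogram model mentioned in the abstract; the rain-gauge coordinates, values, and variogram parameters below are hypothetical, and only the standard library is used.

```python
def spherical(h, nugget=0.0, sill=1.0, rng=300.0):
    """Spherical variogram model, one of the standard models tested."""
    if h == 0.0:
        return 0.0
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(pts, vals, target):
    """Ordinary-kriging estimate: variogram system plus the Lagrange row."""
    n = len(pts)
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    A = [[spherical(dist(pts[i], pts[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [spherical(dist(p, target)) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, vals)), w

# Hypothetical rain gauges (x, y in metres) and observed rainfall (mm)
pts, vals = [(0, 0), (100, 0), (0, 100)], [50.0, 60.0, 55.0]
est, w = ordinary_kriging(pts, vals, (50, 50))
print(round(sum(w), 3))  # weights sum to 1: the unbiasedness constraint
```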

Keywords: DEM, evapotranspiration, geostatistics, NDVI

Procedia PDF Downloads 109
4475 Comparison of the Logistic and the Gompertz Growth Functions Considering a Periodic Perturbation in the Model Parameters

Authors: Avan Al-Saffar, Eun-Jin Kim

Abstract:

Both the logistic growth model and the Gompertz growth model are used to describe growth processes. The two models, driven by perturbations in different cases, are investigated using information theory as a measure of sustainability and variability. Specifically, we study the effect of different oscillatory modulations of the system's parameters on the evolution of the system and its Probability Density Function (PDF). We show that the memory of the initial conditions is maintained for a long time. We offer a Fisher information analysis under positive and/or negative feedback and explain its implications for the sustainability of population dynamics. We also exhibit a finite-amplitude solution due to a purely fluctuating growth rate, whereas periodic fluctuations in the negative feedback can break down the system's self-regulation, leading to an exponentially growing solution. In the cases tested, the Gompertz and logistic systems show similar behaviour in terms of information and sustainability, although they develop differently in time.
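The baseline behaviour of the two growth laws can be sketched with a simple forward-Euler integration; in the unperturbed case both saturate at the carrying capacity K (a periodic modulation of the growth rate r, as studied in the abstract, would enter through the right-hand side). The rates and initial condition are illustrative.

```python
import math

def simulate(rhs, x0=0.01, dt=0.001, steps=20000):
    """Forward-Euler integration of a scalar growth law dx/dt = rhs(x)."""
    x = x0
    for _ in range(steps):
        x += dt * rhs(x)
    return x

r, K = 1.0, 1.0
logistic = lambda x: r * x * (1.0 - x / K)    # dx/dt = r x (1 - x/K)
gompertz = lambda x: r * x * math.log(K / x)  # dx/dt = r x ln(K/x)

# Both laws saturate at the carrying capacity K in the unperturbed case
print(round(simulate(logistic), 3))  # ~1.0
print(round(simulate(gompertz), 3))  # ~1.0
```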

Keywords: dynamical systems, fisher information, probability density function (pdf), sustainability

Procedia PDF Downloads 423
4474 The Application of the Analytic Basis Function Expansion Triangular-z Nodal Method for Neutron Diffusion Calculation

Authors: Kunpeng Wang, Hongchun Wu, Liangzhi Cao, Chuanqi Zhao

Abstract:

The distribution of the homogeneous neutron flux within a node is expanded into a set of analytic basis functions that satisfy the diffusion equation at any point in a triangular-z node for each energy group, and nodes are coupled with each other through both the zero- and first-order partial neutron current moments across all the interfaces of the triangular prism simultaneously. Based on this method, a code, TABFEN, has been developed and applied to solve the neutron diffusion equation in complicated geometries. In addition, a series of derivations shows that the adjoint neutron diffusion equations in matrix form have the same form as the neutron diffusion equation; therefore, they can also be solved by TABFEN, with a low-high scan strategy adopted to improve efficiency. Four benchmark problems are tested by this method to verify its feasibility; the results show good agreement with the references, which demonstrates the efficiency and feasibility of the method.

Keywords: analytic basis function expansion method, arbitrary triangular-z node, adjoint neutron flux, complicated geometry

Procedia PDF Downloads 434
4473 Sub-Chronic Exposure to Dexamethasone Impairs Cognitive Function and Insulin in Prefrontal Cortex of Male Wistar Rats

Authors: A. Alli-Oluwafuyi, A. Amin, S. M. Fii, S. O. Amusa, A. Imam, N. T. Asogwa, W. I. Abdulmajeed, F. Olaseinde, B. V. Owoyele

Abstract:

Chronic stress or prolonged glucocorticoid administration impairs higher cognitive functions in rodents and humans; however, the mechanisms are not fully clear. Insulin and its receptors are expressed in the brain and are involved in cognition, and insulin resistance accompanies Alzheimer's disease and the associated cognitive decline. The goal of this study was to evaluate the effects of sub-chronic administration of a glucocorticoid, dexamethasone (DEX), on behavior and on biochemical changes in the prefrontal cortex (PFC). Male Wistar rats were administered DEX (2, 4, and 8 mg/kg, IP) or saline for seven consecutive days, and behavior was assessed in the following paradigms: Y-maze, elevated plus maze, Morris water maze, and novel object recognition (NOR) tests. Insulin, lactate dehydrogenase (LDH), and superoxide dismutase (SOD) activity were evaluated in homogenates of the prefrontal cortex. DEX-treated rats exhibited impaired prefrontal cortex function, manifesting as reduced locomotion, impaired novel object exploration, and impaired short- and long-term spatial memory compared to normal controls (p < 0.05). These effects were not consistently dose-dependent. The behavioral alterations were accompanied by a decrease in insulin concentration in the PFC of the 4 mg/kg DEX-treated rats compared to control (10 μIU/mg vs. 50 μIU/mg; p < 0.05), but not of the 2 mg/kg rats. Furthermore, we report no significant modification of the brain stress markers LDH and SOD (p > 0.05). These results indicate that prolonged glucocorticoid activation disrupts prefrontal cortex function, which may be related to insulin impairment; these effects may not be attributable to a non-specific elevation of oxidative stress in the brain. Future studies should evaluate the mechanisms of GR-induced insulin loss.

Keywords: dexamethasone, insulin, memory, prefrontal cortex

Procedia PDF Downloads 268
4472 Establishment of the Regression Uncertainty of the Critical Heat Flux Power Correlation for an Advanced Fuel Bundle

Authors: L. Q. Yuan, J. Yang, A. Siddiqui

Abstract:

A new regression uncertainty analysis methodology was applied to determine the uncertainties of the critical heat flux (CHF) power correlation for an advanced 43-element bundle design, which was developed by Canadian Nuclear Laboratories (CNL) to achieve improved economics, resource utilization and energy sustainability. The new methodology is considered more appropriate than the traditional methodology in the assessment of the experimental uncertainty associated with regressions. The methodology was first assessed using both the Monte Carlo Method (MCM) and the Taylor Series Method (TSM) for a simple linear regression model, and then extended successfully to a non-linear CHF power regression model (CHF power as a function of inlet temperature, outlet pressure and mass flow rate). The regression uncertainty assessed by MCM agrees well with that by TSM. An equation to evaluate the CHF power regression uncertainty was developed and expressed as a function of independent variables that determine the CHF power.
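The comparison of the two uncertainty methods can be illustrated on a simple linear regression, the case the abstract says was assessed first: the Taylor Series Method gives a closed-form prediction uncertainty, while the Monte Carlo Method re-perturbs the data, refits, and takes the spread. The data and noise level below are hypothetical.

```python
import math, random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

random.seed(2)
xs = [float(i) for i in range(10)]
sigma = 0.5                              # assumed known measurement noise
ys = [1.0 + 2.0 * x + random.gauss(0, sigma) for x in xs]
a, b = fit_line(xs, ys)
xstar = 4.5                              # point at which to predict

# Taylor Series Method: closed-form prediction uncertainty for a linear fit
n, mx = len(xs), sum(xs) / len(xs)
sxx = sum((x - mx) ** 2 for x in xs)
u_tsm = sigma * math.sqrt(1.0 / n + (xstar - mx) ** 2 / sxx)

# Monte Carlo Method: perturb the data, refit, and take the spread of predictions
preds = []
for _ in range(5000):
    ys_mc = [a + b * x + random.gauss(0, sigma) for x in xs]
    am, bm = fit_line(xs, ys_mc)
    preds.append(am + bm * xstar)
m = sum(preds) / len(preds)
u_mcm = math.sqrt(sum((p - m) ** 2 for p in preds) / (len(preds) - 1))

print(round(u_tsm, 3), round(u_mcm, 3))  # the two uncertainties agree closely
```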

Keywords: CHF experiment, CHF correlation, regression uncertainty, Monte Carlo Method, Taylor Series Method

Procedia PDF Downloads 401
4471 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article substantiates the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping its operation before failure occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that this energy is supplied by stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J. C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating the energy of a deformable body as the sum of two energies: the first term does not change under symmetric compression, while the second corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, and Maxwell repeatedly wrote that it is strain, not stress, that matters; he also noted that the nature of the energy causing the distortion was unknown to him. His article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. Nevertheless, Maxwell's work on elasticity is not cited in theories of strength and fracture, and the authors of these theories and their followers still try to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown region of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions or caused by electron transitions in an atom. Such photons are released during all processes in the universe, including from plants and organs under natural conditions, and their penetrating power in metal is millions of times greater than that of gamma rays; nevertheless, they are non-invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic emission of photons in time and space: such photons are not coherent, and the energy of a solitary photon is insufficient to break the bond between atoms, one stage of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, not the strength, which is not a physical parameter since it cannot be measured or calculated. Continuous monitoring of this ratio is possible thanks to the spontaneous emission of photons by metastable atoms. The article presents calculation examples of fracture energy and photographs of the action of photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 79
4470 Dynamics and Advection in a Vortex Parquet on the Plane

Authors: Filimonova Alexandra

Abstract:

Inviscid incompressible fluid flows are considered. The object of the study is a vortex parquet: a structure consisting of distributed vortex spots of different directions, occupying the entire plane. The main attention is paid to the advection of passive particles in the corresponding velocity field. The dynamics of the vortex structures is considered in a rectangular region under the assumption that periodic boundary conditions are imposed on the stream function. The numerical algorithms are based on the solution of the initial-boundary value problem for the nonstationary Euler equations in terms of vorticity and stream function. For this, the spectral-vortex meshless method is used, based on the approximation of the stream function by a truncated Fourier series and the approximation of the vorticity field by the least-squares method from its values at marker particles. A vortex configuration consisting of four vortex patches is investigated. Results of a numerical study of the dynamics and interaction of the structure are presented. The influence of the patch radius and of the relative position of positively and negatively directed patches on the processes of interaction and mixing is studied. The obtained results correspond to the following possible scenarios: the initial configuration does not change over time; the initial configuration forms a new structure, which is maintained for longer times; or the initial configuration returns to its initial state after a certain period of time. The mass transfer of vorticity by liquid particles on the plane was calculated and analyzed. The results of a numerical analysis of the particle dynamics and trajectories on the entire plane and of the field of local Lyapunov exponents are presented.
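
The advection step described above can be sketched with a toy, steady "parquet" flow (an assumption for illustration; the paper itself uses a spectral-vortex meshless method on the nonstationary Euler equations). With the stream function psi = sin(x)*sin(y), the plane splits into counter-rotating cells, and a passive marker particle can be advected with a Runge-Kutta integrator. Since this toy flow is steady, psi is conserved along the trajectory, which gives a built-in accuracy check:

```python
import math

def velocity(x, y):
    # Stream function psi(x, y) = sin(x)*sin(y): a doubly periodic "parquet"
    # of counter-rotating cells. Velocity: u = dpsi/dy, v = -dpsi/dx.
    return math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y)

def rk4_step(x, y, dt):
    # Classical 4th-order Runge-Kutta step for the passive-particle ODE.
    u1, v1 = velocity(x, y)
    u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
    u3, v3 = velocity(x + 0.5 * dt * u2, y + 0.5 * dt * v2)
    u4, v4 = velocity(x + dt * u3, y + dt * v3)
    return (x + dt * (u1 + 2 * u2 + 2 * u3 + u4) / 6.0,
            y + dt * (v1 + 2 * v2 + 2 * v3 + v4) / 6.0)

# Advect one marker particle; in this steady flow the particle follows a
# streamline, so psi along the trajectory is an invariant we can monitor.
x, y = 0.5, 1.2
psi0 = math.sin(x) * math.sin(y)
for _ in range(2000):
    x, y = rk4_step(x, y, 0.01)
psi_final = math.sin(x) * math.sin(y)
drift = abs(psi_final - psi0)
```

In the unsteady four-patch configurations of the study, psi is no longer an invariant, and the divergence of nearby trajectories is instead quantified by the local Lyapunov exponents mentioned above.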

Keywords: ideal fluid, meshless methods, vortex structures in liquids, vortex parquet

Procedia PDF Downloads 55
4469 Fragility Analysis of Weir Structure Subjected to Flooding Water Damage

Authors: Oh Hyeon Jeon, WooYoung Jung

Abstract:

In this study, seepage analysis was performed for the water-level difference between the upstream and downstream sides of a weir structure, for the safety evaluation of the structure against flooding. The Monte Carlo simulation method was employed, considering the probability distribution of the adjacent ground parameter, i.e., the permeability coefficient of the weir structure. Moreover, the weir structure was modeled using a commercially available finite element program (ABAQUS). Based on this model, the characteristics of water seepage during flooding were determined at each water level, with consideration of the uncertainty in the corresponding permeability coefficient. Subsequently, a fragility function was constructed from the responses of the numerical analysis; these fragility results can be used to determine the weaknesses of a weir structure subjected to a flooding disaster. They can also serve as reference data for comprehensively predicting the probability of failure and the degree of damage of a weir structure.
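
A minimal sketch of the Monte Carlo step, under assumed toy numbers (a lognormal permeability coefficient and a schematic exit-gradient failure criterion; the real study draws its seepage responses from the ABAQUS model), shows how a fragility curve over water levels is assembled:

```python
import math
import random

random.seed(42)  # reproducible Monte Carlo run

def exit_gradient(head_diff, permeability):
    # Schematic seepage response (an assumed toy model, not the finite
    # element one): the exit hydraulic gradient grows with the upstream-
    # downstream head difference, while the permeability coefficient
    # perturbs the effective seepage path length.
    seepage_path = 8.0 * (permeability / 1e-5) ** 0.1  # metres
    return head_diff / seepage_path

def failure_probability(head_diff, n=20000, i_crit=1.0):
    # Monte Carlo over a lognormal permeability coefficient (median 1e-5,
    # log-standard-deviation 0.5 -- assumed values for illustration).
    failures = 0
    for _ in range(n):
        k = math.exp(random.gauss(math.log(1e-5), 0.5))
        if exit_gradient(head_diff, k) > i_crit:
            failures += 1
    return failures / n

# Fragility curve: failure probability versus flood water-level difference.
levels = [4.0, 6.0, 8.0, 10.0, 12.0]
fragility = [failure_probability(h) for h in levels]
```

The resulting list of probabilities, one per water level, is exactly the discrete fragility function the abstract describes; in practice a parametric (e.g. lognormal) curve would then be fitted through these points.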

Keywords: weir structure, seepage, flood disaster fragility, probabilistic risk assessment, Monte-Carlo simulation, permeability coefficient

Procedia PDF Downloads 336
4468 Estimating the Effect of Fluid in Pressing Process

Authors: A. Movaghar, R. A. Mahdavinejad

Abstract:

To analyze the effect of various fluid parameters on material properties such as surface and subsurface defects and/or cracks, the influence of the pressure field on these characteristics can be determined. Stress tensor analysis also identifies the points at which the probability of defect creation is highest. Moreover, from the pressure field, the influence of fluid properties such as viscosity and density on defects created in the material can be analyzed. In this research, the relevant boundary conditions are analyzed first. Then the solution grid and the stencil used are described. After the governing equations for the fluid flow between the notch and the matrix are formulated and discretized according to the imposed boundary conditions, they can be solved. Finally, by varying fluid parameters such as density and viscosity, the influence of these variations on the pressure field can be determined. In this direction, the flowchart and solution algorithm, with their results as vorticity and stream function contours for the two conditions most common in the pressing process, are introduced and discussed.

Keywords: pressing, notch, matrix, stream function, vorticity

Procedia PDF Downloads 277
4467 Stability and Performance Improvement of a Two-Degree-of-Freedom Robot under Interaction Using the Impedance Control

Authors: Seyed Reza Mirdehghan, Mohammad Reza Haeri Yazdi

Abstract:

In this paper, the stability and performance of a two-degree-of-freedom robot under interaction with an unknown environment are investigated. The time for the robot to return to its initial position after an interaction and its initial resistance against the impact must both be reduced; thus, the torque applied by the motors is reduced. Impedance control is an appropriate control method under these conditions. The stability of the robot at the moment of interaction was recast as a robust stability problem. The dynamics of the unknown environment were modeled as a weighting function, and the stability of the robot under interaction with the environment was investigated using robust control concepts. To improve the performance of the system, a force controller was designed that reduces the normalized impedance after interaction. The resistance of the robot was treated as a normalized cost function, with a value of 0.593. The results showed a reduction of the robot's resistance against impact and a reduction of the convergence time to less than one second.
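
The target-impedance idea can be illustrated with a one-dimensional sketch (assumed parameters; the paper treats a two-degree-of-freedom robot with robust stability analysis). An impedance controller makes the end-effector behave like a mass-spring-damper M*x'' + B*x' + K*x = F_ext, so after an impact pulse the deflection stays bounded and the position returns to rest:

```python
def simulate_impedance(M=1.0, B=8.0, K=16.0, dt=1e-3, t_end=3.0):
    # Target impedance M*x'' + B*x' + K*x = F_ext (assumed values; with
    # B = 2*sqrt(K*M) the response to an impact is critically damped, so
    # the end-effector returns to rest without oscillating).
    x, v = 0.0, 0.0
    trajectory = []
    for i in range(int(t_end / dt)):
        t = i * dt
        f_ext = 10.0 if t < 0.1 else 0.0  # brief impact pulse, then release
        a = (f_ext - B * v - K * x) / M
        v += a * dt  # semi-implicit Euler: update velocity, then position
        x += v * dt
        trajectory.append(x)
    return trajectory

traj = simulate_impedance()
peak_deflection = max(abs(x) for x in traj)   # initial resistance to impact
final_offset = abs(traj[-1])                  # distance from rest at t_end
```

Lowering K and B softens the impact response (smaller motor torque) at the cost of a larger deflection and a longer return time, which is the trade-off the force controller in the abstract is tuned to improve.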

Keywords: impedance control, control system, robots, interaction

Procedia PDF Downloads 414
4466 Natural Radioactivity in Foods Consumed in Turkey

Authors: E. Kam, G. Karahan, H. Aslıyuksek, A. Bozkurt

Abstract:

This study aims to determine the natural radioactivity levels in some foodstuffs produced in Turkey. For this purpose, 48 different food samples were collected from different land parcels throughout the country. All samples were analyzed to determine both gross alpha and gross beta radioactivities and the radionuclide concentrations. The gross alpha radioactivity was measured as below 1 Bq kg-1 in most of the samples, some of them being at the detection limit of the counting system. The gross beta radioactivity levels ranged from 1.8 Bq kg-1 to 453 Bq kg-1, with larger levels observed in leguminous seeds and the highest level in haricot beans. The concentrations of natural radionuclides in the foodstuffs were investigated by gamma spectroscopy. High levels of 40K were measured in all the samples, the highest activities again being in leguminous seeds. Low concentrations of 238U and 226Ra were found in some of the samples, comparable to results reported in the literature. Based on the activity concentrations obtained in this study, the average annual effective dose equivalents for the radionuclides 226Ra, 238U, and 40K were calculated as 77.416 µSv y-1, 0.978 µSv y-1, and 140.55 µSv y-1, respectively.
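
The annual effective dose figures quoted above follow the standard ingestion-dose formula E = sum over nuclides of C_i * I * DCF_i (activity concentration times annual intake times ingestion dose coefficient). A sketch with illustrative ICRP-style adult dose coefficients and a hypothetical food sample; none of these numbers are taken from the paper:

```python
# Ingestion dose coefficients in Sv/Bq -- illustrative ICRP-style adult
# values, assumed here for the sketch and not taken from the paper.
DCF = {"K-40": 6.2e-9, "Ra-226": 2.8e-7, "U-238": 4.5e-8}

def annual_dose_usv(activity_bq_per_kg, intake_kg_per_year):
    # E = sum_i C_i * I * DCF_i, converted from Sv/y to microSv/y.
    dose_sv = sum(c * intake_kg_per_year * DCF[nuclide]
                  for nuclide, c in activity_bq_per_kg.items())
    return dose_sv * 1e6

# Hypothetical foodstuff: 150 Bq/kg of K-40, 1 Bq/kg of Ra-226,
# 0.5 Bq/kg of U-238, with 40 kg consumed per year.
dose = annual_dose_usv({"K-40": 150.0, "Ra-226": 1.0, "U-238": 0.5}, 40.0)
```

Note how the much larger dose coefficient of 226Ra lets even a small activity concentration contribute a dose comparable to the abundant 40K, which mirrors the per-nuclide dose pattern reported in the abstract.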

Keywords: foods, radioactivity, gross alpha, gross beta, annual equivalent dose, Turkey

Procedia PDF Downloads 440
4465 Elite Child Athletes Are Our Future: Cardiac Adaptation to Monofin Training in Prepubertal Egyptian Athletes

Authors: Magdy Abouzeid, Nancy Abouzeid, Afaf Salem

Abstract:

Background: An elite child athlete is one who has superior athletic talent. Monofin swimming (with a single-surface swim fin) has already proved to be the most efficient method of swimming for human beings. This is a novel descriptive study examining myocardial function indices in prepubertal monofin swimmers. The aim of the present study was to determine the influence of long-term monofin training (LTMT), 36 weeks, 6 sessions per week, 90 min per session, on myocardial adaptation in elite child monofin athletes. Methods: 14 elite monofin swimmers aged 11.95 ± 1.09 years took part in LTMT. All subjects underwent two-dimensional, M-mode, and Doppler echocardiography before and after training to evaluate cardiac dimensions and function, including septal and posterior wall thickness. Statistical analysis used SPSS: means ± SD, paired t-tests, and percentage improvement. Findings: There were significant differences (p < 0.01) and percentage improvements in all echocardiographic parameters after LTMT. Interventricular septal thickness in diastole and in systole increased by 27.9% and 42.75%, respectively. Left ventricular end-systolic and end-diastolic dimensions increased by 16.81% and 42.7%, respectively. Posterior wall thickness increased markedly in systole, by 283.3%, and in diastole by 51.78%. Left ventricular mass in diastole and in systole increased by 44.8% and 40.1%, respectively. Stroke volume (SV) and resting heart rate (HR) changed significantly, by 25% and 14.7%, respectively. Interpretation: the monofin is a unique tool for creating propulsion and overcoming resistance. Further research is needed to determine the effects of monofin training on the right ventricle in child athletes.

Keywords: prepubertal, monofin training, athlete's heart, elite child athlete, echocardiography

Procedia PDF Downloads 327
4464 Recursive Doubly Complementary Filter Design Using Particle Swarm Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

This paper deals with the optimal design of recursive doubly complementary (DC) digital filters using a metaheuristic optimization technique. Based on the theory of DC digital filters built from two recursive digital all-pass filters (DAFs), the design problem is formulated so that the objective function is a weighted sum of the phase response errors of the designed DAFs. To ensure the stability of the recursive DC filters during the design process, necessary constraints are imposed on the phases of the recursive DAFs. Through frequency sampling and a weighted least squares approach, the optimization of the objective function is solved by a population-based stochastic optimization approach. The resulting DC digital filters possess satisfactory frequency responses. Simulation results are presented for illustration and comparison.
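
The population-based optimization step can be sketched with a bare-bones particle swarm optimizer. The objective below is a smooth stand-in with a known minimum so that convergence is checkable; it is not the actual weighted phase-error criterion of the DAF design:

```python
import random

random.seed(1)  # reproducible run

def objective(p):
    # Stand-in for the weighted sum of squared phase-response errors: a
    # smooth surrogate whose known minimum at (0.5, -0.3) makes the
    # swarm's convergence easy to verify.
    return (p[0] - 0.5) ** 2 + 2.0 * (p[1] + 0.3) ** 2

def pso(obj, dim=2, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Bare-bones particle swarm optimization: inertia plus cognitive
    # (personal best) and social (global best) attraction terms.
    pos = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(objective)
```

In the filter-design setting, each particle would encode the DAF coefficients, the objective would sum weighted squared phase errors over the sampled frequencies, and the stability constraints on the DAF phases would be enforced by penalizing or rejecting infeasible particles.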

Keywords: doubly complementary, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 670
4463 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method to obtain the approximate posteriors of the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution over the various parameters and the latent function by using variational Bayesian approximations and importance sampling, and then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.

Keywords: multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 427
4462 Spatial Planning of Community Green Infrastructure Based on Public Health Considerations: A Case Study of Kunhou Community

Authors: Shengdan Yang

Abstract:

The outbreak of the COVID-19 pandemic in early 2020 has prompted a re-examination of public health issues. The configuration of green space is an important measure of community health quality. By combining quantitative and qualitative methods, the structure and function of community green space can be better evaluated. This study selects Wuhan Kunhou Community as the site and analyzes the daily health service function of the community's green infrastructure. Through GIS-based spatial analysis, case study, and field investigation, this study evaluates the accessibility of green infrastructure and discusses the ideal green space form based on health indicators. The findings show that Kunhou Community lacks access to green infrastructure and public space for daily activities. The research builds a bridge between public health indicators and community space planning and proposes design suggestions for green infrastructure planning.

Keywords: accessibility, community health, GIS, green infrastructure

Procedia PDF Downloads 94
4461 Base Change for Fisher Metrics: Case of the q-Gaussian Inverse Distribution

Authors: Gabriel I. Loaiza Ossa, Carlos A. Cadavid Moreno, Juan C. Arango Parra

Abstract:

It is known that the Riemannian manifold determined by the family of inverse Gaussian distributions endowed with the Fisher metric has negative constant curvature κ= -1/2, as does the family of usual Gaussian distributions. In the present paper, firstly, we arrive at this result by following a different path, much simpler than the previous ones. We first put the family in exponential form, thus endowing the family with a new set of parameters, or coordinates, θ₁, θ₂; then we determine the matrix of the Fisher metric in terms of these parameters; and finally we compute this matrix in the original parameters. Secondly, we define the inverse q-Gaussian distribution family (q < 3) as the family obtained by replacing the usual exponential function with the Tsallis q-exponential function in the expression for the inverse Gaussian distribution and observe that it supports two possible geometries, the Fisher and the q-Fisher geometry. And finally, we apply our strategy to obtain results about the Fisher and q-Fisher geometry of the inverse q-Gaussian distribution family, similar to the ones obtained in the case of the inverse Gaussian distribution family.
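
The "exponential form" step of the strategy above can be sketched as follows: once the inverse Gaussian density is written in exponential-family form, the Fisher metric is the Hessian of the log-partition function, a textbook identity. The reparametrization below is a reconstruction worked out from the usual inverse Gaussian density, not copied from the paper:

```latex
% Inverse Gaussian density rewritten in exponential-family form:
p(x;\mu,\lambda)
  = \sqrt{\frac{\lambda}{2\pi x^{3}}}\,
    \exp\!\Big(-\frac{\lambda (x-\mu)^{2}}{2\mu^{2} x}\Big)
  = \frac{x^{-3/2}}{\sqrt{2\pi}}\,
    \exp\!\big(\theta_{1} x + \theta_{2} x^{-1} - \psi(\theta)\big),
\qquad
\theta_{1} = -\frac{\lambda}{2\mu^{2}},\quad
\theta_{2} = -\frac{\lambda}{2},
\\[4pt]
% Log-partition function and the Fisher metric as its Hessian:
\psi(\theta) = -2\sqrt{\theta_{1}\theta_{2}} - \tfrac{1}{2}\ln(-2\theta_{2}),
\qquad
g_{ij}(\theta) = \frac{\partial^{2}\psi}{\partial\theta_{i}\,\partial\theta_{j}}.
```

Computing this Hessian in the (theta_1, theta_2) coordinates and then transforming back to (mu, lambda) is the "much simpler path" to the constant-curvature result that the abstract describes.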

Keywords: base change, information geometry, inverse Gaussian distribution, inverse q-Gaussian distribution, statistical manifolds

Procedia PDF Downloads 233
4460 A Study on the Iterative Scheme for Stratified Shields Gamma Ray Buildup Factors Using Layer-Splitting Technique in Double-Layer Shields

Authors: Sari F. Alkhatib, Chang Je Park, Gyuhong Roh

Abstract:

The iterative scheme used to treat buildup factors for stratified shields is investigated here using the layer-splitting technique. A simple formalism for the scheme, based on Kalos' formula, is introduced, and the testing technique is implemented on this basis. The second layer in a double-layer shield was split into two equivalent layers, and the scheme (with the suggested formalism) was applied to the new "three-layer" shield configuration. The results of this manipulation for water-lead and water-iron shield combinations are presented here for 1 MeV photons. It was found that splitting the second layer introduces some deviation in the overall buildup factor value. This expected deviation appeared to be higher in the case of a low-Z layer followed by a high-Z layer. However, the overall performance of the iterative scheme showed great consistency and strong coherence even with the introduced changes. The layer-splitting testing technique introduced here is capable of testing the iterative scheme with a wide range of formalisms.

Keywords: buildup factor, iterative scheme, stratified shields, layer-splitting technique

Procedia PDF Downloads 404
4459 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet

Authors: Azene Zenebe

Abstract:

Deep learning is a subset of machine learning that incorporates techniques for the construction of artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to solving advanced problems and performing data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review will be presented along with the results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet.
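
The tree-based versus function-based distinction can be illustrated with a minimal sketch on synthetic data (an assumption for illustration; the study itself uses a secondary darknet dataset): a depth-1 decision tree (stump) against a logistic regression trained by stochastic gradient descent:

```python
import math
import random

random.seed(0)  # reproducible synthetic data

# Synthetic stand-in for the darknet feature data: feature 0 is
# informative (class means at -1 and +1), feature 1 is pure noise.
samples = []
for _ in range(200):
    samples.append(([random.gauss(-1.0, 1.0), random.gauss(0.0, 1.0)], 0))
    samples.append(([random.gauss(1.0, 1.0), random.gauss(0.0, 1.0)], 1))

def stump_fit(samples):
    # Tree-based learner: a depth-1 decision tree (stump) on feature 0,
    # choosing the threshold that maximizes training accuracy.
    best_thr, best_acc = 0.0, 0.0
    for thr in (f[0] for f, _ in samples):
        acc = sum(int(f[0] > thr) == y for f, y in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc

def logistic_fit(samples, lr=0.1, epochs=100):
    # Function-based learner: logistic regression trained with SGD.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for f, y in samples:
            z = max(-30.0, min(30.0, w[0] * f[0] + w[1] * f[1] + b))
            g = 1.0 / (1.0 + math.exp(-z)) - y  # gradient of the log-loss
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    acc = sum(int(w[0] * f[0] + w[1] * f[1] + b > 0) == y
              for f, y in samples) / len(samples)
    return (w, b), acc

_, tree_acc = stump_fit(samples)
_, logit_acc = logistic_fit(samples)
```

On real darknet data both families would of course be far deeper (random forests or gradient boosting versus multilayer networks), but the split between axis-aligned threshold rules and learned smooth decision functions is the same one the abstract's comparison is built on.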

Keywords: deep-learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science

Procedia PDF Downloads 139
4458 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real datasets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 75
4457 Manipulating the PAAR Proteins of Acinetobacter baumannii

Authors: Irene Alevizos, Jessica Lewis, Marina Harper, John Boyce

Abstract:

Acinetobacter baumannii causes a range of severe nosocomial infections, and many strains are multi-drug resistant. A. baumannii possesses survival mechanisms that allow it to thrive in competitive polymicrobial environments, including a Type VI Secretion System (T6SS) that injects effector proteins into other bacteria to give a competitive advantage. The effects of T6SS firing are broad and depend entirely on the effector that is delivered. Effects can include toxicity against prokaryotic or eukaryotic cells and the acquisition of essential nutrients. The T6SS of some species can deliver 'specialised effectors' that are fused directly to T6SS components, such as PAAR proteins. PAAR proteins are predicted to form the piercing tip of the T6SS and are essential for T6SS function. Although no specialised effectors have been identified in A. baumannii, many strains encode multiple PAAR proteins. Analysis of PAAR proteins across the species identified 12 families of PAAR proteins with distinct C-terminal extensions. A. baumannii AB307-0294 encodes two PAAR proteins, one of which has a C-terminal extension. Mutation of one or both of the PAAR-encoding genes in this strain showed that expression of either PAAR protein was sufficient for T6SS function. We employed a heterologous expression approach and determined that PAAR proteins from different A. baumannii strains, as well as the closely related species A. baylyi, could complement the A. baumannii ∆paar mutant and restore T6SS function. Furthermore, we showed that PAAR fusions could be used to deliver artificially cloned protein fragments by generating Histidine- and Streptavidin-tagged PAAR specialised effectors, which restored T6SS activity. This provides evidence that the fusion of protein fragments onto PAAR proteins in A. baumannii is compatible with a functional T6SS. Successful delivery by this mechanism extends the scope of what the T6SS can deliver, including user-designed proteins.

Keywords: A. baumannii, effectors, PAAR, T6SS

Procedia PDF Downloads 83