Search results for: queue size distribution at a random epoch
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11461

11251 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be drawn from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than those of a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but also substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
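As an illustration of the efficiency claim above, the following minimal sketch compares ordinary least squares with a simple robust alternative on a regression with heavy-tailed (Cauchy) errors. The Theil-Sen median-of-slopes estimator is used here as a stand-in robust estimator; it is not the authors' modified maximum likelihood method, and all data are synthetic:

```python
import random
import statistics

def ols_slope(x, y):
    """Ordinary least squares slope for a simple regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

def theil_sen_slope(x, y):
    """Robust slope: median of all pairwise slopes."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return statistics.median(slopes)

random.seed(42)
true_slope = 2.0
x = [i / 10 for i in range(30)]          # fixed design with distinct x values
ols_est, ts_est = [], []
for _ in range(200):
    # heavy-tailed (Cauchy) errors: ratio of two independent standard normals
    e = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(30)]
    y = [1.0 + true_slope * xi + ei for xi, ei in zip(x, e)]
    ols_est.append(ols_slope(x, y))
    ts_est.append(theil_sen_slope(x, y))

ols_sd = statistics.stdev(ols_est)   # spread of OLS estimates: large under fat tails
ts_sd = statistics.stdev(ts_est)     # spread of the robust estimates: much smaller
```

Under normal errors the two estimators would be comparable; under fat tails the least square slope estimates scatter far more widely, which is the inefficiency the abstract refers to.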

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 383
11250 Morphological Characterization and Gas Permeation of Commercially Available Alumina Membrane

Authors: Ifeyinwa Orakwe, Ngozi Nwogu, Edward Gobina

Abstract:

This work presents experimental results relating to the structural characterization of a commercially available alumina membrane. A γ-alumina mesoporous tubular membrane has been used. Nitrogen adsorption-desorption, scanning electron microscopy, and gas permeability tests have been carried out on the alumina membrane to characterize its structural features. Scanning electron microscopy (SEM) was used to examine the morphology of the membrane. Pore size, specific surface area, and pore size distribution were determined from the nitrogen adsorption-desorption measurements. Gas permeation tests were carried out on the membrane using a variety of single and mixed gases. The permeabilities at pressures between 0.05 and 1 bar and over the temperature range of 25-200 °C were measured for the single and mixed gases: nitrogen (N₂), helium (He), oxygen (O₂), carbon dioxide (CO₂), 14%CO₂/N₂, 60%CO₂/N₂, 30%CO₂/CH₄, and 21%O₂/N₂. Plots of flow rate versus pressure were obtained. The results showed the effect of temperature on the permeation rate of the various gases. At 0.5 bar, for example, the flow rate for N₂ was relatively constant before decreasing with an increase in temperature, while for O₂ it continuously decreased with an increase in temperature. In the case of 30%CO₂/CH₄ and 14%CO₂/N₂, the flow rate showed an increase and then a decrease with increasing temperature. The effect of temperature on the membrane performance for the various gases is presented, and the influence of the transmembrane pressure drop is discussed in this paper.

Keywords: alumina membrane, nitrogen adsorption-desorption, scanning electron microscopy, gas permeation, temperature

Procedia PDF Downloads 297
11249 Antibacterial Activity and Cytotoxicity of Silver Nanoparticles Synthesized by Moringa oleifera Extract as Reducing Agent

Authors: Temsiri Suwan, Penpicha Wanachantararak, Sakornrat Khongkhunthian, Siriporn Okonogi

Abstract:

In the present study, silver nanoparticles (AgNPs) were synthesized by a green synthesis approach using Moringa oleifera aqueous extract (ME) as a reducing agent and silver nitrate as a precursor. The obtained AgNPs were characterized using UV-Vis spectroscopy (UV-Vis), dynamic light scattering (DLS), scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), and X-ray diffractometry (XRD). The results from UV-Vis revealed that the maximum absorption of the AgNPs was at 430 nm, and the EDX spectrum confirmed the presence of elemental Ag. The results from DLS indicated that the amount of ME played an important role in the particle size, size distribution, and zeta potential of the obtained AgNPs. The smallest size (62.4 ± 1.8 nm) with a narrow distribution (0.18 ± 0.02) of AgNPs was obtained using 1% w/v of ME. This system gave a high negative zeta potential of -36.5 ± 2.8 mV. SEM results indicated that the obtained AgNPs were spherical in shape. Antibacterial activity testing using the dilution method revealed that the minimum inhibitory and minimum bactericidal concentrations of the obtained AgNPs against Streptococcus mutans were 0.025 and 0.1 mg/mL, respectively. Cytotoxicity tests of the AgNPs on adenocarcinomic human alveolar basal epithelial cells (A549) indicated that the particles were toxic to A549 cells. The percentage of cell growth inhibition was 87.5 ± 3.6% at an AgNPs concentration of only 0.1 mg/mL. These results suggest that ME is a promising reducing agent for the green synthesis of AgNPs.

Keywords: antibacterial activity, Moringa oleifera extract, reducing agent, silver nanoparticles

Procedia PDF Downloads 91
11248 Size Reduction of Images Using Constraint Optimization Approach for Machine Communications

Authors: Chee Sun Won

Abstract:

This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches around key-points such as corners and blobs. Based on a saliency map built from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency, the aspect ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after the size reduction than conventional content-aware image size-reduction methods.

Keywords: image compression, image matching, key-point detection and description, machine-to-machine communication

Procedia PDF Downloads 392
11247 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer

Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna

Abstract:

Brachytherapy involves placing a source of radiation near the cancer site and gives a promising prognosis in cervical cancer treatment. The purpose of this study was to evaluate the random variation of treated volumes between fractions in 2D image based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama, Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of the “Multisource HDR plus” software. The difference of treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose, and the geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depends on the type of applicator used in fractionated brachytherapy. The mean of the “difference of treated volume” (DTV) was (X̄₁) -0.48 cm³ for the “evenly activated tandem length” (ET) group and (X̄₂) 11.85 cm³ for the “unevenly activated tandem length” (UET) group. The range of the DTV for the ET group was 35.80 cm³, whereas that for the UET group was 104.80 cm³. A one-sample T test was performed to compare the DTV with the ideal treated volume difference (0.00 cm³); the P value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample T test was performed to compare the ET and UET groups, and the calculated P value was 0.005. Packing was evaluated under three categories: 59.38% used a “convenient packing technique”, 33.33% used a “fairly packing technique”, and 7.29% used a “not convenient packing” in their fractionated brachytherapy treatments.
Random variation of the treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p<0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients were treated with an acceptable packing technique at NCIM, Sri Lanka.
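The one-sample and two-sample T tests described above can be sketched as follows; the DTV values below are hypothetical placeholders chosen for illustration, not the study's data:

```python
import math
import statistics

def one_sample_t(sample, mu0=0.0):
    """t statistic for H0: population mean equals mu0."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)            # sample standard deviation
    return (m - mu0) / (s / math.sqrt(n))

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# hypothetical DTV values in cm^3 (invented for this sketch)
et = [-0.5, 0.3, -1.2, 0.8, -0.4, 0.1, -0.6, 0.2]      # mean close to 0
uet = [10.2, 13.5, 9.8, 14.1, 11.0, 12.6, 10.9, 12.7]  # mean far from 0

t_et = one_sample_t(et)     # small |t|: consistent with the ideal 0.00 cm^3
t_uet = one_sample_t(uet)   # large t: DTV clearly differs from 0
t_diff = welch_t(et, uet)   # compares the two groups directly
```

A large |t| relative to the t critical value (about 2.36 for 7 degrees of freedom at the 5% level) corresponds to the small P values reported for the UET group and the group comparison.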

Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes

Procedia PDF Downloads 175
11246 Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Tomoaki Hashimoto

Abstract:

Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on state and control variables have not yet been taken into account for the design of feedback control systems with random dither quantization. Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial and terminal time. An important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a control method that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization. In other words, this paper provides a method for solving the optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
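The key statistical property of random dither quantization that such feedback designs exploit, namely that adding uniform dither before a mid-tread quantizer makes the quantized output an unbiased reproduction of the input, can be sketched as follows (the mid-tread quantizer and the dither range are modeling assumptions of this sketch):

```python
import math
import random

def dithered_quantize(u, delta, rng):
    """Non-subtractive uniform-dither quantizer with step delta.

    With w ~ Uniform(-delta/2, delta/2), E[q(u)] == u exactly for any u.
    """
    w = rng.uniform(-delta / 2, delta / 2)             # random dither signal
    return delta * math.floor((u + w) / delta + 0.5)   # mid-tread quantizer

rng = random.Random(0)
delta, u = 0.5, 0.3          # step size and a control value between grid points
n = 200_000
mean_q = sum(dithered_quantize(u, delta, rng) for _ in range(n)) / n
# mean_q converges to u = 0.3 even though each output is a multiple of 0.5
```

The unbiasedness is what lets the quantization error be treated as zero-mean noise in the stochastic model predictive control formulation with probabilistic state constraints.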

Keywords: optimal control, stochastic systems, random dither, quantization

Procedia PDF Downloads 421
11245 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
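A minimal sketch of the multiple linear regression component, solving the normal equations directly in pure Python; the predictors and coefficients are invented for illustration, and the random forest part is omitted:

```python
import random

def fit_ols(X, y):
    """Solve (X^T X) beta = X^T y by Gaussian elimination.

    X is a list of rows; the first column should be 1.0 for the intercept.
    """
    k = len(X[0])
    # build the augmented normal-equation system, one row per coefficient
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(len(X)))] for i in range(k)]
    for col in range(k):                     # forward elimination, partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (A[i][k] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, k))) / A[i][i]
    return beta

# hypothetical relation: yield = 3.0 - 0.8*temperature_anomaly + 0.5*rainfall_index
rng = random.Random(1)
X, y = [], []
for _ in range(50):
    t, r = rng.uniform(-2, 2), rng.uniform(0, 3)
    X.append([1.0, t, r])
    y.append(3.0 - 0.8 * t + 0.5 * r)
beta = fit_ols(X, y)   # recovers [3.0, -0.8, 0.5] on this noiseless toy data
```

The negative temperature coefficient in this toy model mirrors the sign of the effect reported in the abstract; with real data one would also inspect residuals for the normality and homoscedasticity conditions mentioned above.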

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 65
11244 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely, volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are extracted from a matrix of world indices by principal component analysis (PCA), and an application to option pricing is presented. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model following the same PCA methodology and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely, volatility clustering, leverage effect, skewness, kurtosis, and regime dependence.
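The PCA step can be sketched on a two-asset toy example using the closed-form eigendecomposition of the 2x2 covariance matrix; the "index returns" and their factor structure below are invented for illustration:

```python
import math
import random

def pca_2d(data):
    """Eigenvalues of the sample covariance matrix of 2-D data.

    For a symmetric 2x2 matrix [[a, b], [b, c]] the eigenvalues are
    (a + c +/- sqrt((a - c)^2 + 4 b^2)) / 2.
    """
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    a = sum((p[0] - mx) ** 2 for p in data) / (n - 1)           # var(x)
    c = sum((p[1] - my) ** 2 for p in data) / (n - 1)           # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)  # cov(x, y)
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    return (a + c + disc) / 2, (a + c - disc) / 2               # lam1 >= lam2

# two correlated "index returns": both load on one common factor
rng = random.Random(11)
data = []
for _ in range(500):
    f = rng.gauss(0, 1.0)                              # common market factor
    data.append((f + 0.1 * rng.gauss(0, 1),
                 f + 0.1 * rng.gauss(0, 1)))           # small idiosyncratic noise
lam1, lam2 = pca_2d(data)
explained = lam1 / (lam1 + lam2)   # share of variance on the first component
```

With a strong common factor, the first principal component absorbs nearly all the variance, which is why a small number of components can drive the GARCHX specification across many indices.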

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 279
11243 Dynamic Variation in Nano-Scale CMOS SRAM Cells Due to LF/RTS Noise and Threshold Voltage

Authors: M. Fadlallah, G. Ghibaudo, C. G. Theodorou

Abstract:

The dynamic variation in memory devices such as Static Random Access Memory (SRAM) can cause errors in read or write operations. In this paper, the effect of low-frequency and random telegraph noise on the dynamic variation of one SRAM cell is detailed. The effect on circuit noise, speed, and processing time is examined using the Supply Read Retention Voltage and the Read Static Noise Margin. New test run methods are also developed. The obtained simulation results show the importance of noise caused by dynamic variation, and the impact of random telegraph noise on SRAM variability is examined by evaluating the statistical distributions of random telegraph noise amplitude in the pull-up and pull-down transistors. The threshold voltage mismatch between neighboring cell transistors due to intrinsic fluctuations typically contributes to larger reductions in static noise margin. Also, the contribution of each SRAM transistor to the total dynamic variation has been identified.
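Random telegraph noise itself is commonly modeled as a two-state Markov process toggling between a trap-occupied and trap-empty state; a minimal sketch (the switching probabilities are illustrative, not extracted from any device):

```python
import random

def simulate_rtn(n_steps, p_capture, p_emit, rng):
    """Two-state random telegraph noise trace.

    State 1 = carrier captured (threshold voltage shifted), state 0 = emitted.
    p_capture / p_emit are per-step transition probabilities.
    """
    state, trace = 0, []
    for _ in range(n_steps):
        if state == 0 and rng.random() < p_capture:
            state = 1          # capture event
        elif state == 1 and rng.random() < p_emit:
            state = 0          # emission event
        trace.append(state)
    return trace

rng = random.Random(7)
p_capture, p_emit = 0.02, 0.08
trace = simulate_rtn(100_000, p_capture, p_emit, rng)
# stationary occupancy of the shifted state: p_capture/(p_capture+p_emit) = 0.2
occupancy = sum(trace) / len(trace)
```

Scaling the 0/1 trace by a per-transistor amplitude drawn from a distribution would give the kind of RTN amplitude statistics evaluated for the pull-up and pull-down transistors above.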

Keywords: low-frequency noise, random telegraph noise, dynamic variation, SRRV

Procedia PDF Downloads 152
11242 A Convergent Interacting Particle Method for Computing KPP Front Speeds in Random Flows

Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang

Abstract:

We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method, implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originating in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as the 2D cellular flow and the 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that the IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.

Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis

Procedia PDF Downloads 20
11241 Numerical Study of Effects of Air Dam on the Flow Field and Pressure Distribution of a Passenger Car

Authors: Min Ye Koo, Ji Ho Ahn, Byung Il You, Gyo Woo Lee

Abstract:

Any part attached to the outside of a vehicle to improve its driving performance by changing the flow characteristics of the surrounding air, or to enhance its external appearance, is called a tuning part. Typical tuning components include the front or rear air dam, also known as a spoiler, the splitter, and the side air dam. In particular, the front air dam blocks the airflow entering the lower portion of the vehicle and increases the amount of air flowing to the side and front of the vehicle body, thereby reducing the lift force that raises the vehicle body and thus improving the steering and driving performance of the vehicle. The purpose of this study was to investigate the role of the front air dam in the flow around a sedan passenger car using computational fluid dynamics. The effects of the flow velocity and the size of the air dam on the trajectories of fluid particles, the static pressure distribution, and the pressure distribution on the body surface were investigated. As a result, it was confirmed that the front air dam improves the flow characteristics and thereby reduces the lift force generated on the vehicle, helping its steering and driving characteristics.

Keywords: numerical study, air dam, flow field, pressure distribution

Procedia PDF Downloads 184
11240 Peak Frequencies in the Collective Membrane Potential of a Hindmarsh-Rose Small-World Neural Network

Authors: Sun Zhe, Ruggero Micheletto

Abstract:

As discussed extensively in many studies, noise in neural networks has an important role in the functioning and time evolution of the system. The mechanism by which noise induces stochastic resonance, enhancing and influencing certain operations, is not clarified, nor is the mechanism of information storage and coding. With the present research we want to study the role of noise, especially focusing on the frequency peaks, in a three-variable Hindmarsh-Rose small-world network. We investigated the response of the network to external noise. We demonstrate that a variation in the signal-to-noise ratio of about 10 dB induces an increase in the membrane potential signal of about 15%, averaged over the whole network. We also considered the integral of the whole membrane potential as a paradigm of internal noise, the noise generated by the brain network itself. We showed that this internal noise is attenuated with the size of the network or with the number of random connections. By means of Fourier analysis we found that it has distinct frequency peaks; moreover, we showed that increasing the size of the network by introducing more neurons reduced the maximum frequencies generated by the network, whereas increasing the number of random connections (determined by the small-world probability p) led to a trend toward higher frequencies. This study may give clues on how networks utilize noise to alter the collective behaviour of the system in their operations.
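The discrete Fourier analysis used to locate frequency peaks can be sketched as follows, here on a synthetic noisy sinusoid rather than a simulated membrane potential:

```python
import cmath
import math
import random

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns |X_k| for k = 0..N//2."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

rng = random.Random(3)
n, f0 = 256, 10                       # 10 oscillation cycles per window
signal = [math.sin(2 * math.pi * f0 * t / n) + 0.3 * rng.gauss(0, 1)
          for t in range(n)]
mags = dft_magnitudes(signal)
peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])  # skip the DC bin
# peak_bin recovers the dominant frequency (bin 10) despite the added noise
```

Applied to the summed membrane potential of the network, the location of such peaks is exactly what shifts when neurons are added or when the small-world rewiring probability p is increased.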

Keywords: neural networks, stochastic processes, small-world networks, discrete Fourier analysis

Procedia PDF Downloads 271
11239 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method

Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro

Abstract:

Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, which is a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of their extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED) have invaded well beyond their home ranges whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method as implemented in BEAST was used to infer the historical effective population size. In order to overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this particular analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran) and Reunion Island. The non-invasive ‘AUS’ species endemic to Australia was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population is different from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth. 
The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and the ones from Iran and Turkmenistan are similar as they both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the latter expansion but had the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED, suggesting IO also faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion lacking in non-invasive species (IO and AUS) during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods. These expansions strongly suggested the potential of some Bemisia species’ genomes to affect their adaptability and invasiveness.

Keywords: whitefly, RADseq, invasive species, SNP, climate change

Procedia PDF Downloads 106
11238 Enzyme Involvement in the Biosynthesis of Selenium Nanoparticles by Geobacillus wiegelii Strain GWE1 Isolated from a Drying Oven

Authors: Daniela N. Correa-Llantén, Sebastián A. Muñoz-Ibacache, Mathilde Maire, Jenny M. Blamey

Abstract:

The biosynthesis of nanoparticles by microorganisms, in contrast to chemical synthesis, is an environmentally friendly process with low energy requirements. In this investigation, we used the microorganism Geobacillus wiegelii, strain GWE1, an aerobic thermophile belonging to the genus Geobacillus, isolated from a drying oven. This microorganism has the ability to reduce selenite, evidenced by the change of color of the culture from colorless to red. The elemental analysis and composition of the particles were verified using transmission electron microscopy and energy-dispersive X-ray analysis. The nanoparticles have a defined spherical shape and a selenium elemental state. Previous experiments showed that the presence of the whole microorganism was not necessary for the reduction of selenite. The results strongly suggested that an intracellular NADPH/NADH-dependent reductase mediates selenium nanoparticle synthesis under aerobic conditions. The enzyme was purified and identified by the mass spectrometry MALDI-TOF/TOF technique. The enzyme is a 1-pyrroline-5-carboxylate dehydrogenase. Histograms of nanoparticle sizes were obtained. The size distribution ranged from 40-160 nm, and 70% of the nanoparticles were smaller than 100 nm. Spectroscopic analysis showed that the nanoparticles are composed of elemental selenium. To analyse the effect of pH on the size and morphology of the nanoparticles, their synthesis was carried out at different pHs (4.0, 5.0, 6.0, 7.0, 8.0). For thermostability studies, samples were incubated at different temperatures (60, 80 and 100 ºC) for 1 h and 3 h. All nanoparticles were smaller than 100 nm at pH 4.0; over 50% were smaller than 100 nm at pH 5.0; and at pH 6.0 and 8.0, over 90% were smaller than 100 nm. At neutral pH (7.0) the nanoparticles reached a size of around 120 nm and only 20% of them were smaller than 100 nm.
Regarding the temperature effect, the nanoparticles did not show a significant difference in size when incubated for 0 to 3 h at 60 ºC. Meanwhile, at 80 °C the nanoparticle suspension lost its homogeneity. A change in size was observed from 0 h of incubation at 80 ºC, with a size range between 40-160 nm and 20% of the particles over 100 nm, whereas after 3 h of incubation the size range changed to 60-180 nm with 50% of the particles over 100 nm. At 100 °C the nanoparticles aggregated, forming nanorod structures. In conclusion, these results indicate that it is possible to modulate the size and shape of biologically synthesized nanoparticles by modulating pH and temperature.

Keywords: genus Geobacillus, NADPH/NADH-dependent reductase, selenium nanoparticles, biosynthesis

Procedia PDF Downloads 290
11237 Data-Driven Simulation Tools for DER and Battery-Rich Power Grids

Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili

Abstract:

Power system analysis has long been a major research topic in the generation and distribution sectors, in both industry and academia. Several load flow and fault analysis scenarios are routinely performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid has traditionally been the passive part of the grid and has been known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes, thanks to rooftop solar PV generation and battery energy storage systems. In addition, diverse consumption profiles are expected from household residents, resulting in a wide set of scenarios. The emergence of electric vehicles will make the environment even more complicated, considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools for the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.

Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools

Procedia PDF Downloads 70
11236 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of its parameters indicate that this distribution may be the appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability-weighted moments (PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates obtained by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
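Since the Wakeby distribution is defined through its quantile function, samples are naturally drawn by inverse transform; a minimal sketch with illustrative (not fitted) parameter values:

```python
import random

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F) for 0 <= F < 1.

    x(F) = xi + (alpha/beta)*(1-(1-F)^beta) - (gamma/delta)*(1-(1-F)^(-delta));
    for delta > 0 the second term produces a heavy upper tail as F -> 1.
    """
    return (xi + (alpha / beta) * (1 - (1 - F) ** beta)
               - (gamma / delta) * (1 - (1 - F) ** (-delta)))

def sample_wakeby(n, params, rng):
    """Inverse-transform sampling: plug U(0,1) draws into the quantile function."""
    return [wakeby_quantile(rng.random(), *params) for _ in range(n)]

# illustrative (xi, alpha, beta, gamma, delta); not fitted to any data
params = (0.0, 3.0, 1.5, 0.8, 0.15)
x_lo = wakeby_quantile(0.10, *params)
x_med = wakeby_quantile(0.50, *params)
x_hi = wakeby_quantile(0.99, *params)    # far above the median: heavy tail

rng = random.Random(5)
sample = sample_wakeby(1000, params, rng)
```

Because only the quantile function is available in closed form, likelihood evaluation requires inverting x(F) numerically for each observation, which is exactly why the maximum likelihood problem discussed above is more demanding than usual.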

Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution

Procedia PDF Downloads 114
11235 Percolation Transition in an Agglomeration of Spherical Particles

Authors: Johannes J. Schneider, Mathias S. Weyland, Peter Eggenberger Hotz, William D. Jamieson, Oliver Castell, Alessia Faggian, Rudolf M. Füchslin

Abstract:

Agglomerations of polydisperse systems of spherical particles are created in computer simulations using a simplified stochastic-hydrodynamic model: particles sink to the bottom of a cylinder, taking into account gravity reduced by the buoyant force, the Stokes friction force, the added mass effect, and random velocity changes. Two types of particles are considered, with one of them being able to create connections to neighboring particles of the same type, thus forming a network within the agglomeration at the bottom of the cylinder. As the fraction of these particles decreases, a percolation transition occurs. The critical regime is determined by investigating the maximum cluster size and the percolation susceptibility.
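The critical-regime analysis described above can be illustrated with a much-reduced model. The sketch below is a two-dimensional site-percolation toy rather than the authors' three-dimensional polydisperse agglomeration: it measures the maximum cluster size (via breadth-first labelling) as the fraction of network-forming sites varies. Grid size, fractions and trial counts are arbitrary choices for illustration.

```python
import random
from collections import deque

def max_cluster_size(grid):
    """Largest connected cluster of occupied sites (4-neighbour connectivity)."""
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] and not seen[i][j]:
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:                      # breadth-first labelling
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and grid[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            queue.append((u, v))
                best = max(best, size)
    return best

def sweep(n=40, fractions=(0.3, 0.5, 0.7), trials=20, seed=1):
    """Mean maximum cluster size versus occupied fraction p."""
    rng = random.Random(seed)
    out = {}
    for p in fractions:
        sizes = []
        for _ in range(trials):
            grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
            sizes.append(max_cluster_size(grid))
        out[p] = sum(sizes) / trials
    return out
```

The percolation susceptibility is typically computed analogously from the second moment of the cluster-size distribution, excluding the largest cluster.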

Keywords: binary system, maximum cluster size, percolation, polydisperse

Procedia PDF Downloads 28
11234 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population

Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez

Abstract:

The human mitochondrial DNA (mtDNA) is a circular molecule of about 17 kbp found within the mitochondria; it contains a smaller segment of about 1,200 bp known as the control region. Knowledge of mtDNA variation within populations has been employed in forensic and molecular anthropology studies. The study was aimed at investigating the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA, i.e., HVS1 and HVS2, and at determining the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty maternally unrelated individuals, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. DNA amplicons were sequenced, and the sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS; GenBank accession number NC_012920.1) using BioEdit software. Results obtained showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. While a total of three indel mutations were recorded for HVS1, there were seven for HVS2. Transition mutations predominated among the nucleotide changes observed in the study. Genetic diversity (GD) values for HVS1 and HVS2 were estimated to be 84.21 and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to the African (L1-L3) and the Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes.

Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability

Procedia PDF Downloads 93
11233 Influence of Aluminium on Grain Refinement in As-Rolled Vanadium-Microalloyed Steels

Authors: Kevin Mark Banks, Dannis Rorisang Nkarapa Maubane, Carel Coetzee

Abstract:

The influence of aluminium content, reheating temperature, and sizing (final) strain on the as-rolled microstructure was systematically investigated in vanadium-microalloyed and C-Mn plate steels. Reheating, followed by hot rolling and air cooling simulations, was performed on steels containing a range of aluminium and nitrogen contents. Natural air cooling profiles, corresponding to 6 and 20 mm thick plates, were applied. The austenite and ferrite/pearlite microstructures were examined using light optical microscopy. Precipitate species and volume fractions were determined on selected specimens. Aluminium contents below 0.08% had no influence on the as-rolled grain size in any of the steels studied. A low-Al V-steel produced the coarsest initial austenite grain size due to AlN dissolution at low temperatures, leading to abnormal grain growth. An Al-free V-N steel had the finest initial microstructure. Although the as-rolled grain size for 20 mm plate was similar in all steels tested, the grain distribution was relatively mixed. The final grain size in 6 mm plate was similar for most compositions; the exception was an as-cast V low-N steel, where the size of the second phase was inversely proportional to the sizing strain. This was attributed to both segregation and a low VN volume fraction available for effective pinning of austenite grain boundaries during cooling. Increasing the sizing strain significantly refined the microstructure in all steels.

Keywords: aluminium, grain size, nitrogen, reheating, sizing strain, steel, vanadium

Procedia PDF Downloads 118
11232 Numerical Simulation of Flexural Strength of Steel Fiber Reinforced High Volume Fly Ash Concrete by Finite Element Analysis

Authors: Mahzabin Afroz, Indubhushan Patnaikuni, Srikanth Venkatesan

Abstract:

It is well known that fly ash can be used in high volume as a partial replacement of cement to obtain beneficial effects on concrete. High volume fly ash (HVFA) concrete is currently emerging as a popular option for strengthening with fiber. Although studies have supported the use of fibers with fly ash, a unified model, along with its incorporation into a finite element software package to estimate the maximum flexural loads, needs to be developed. In this study, nonlinear finite element analysis of steel fiber reinforced high strength HVFA concrete beams under static loading was conducted to investigate their failure modes in terms of ultimate load. First, an experimental investigation of the mechanical properties of high strength HVFA concrete was carried out and validated against a numerical model developed in ANSYS 16.2 with appropriate element size and meshing. To model the fibers within the concrete, a three-dimensional random fiber distribution was simulated using a spherical coordinate system. Three types of high strength HVFA concrete beams were analyzed, reinforced with 0.5, 1 and 1.5% volume fractions of steel fibers with specific mechanical and physical properties. The results reveal that the nonlinear finite element analysis technique with three-dimensional random fiber orientation exhibits fairly good agreement with the experimental results for flexural strength, load deflection and crack propagation mechanism. By utilizing this improved model, it is possible to determine the flexural behavior of different types and proportions of steel fiber reinforced HVFA concrete beams under static load. The originality of this paper thus lies in predicting the flexural properties of steel fiber reinforced high strength HVFA concrete by numerical simulation.
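The three-dimensional random fiber distribution via a spherical coordinate system can be sketched as follows. This is a minimal illustration, not the authors' ANSYS model: it draws uniformly distributed fiber centres inside a beam-shaped box and fiber directions uniform on the unit sphere (φ uniform on [0, 2π), cos θ uniform on [−1, 1], which avoids clustering at the poles). The box dimensions and fiber length are hypothetical placeholders.

```python
import numpy as np

def random_fibers(n_fibers, fiber_length, box=(150.0, 150.0, 500.0), seed=0):
    """Random straight fibers: uniform centres in the box, directions
    uniform on the unit sphere via spherical coordinates."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(0.0, 1.0, (n_fibers, 3)) * np.asarray(box)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_fibers)      # azimuth
    cos_t = rng.uniform(-1.0, 1.0, n_fibers)           # uniform cos(polar angle)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    dirs = np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))
    half = 0.5 * fiber_length * dirs
    return centres - half, centres + half              # fiber end points
```

Sampling cos θ (rather than θ itself) uniformly is what makes the directions isotropic; sampling θ uniformly would over-represent orientations near the poles.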

Keywords: finite element analysis, high volume fly ash, steel fibers, spherical coordinate system

Procedia PDF Downloads 116
11231 Fault Location Detection in Active Distribution System

Authors: R. Rezaeipour, A. R. Mehrabi

Abstract:

The recent increase of DGs and microgrids in distribution systems disturbs the traditional structure of the system. Coordination between protection devices in such a system becomes a concern for network operators. This paper presents a new method for fault location detection in active distribution networks, independent of the fault type or its resistance. The method uses synchronized voltage and current measurements at the interconnection of DG units and is able to adapt to changes in the topology of the system. The method has been tested on a 38-bus distribution system, with very encouraging results.

Keywords: fault location detection, active distribution system, microgrids, network operators

Procedia PDF Downloads 756
11230 The Effect of Diet Intervention for Breast Cancer: A Meta-Analysis

Authors: Bok Yae Chung, Eun Hee Oh

Abstract:

Breast cancer patients require more nutritional interventions than others. However, few studies have attempted to assess overall nutritional status, to reduce body weight and BMI by improving diet, and to improve the prognosis of cancer for breast cancer patients. The purpose of this study was to evaluate the effect of diet intervention in breast cancer patients through meta-analysis. For this purpose, 16 studies were selected using PubMed, ScienceDirect, ProQuest and CINAHL. The meta-analysis was performed using a random-effects model, and the effect size on outcome variables in breast cancer was calculated. The effect size of the diet intervention on the outcome variables was large. To address heterogeneity, moderator analysis was performed using intervention type and intervention duration; none of the moderators showed a significant difference. Diet intervention has significant positive effects on outcome variables in breast cancer. It is therefore suggested that the timing of the intervention should be no more than six months, but that a strategy for sustaining long-term intervention effects should be added if nutritional intervention is to be administered to breast cancer patients in the future.
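A random-effects pooling step of the kind used here can be sketched with the DerSimonian-Laird estimator, the most common choice for such models; the numbers in the test are invented for illustration and are not the study's data.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: pooled effect,
    between-study variance tau^2, and standard error of the pooled effect."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # moment estimator
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se
```

When the study effects are homogeneous, tau² shrinks to zero and the result coincides with a fixed-effect analysis.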

Keywords: breast cancer, diet, meta-analysis, intervention

Procedia PDF Downloads 404
11229 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster management or conflict situations. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, AZs are considered to be the zones where troops or supplies are dropped by parachute, and HLZs the areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows this information, fundamental to post-disaster comprehension of the terrain, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18m x 18m); HLZ 2 – medium size (23m x 23m); HLZ 3 – large size (28m x 28m); AZ (100m x 100m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result of each tree for each object is called a class vote, and the resulting classification is decided by a majority of class votes. In this case, 200 trees were used for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2) or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide high-accuracy data at low cost and low risk, with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that serves as an important decision support tool.
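The class-vote mechanism described above can be sketched in a few lines; this illustrates the majority rule itself, not WEKA's RF implementation. The class labels are taken from the four classes of this study.

```python
from collections import Counter

def forest_decision(tree_votes):
    """Aggregate the class votes of the individual trees: the class with the
    majority of votes wins; the vote share serves as a rough confidence."""
    counts = Counter(tree_votes)
    winner, n = counts.most_common(1)[0]
    return winner, n / len(tree_votes)

# E.g. 120 of 200 trees vote 'HLZ1', 80 vote 'AZ':
# forest_decision(['HLZ1'] * 120 + ['AZ'] * 80)  ->  ('HLZ1', 0.6)
```

With many de-correlated trees, the vote share gives a useful (if informal) measure of how contested each classified area is.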

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 151
11228 The Droplet Generation and Flow in the T-Shape Microchannel with the Side Wall Fluctuation

Authors: Yan Pang, Xiang Wang, Zhaomiao Liu

Abstract:

Droplet microfluidics, in which nanoliter to picoliter droplets act as individual compartments, is common to a diverse array of applications such as analytical chemistry, tissue engineering, microbiology and drug discovery. The droplet generation in a simplified two-dimensional T-shape microchannel, with a main channel width of 50 μm and a side channel width of 25 μm, is simulated to investigate the effects of forced fluctuation of the side wall on droplet generation and flow. Periodic fluctuations are applied to a length of the side wall in the main channel of the T-junction, with the deformation shape of a double-clamped beam under a uniform force, varying with the flow time and with the fluctuation period, form and position. Under most conditions, the fluctuations expand the distribution range of the droplet size but have little effect on the average size, while the shape of the fixed side wall chiefly changes the average droplet size. Droplet sizes show a periodic pattern over time when the fluctuation is forced on the side wall near the T-junction. The droplet emergence frequency is not changed by the fluctuation of the side wall under the same flow rate and geometry conditions. When the fluctuation period is similar to the droplet emergence period, the droplet size is as stable as in the no-fluctuation case.
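The forced wall deformation described above follows the deflection shape of a double-clamped (fixed-fixed) beam under a uniform load, w(x) = q x²(L − x)²/(24EI). A minimal sketch, with an illustrative sinusoidal time modulation and placeholder amplitude and period (not the paper's values):

```python
import numpy as np

def clamped_beam_deflection(x, L, q, EI):
    """Static deflection of a double-clamped beam of span L under a uniform
    load q: w(x) = q * x^2 * (L - x)^2 / (24 * EI); zero at both ends."""
    x = np.asarray(x, dtype=float)
    return q * x ** 2 * (L - x) ** 2 / (24.0 * EI)

def fluctuating_wall(x, t, L, amplitude, period, EI=1.0):
    """Illustrative moving wall: the static beam shape scaled so its midspan
    peak equals `amplitude`, modulated sinusoidally with the given period."""
    q_max = amplitude * 384.0 * EI / L ** 4   # load giving midspan peak = amplitude
    return clamped_beam_deflection(x, L, q_max, EI) * np.sin(2.0 * np.pi * t / period)
```

The maximum static deflection qL⁴/(384EI) occurs at midspan, so the scaling above pins the peak wall displacement to the chosen fluctuation amplitude.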

Keywords: droplet generation, droplet size, flow field, forced fluctuation

Procedia PDF Downloads 260
11227 An Exploration of the Technical and Economic Feasibility of a Stand Alone Solar PV Generated DC Distribution System over AC Distribution System for Use in the Modern as Well as Future Houses of Isolated Areas

Authors: Alpesh Desai, Indrajit Mukhopadhyay

Abstract:

Standalone photovoltaic (PV) systems are designed and sized to supply certain AC and/or DC electrical loads. In computers, consumer electronics and many small appliances, as well as LED lighting, the power actually consumed is DC. A DC system, which requires only voltage control, has many advantages, such as the feasible connection of distributed energy sources and the reduction of conversion losses for DC-based loads. Moreover, using DC power directly reduces the required inverter and solar panel sizes and hence the overall cost of the system. This paper explores the technical and economic feasibility of supplying electrical power to homes using DC voltage mains within the house. Theoretical calculated results are presented to demonstrate the advantage of a DC system over an AC system with PV for sustainable rural/isolated development.
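The conversion-loss argument can be made concrete with a back-of-the-envelope comparison; all efficiency figures below are illustrative assumptions, not measurements from the paper.

```python
def delivered_energy(pv_energy_kwh, dc_load_share, inverter_eff=0.92,
                     rectifier_eff=0.90, dc_dc_eff=0.96):
    """Energy delivered to the loads for an AC bus versus a DC bus.
    Efficiencies are illustrative placeholder values."""
    ac_share = 1.0 - dc_load_share
    # AC bus: everything passes the inverter; DC loads are then rectified back.
    ac_bus = pv_energy_kwh * inverter_eff * (ac_share + dc_load_share * rectifier_eff)
    # DC bus: DC loads only need DC-DC regulation; AC loads still need the inverter.
    dc_bus = pv_energy_kwh * (dc_load_share * dc_dc_eff + ac_share * inverter_eff)
    return ac_bus, dc_bus
```

Under these assumed efficiencies, a house with a 70% DC load share delivers noticeably more of a given PV yield through a DC bus than through an AC bus, which is the core of the techno-economic argument.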

Keywords: distribution system, energy efficiency, off-grid, stand-alone PV system, sustainability, techno-socio-economic

Procedia PDF Downloads 237
11226 Religion and Social Mobility: A Historical Study of Neovaishnavism of Srimanta Shankardeva

Authors: Satyajit Kalita

Abstract:

Assam has gone through various religious transformations since an early period and has witnessed their impact in different eras. One such epoch is that of Srimanta Shankardeva, who is regarded as the greatest religious preacher and social reformer in the history of Assam. It was Shankardeva who brought to Assam the faith of the Vaishnavite movement that prevailed in other parts of India. Before and during his time, the people of Assam were followers of Sakta worship, the worshipping of different gods and goddesses; people worshipped idols and offered sacrifices. Under the faith of Neo-Vaishnavism, Srimanta Shankardeva propagated the Eka-Saran-Naam-Dharma, which spread the splendor of the one and only Lord Vishnu, or Krishna, and abolished the offering of sacrifices. With the help of the Eka-Saran-Naam-Dharma, Srimanta Shankardeva tried to banish the superstitious beliefs and irrational practices of Assamese society. The Neo-Vaishnavite faith developed a democratic outlook which permeates its entire teachings and practices among the Assamese people. His contributions not only laid the foundations of Assamese literature, culture, and social structure but also established the superstructures upon them. It is understood that all the contributions of Srimanta Shankardeva bear his marks distinctively. Religion is said to be the biggest and most influential force in bringing about change in society. In Assam, the essence of Neo-Vaishnavism preached by Shankardeva and the emergence of the Eka-Saran-Naam-Dharma brought a huge change to the region; the religious movement brought about social mobility for all sections of society. This paper is an initiative to look into the organizational structure of the Srimanta Shankardeva Sangha and its maintenance of its ideology and principles without failure. It aims to examine the assimilation of different groups and communities of people under the fold of the Srimanta Shankardeva Sangha.

Keywords: Neo-Vaishnavism, Srimanta Shankardeva, Srimanta Shankardeva Shangha, Eka-Saran-Naam-Dharma

Procedia PDF Downloads 176
11225 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate, order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
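The Fenton-Wilkinson approximation at the heart of the model replaces a sum of independent log-normal leg times with a single log-normal whose first two moments match the exact mean and variance of the sum. A minimal sketch of that moment-matching step (the paper's full place-regression function is not reproduced here):

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent LN(mu_i, sigma_i^2) variables by a
    single log-normal LN(mu_z, sigma_z^2), matching the sum's mean and variance."""
    mean = sum(math.exp(m + s * s / 2.0) for m, s in zip(mus, sigmas))
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * m + s * s)
              for m, s in zip(mus, sigmas))
    sigma2_z = math.log(1.0 + var / mean ** 2)
    mu_z = math.log(mean) - sigma2_z / 2.0
    return mu_z, math.sqrt(sigma2_z)
```

Applied leg by leg, this yields an approximate log-normal distribution for each changeover time, whose CDF then drives the sigmoidal place prediction.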

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 104
11224 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Their findings showed that the probability density function (pdf) is fairly complex, which creates problems in estimating the parameters: the estimators cannot be calculated in closed form. Thus, numerical estimation must be used to find them. In this study, we present a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used to estimate α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. We use the Monte Carlo technique to assess the estimators' performance, with sample sizes of 10, 30 and 100 and the simulations repeated 20 times in each case. We evaluated the effectiveness of the estimators by considering the values of the mean squared errors and the bias. The findings revealed that the EM algorithm came closest to the actual values. Also, the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than those obtained via the EM algorithm.
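The acceptance-rejection data generation mentioned above can be sketched generically: draw a candidate uniformly on a truncated support and accept it with probability proportional to the (possibly unnormalized) target density under a constant envelope. The generalized-gamma-type kernel and its parameters below are illustrative placeholders, not the mixture density of the paper.

```python
import math
import random

def acceptance_rejection(pdf, upper, bound, n, seed=42):
    """Sample n variates from pdf on [0, upper] using a uniform envelope;
    bound must satisfy bound >= max of pdf on the interval."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, upper)          # candidate from the envelope
        if rng.uniform(0.0, bound) <= pdf(x):
            out.append(x)                    # accept with prob. pdf(x)/bound
    return out

# Illustrative target: unnormalized kernel f(x) ~ x^(a-1) * exp(-(x/l)^b)
def gg_kernel(x, a=2.0, b=1.5, l=1.0):
    return x ** (a - 1.0) * math.exp(-(x / l) ** b)
```

The normalizing constant cancels in the accept/reject ratio, which is why an unnormalized kernel suffices; a tighter envelope simply raises the acceptance rate.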

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 200
11223 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations

Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri

Abstract:

Distributed denial of service (DDoS) attack is currently one of the top-rated cyber threats. It exhausts victim server resources such as bandwidth and buffer size, obstructing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relationship to features such as the inter-arrival time, or rate of arrival, of the attacking clients accessing the server. We further analyze the attack model in the context of the exhausted bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to form clusters of similar features. Lastly, the approach applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Such systems not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
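As a much-simplified stand-in for the clustering-plus-correlation pipeline described above, the sketch below applies only the normal-distribution step: time windows whose request counts sit far in the upper tail of the baseline are flagged as a possible flood. The three-standard-deviation threshold is an arbitrary illustrative choice, not a value from the paper.

```python
import statistics

def flag_flood_windows(counts, z_threshold=3.0):
    """Flag time windows whose request counts exceed the baseline mean by
    more than z_threshold standard deviations (normal model)."""
    mu = statistics.mean(counts)
    sd = statistics.pstdev(counts) or 1.0     # guard against zero spread
    return [i for i, c in enumerate(counts) if (c - mu) / sd > z_threshold]
```

In the full scheme, self-organizing-map clusters of arrival-rate features would be analyzed this way per cluster, rather than on raw per-window counts.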

Keywords: application layer attack, bandwidth, buffer size, correlation, DDoS, flooding, intrusion prevention, normal probability distribution

Procedia PDF Downloads 194
11222 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks

Authors: T. Sattarpour, D. Nazarpour

Abstract:

This paper investigates the effect of simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Responsive loads have recently been a substantial center of attention in power system studies, alongside distributed generations (DGs). The existence of responsive loads in ADNs would have an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Seeking voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios were established, with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in an improved voltage profile as well.
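The GA-based placement can be sketched on a toy voltage model. The feeder model below (a linear voltage drop partially compensated by DG injection) is a deliberately crude stand-in for a 33-bus load flow, and all coefficients are invented for illustration; only the GA skeleton (elitist selection plus mutation over bus/size pairs) mirrors the optimization procedure.

```python
import random

N_BUS = 33

def voltages(dgs):
    """Crude radial-feeder model: linear voltage drop along the feeder,
    partially compensated by each DG's injection (toy coefficients)."""
    v = [1.0 - 0.003 * i for i in range(N_BUS)]
    for bus, size in dgs:
        for i in range(N_BUS):
            v[i] += 0.0004 * size * min(i, bus)
    return v

def fitness(dgs):
    """Negative sum of squared per-unit voltage deviations (higher is better)."""
    return -sum((vi - 1.0) ** 2 for vi in voltages(dgs))

def random_solution(rng, n_dg=2):
    return [(rng.randrange(1, N_BUS), rng.uniform(0.0, 2.0)) for _ in range(n_dg)]

def mutate(sol, rng):
    out = list(sol)
    k = rng.randrange(len(out))
    out[k] = (rng.randrange(1, N_BUS), rng.uniform(0.0, 2.0))  # re-site/re-size one DG
    return out

def ga(generations=60, pop_size=30, seed=3):
    rng = random.Random(seed)
    pop = [random_solution(rng) for _ in range(pop_size)]
    initial_best = max(fitness(s) for s in pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                       # elitist selection
        pop = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return initial_best, max(fitness(s) for s in pop)
```

In the real problem, the fitness would come from a power-flow solution of the IEEE 33-bus system, with the DRP and APF constraints folded into the evaluation.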

Keywords: active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF)

Procedia PDF Downloads 270