Search results for: probability distribution function

9938 End-to-End Performance of MPPM in Multihop MIMO-FSO System Over Dependent GG Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a decode and forward (DF) multihop free space optical (FSO) scheme deploying a multiple input multiple output (MIMO) configuration under the gamma-gamma (GG) statistical distribution, adopting M-ary pulse position modulation (MPPM) coding, is investigated. We derive both exact and approximate values of the symbol-error rate (SER). A closed-form formula for the probability density function (PDF) of the designed system is expressed. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is mitigated; hence the transmitted signal quality is improved.
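For reference, the single-link gamma-gamma irradiance PDF underlying such closed-form derivations is standard; a minimal sketch in Python (the paper's multihop MIMO closed form is not reproduced here, and the turbulence parameters α, β must be supplied):

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Standard gamma-gamma PDF of the received irradiance I > 0;
    kv is the modified Bessel function of the second kind."""
    coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return coef * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))
```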

Keywords: free space optical, gamma gamma channel, radio frequency, decode and forward, multiple-input multiple-output, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 220
9937 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs naturally in environmental and agricultural experiments in which the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm our theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.
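A minimal sketch of such a coverage check, assuming a simple Wald-type interval that exploits the known coefficient of variation τ = σ/μ (the intervals studied in the paper may differ):

```python
import numpy as np

def coverage(mu=10.0, tau=0.1, n=30, n_sim=100_000, seed=1):
    """Monte Carlo coverage of the interval xbar +/- z * tau * xbar / sqrt(n)."""
    rng = np.random.default_rng(seed)
    z = 1.959963984540054  # 97.5% standard normal quantile
    x = rng.normal(mu, tau * mu, size=(n_sim, n))
    xbar = x.mean(axis=1)
    half = z * tau * xbar / np.sqrt(n)
    return np.mean((xbar - half <= mu) & (mu <= xbar + half))

print(coverage())  # should be close to the nominal 0.95
```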

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation

Procedia PDF Downloads 357
9936 Efficient Design of Distribution Logistics by Using a Model-Based Decision Support System

Authors: J. Becker, R. Arnold

Abstract:

The design of distribution logistics has a decisive impact on a company's logistics costs and performance; hence, such solutions make an essential contribution to corporate success. This article describes a decision support system for analyzing the potential of distribution logistics in terms of logistics costs and performance. In contrast to previous procedures of business process re-engineering (BPR), this method maps distribution logistics holistically under variable distribution structures. Combined with qualitative measures, the decision support system contributes to a more efficient design of distribution logistics.

Keywords: decision support system, distribution logistics, potential analyses, supply chain management

Procedia PDF Downloads 377
9935 Data-Driven Dynamic Overbooking Model for Tour Operators

Authors: Kannapha Amaruchkul

Abstract:

We formulate a dynamic overbooking model for a tour operator, in which most reservations contain at least two people. The cancellation rate and the timing of the cancellation may depend on the group size. We propose two overbooking policies, namely economic-based and service-based. In the economic-based policy, we minimize the expected oversold and underused costs, whereas, in the service-based policy, we ensure that the probability of an oversold situation does not exceed a pre-specified threshold. To illustrate the applicability of our approach, we use tour package data from 2016-2018 from a tour operator in Thailand to build a data-driven robust optimization model, and we test the proposed overbooking policies on 2019 data. We also compare the data-driven approach to the conventional approach of fitting the data to a probability distribution.
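A minimal sketch of a service-based policy: the largest number of booked groups such that the oversale probability stays below a threshold. The group-size distribution and the size-dependent cancellation probabilities are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

def max_group_bookings(capacity=40, eps=0.05, n_sim=20_000, seed=0):
    """Largest number of booked groups b with P(shows > capacity) <= eps.
    Group sizes 1..3; whole groups cancel independently with a
    size-dependent probability (all numbers hypothetical)."""
    rng = np.random.default_rng(seed)
    p_cancel = {1: 0.10, 2: 0.15, 3: 0.20}
    for b in range(1, 3 * capacity):
        sizes = rng.choice([1, 2, 3], p=(0.2, 0.6, 0.2), size=(n_sim, b))
        cancel_p = np.vectorize(p_cancel.get)(sizes)
        shows = np.where(rng.random((n_sim, b)) < cancel_p, 0, sizes).sum(axis=1)
        if np.mean(shows > capacity) > eps:
            return b - 1
    return 3 * capacity - 1

print(max_group_bookings())  # booking limit in groups, not seats
```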

Keywords: applied stochastic model, data-driven robust optimization, overbooking, revenue management, tour operator

Procedia PDF Downloads 106
9934 On q-Non-extensive Statistics with Non-Tsallisian Entropy

Authors: Petr Jizba, Jan Korbel

Abstract:

We combine the axiomatics of Rényi with the q-deformed version of the Khinchin axioms to obtain a measure of information (i.e., entropy) that accounts both for systems with embedded self-similarity and for non-extensivity. We show that the entropy thus obtained is uniquely determined as a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding 'high-' and 'low-temperature' asymptotics and reveal a non-trivial structure of the parameter space.
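For context, the two standard ingredients combined here are the Rényi entropy and the Tsallis (Havrda-Charvát) entropy, both of which reduce to the Shannon entropy as $q \to 1$:

$$ S_q^{R}(P)=\frac{1}{1-q}\,\ln\sum_{k}p_k^{\,q}, \qquad S_q^{T}(P)=\frac{1}{q-1}\Bigl(1-\sum_{k}p_k^{\,q}\Bigr). $$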

Keywords: multifractals, Rényi information entropy, THC entropy, MaxEnt, heavy-tailed distributions

Procedia PDF Downloads 410
9933 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although previous studies have made great efforts to develop various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study fragments particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to two codebooks (beginning and ending), built by choosing the center of each group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing distances between their respective probability distributions. Evaluations were carried out on the ICFHR standard dataset of 206 writers, using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
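The abstract does not name the distance measure; a minimal sketch using occurrence histograms over codebook patterns and the common chi-square distance (an assumption):

```python
import numpy as np

def codebook_histogram(fragment_labels, codebook_size):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    counts = np.bincount(fragment_labels, minlength=codebook_size).astype(float)
    return counts / counts.sum()

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two occurrence distributions."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Toy usage: fragment-to-codeword assignments of two writing samples
h1 = codebook_histogram(np.array([0, 1, 1, 2, 3, 1]), codebook_size=4)
h2 = codebook_histogram(np.array([0, 0, 2, 2, 3, 3]), codebook_size=4)
print(chi2_distance(h1, h2))  # smaller distance = more likely same writer
```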

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 484
9932 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for uncertainty evaluation of steel sample content using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to identify a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes statistical properties of the considered states, such as the probability distribution function (PDF), given an initial state and a target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; uncertainty budgets are established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) contents, respectively. A comparative study between the conventional procedure and the proposed method is given. Such approaches can be applied to construct an accurate computing procedure for uncertainty measurement.
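A minimal sketch of the MCMC ingredient: a random-walk Metropolis sampler applied to the posterior of a calibration slope, with hypothetical intensities and a Gaussian likelihood/prior standing in for the XRF model:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20_000, step=0.05, seed=0):
    """Random-walk Metropolis sampler returning the chain of parameter draws."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Hypothetical calibration data: measured intensity vs. known content
intensity = np.array([1.1, 2.0, 2.9, 4.2])
content = np.array([1.0, 2.0, 3.0, 4.0])
log_post = lambda a: (-0.5 * np.sum((intensity - a * content) ** 2) / 0.1**2
                      - 0.5 * a**2 / 10.0)  # Gaussian likelihood + weak prior
samples = metropolis(log_post, x0=1.0)
print(samples.mean(), samples.std())  # posterior mean and uncertainty of the slope
```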

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 259
9931 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders

Authors: Hao-Hsi Tseng, Hsin-Yun Lee

Abstract:

The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its handling of same-score, same-rank issues. This study applies the four criteria of Arrow's impossibility theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While commonly used to mitigate problems caused by extreme scores, ranking methods hide significant defects that adversely affect selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores plus a normal cumulative distribution function conversion to calculate the evaluation of vendor preference. This allows free score evaluations, which reduces the influence of dictatorial behavior and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms currently used methods under the impossibility theorem criteria.
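A minimal sketch of the standardized-score-plus-normal-CDF conversion described above (the evaluator scores are made up):

```python
import numpy as np
from scipy.stats import norm

def normalized_scores(raw):
    """Standardize each evaluator's raw scores, then map them through the
    standard normal CDF so every score lies in (0, 1)."""
    z = (raw - raw.mean(axis=1, keepdims=True)) / raw.std(axis=1, ddof=1, keepdims=True)
    return norm.cdf(z)

raw = np.array([[78., 82., 90.],   # evaluator 1's scores for 3 tenders
                [60., 88., 70.]])  # evaluator 2 (an extreme scorer)
print(normalized_scores(raw).mean(axis=0))  # aggregate preference per tender
```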

Keywords: Arrow’s impossibility theorem, cumulative normal distribution function, most advantageous tender, scoring method

Procedia PDF Downloads 439
9930 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we propose an approach for detecting the behavior of the viewers of a TV program in a non-controlled environment. The proposed experiment is based on the use of three types of connected objects (smartphone, smartwatch, and a connected remote control). 23 participants were observed during three phases: before, during and after watching a TV program. Their behaviors were detected using an approach based on Dempster-Shafer theory (DST) in two phases: the first approximates the mass functions dynamically using an approach based on the correlation coefficient, and the second calculates the approximate mass functions. To approximate the mass functions, two approaches were tested. The first divides each feature's data space into cells, each with a specific probability distribution over the behaviors, computed statistically (estimated by the empirical distribution). The second predicts the TV-viewing behaviors with classifier algorithms and adds uncertainty to the prediction based on the uncertainty of the model. Results showed that combining the fusion rule with classifier-based initial approximate mass functions led to overall success rates of 96%, 95% and 96% for the first, second and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with everyday connected objects, taking into account the various uncertainties that can be generated.
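For illustration, Dempster's rule of combination fusing two hypothetical sensor mass functions over viewing states; a minimal sketch:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozenset
    focal elements; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

watching, idle = frozenset({'watching'}), frozenset({'idle'})
either = watching | idle
m_watch = {watching: 0.6, either: 0.4}              # evidence from the smartwatch
m_phone = {watching: 0.5, idle: 0.2, either: 0.3}   # evidence from the smartphone
print(dempster_combine(m_watch, m_phone))
```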

Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment

Procedia PDF Downloads 197
9929 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any error configuration with a predetermined probability. This is accomplished through perfect nonlinear and almost perfect nonlinear functions used to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition: for the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field, and in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
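A minimal sketch of the second robust construction, cubing the information part in a small finite field; GF(2^4) and its reduction polynomial are arbitrary choices here, not the paper's parameters:

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiplication in GF(2^4), reduction polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= poly
        b >>= 1
    return r

def gf16_cube(x):
    return gf16_mul(x, gf16_mul(x, x))

def encode(info):
    """Codeword = (information part, redundancy = cube of the information part)."""
    return (info, gf16_cube(info))

def check(word):
    info, red = word
    return gf16_cube(info) == red

cw = encode(0b1011)
assert check(cw)
corrupted = (cw[0] ^ 0b0001, cw[1])  # additive error on the information part
print(check(corrupted))  # False: this error is detected
# Note: x -> x^3 is not injective in GF(2^4) (gcd(3, 15) = 3), so a small
# fraction of errors is masked; robust codes bound this masking probability.
```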

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 463
9928 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets

Authors: Yih-Wenn Laih

Abstract:

In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management: understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. His policies, known as 'Abenomics', encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of the stock markets of Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by dynamic programming. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data on six stock indexes from the Taiwan Economic Journal, sampled at a daily frequency and covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We thus explore the effects of 'Abenomics' on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients, with matched markets aligned by a hybrid method consisting of GARCH, copula and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, that the strengths of China and Taiwan exceed that of Singapore, and that the Hong Kong market has the weakest co-movement relation with Japan.
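A minimal sketch of the first stage, fitting a skewed-t GARCH(1,1) with the arch package and computing rank correlations on the standardized residuals; the SJC copula and sequence-alignment stages are omitted, and the return-series names are placeholders:

```python
import numpy as np
from arch import arch_model
from scipy.stats import kendalltau, spearmanr

def standardized_residuals(returns):
    """Fit GARCH(1,1) with Hansen's skewed-t innovations; return the
    standardized residuals that feed the dependence analysis."""
    res = arch_model(returns, vol='Garch', p=1, q=1, dist='skewt').fit(disp='off')
    return res.resid / res.conditional_volatility

# returns_jp, returns_kr: daily (percentage) return series, e.g. pandas Series
# z_jp, z_kr = standardized_residuals(returns_jp), standardized_residuals(returns_kr)
# print(kendalltau(z_jp, z_kr)[0], spearmanr(z_jp, z_kr)[0])
```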

Keywords: co-movement, depreciation of Yen, rank correlation, stock market

Procedia PDF Downloads 211
9927 Design of Distribution Network for Gas Cylinders in Jordan

Authors: Hazem J. Smadi

Abstract:

Performance of a supply chain is directly related to its distribution network, which entails the location of stored materials or products and how products are delivered to the end customer through different stages in the supply chain. This study analyses the current distribution network used for delivering gas cylinders to end customers in Jordan. An evaluation of the current distribution network has been conducted across customer service components. A modification of the current distribution network, in terms of central warehousing in each city of the country, improves the response time and customer experience.

Keywords: distribution network, gas cylinder, Jordan, supply chain

Procedia PDF Downloads 434
9926 The Falling Point of Lubricant

Authors: Arafat Husain

Abstract:

Lubricants are among the most heavily used resources in today's world; many major economies depend on lubricants to function. To ensure that lubricants are not adulterated, we need efficient methods to detect which fluid has been added to them. To observe such malpractice, we propose the following method. We take an elastic ball and throw it at a probability circle submerged in the lubricant with a fixed force, and observe the pitching distance and the point of fall. We then take the ratio of the falling distance to the pitching distance: if the measured ratio is greater than one, the fluid is less viscous, and if it is less than one, the lubricant is more viscous. We first determine the falling point of the pure lubricant at the fixed force, since every pure lubricant has a fixed falling point. We then adulterate the lubricant and note the falling point: if the falling point is less than the standard value, the adulterant is a solid, and if the adulterant is a liquid, the falling point will be greater than the standard value. Hence, comparison with the standard falling point indicates the quality of the lubricant.

Keywords: falling point of lubricant, falling point ratios, probability circle, octane number

Procedia PDF Downloads 465
9925 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the non-uniformity and inhomogeneity of soil and site properties are not accounted for; hence probability calculus and statistical analysis are not directly applied in foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters; however, this factor of safety is based on subjective judgement rather than objective facts and is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out using the Monte Carlo simulation method. The simulated model was compared with the traditional discrete model, and the bearing capacity was found to be higher for the simulated model, which was verified by a sensitivity analysis. As the number of simulations was increased, there was a significant percentage increase in the bearing capacity compared with the discrete value. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field than with the simulation-derived factor of safety of 1.5 (probability 0.15866). This means the traditional factor of safety gives a bearing capacity that is less likely to be available in the field, showing the subjective nature of the factor of safety; hence a probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.
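A minimal sketch of such a simulation for a footing on clay, assuming undrained conditions (q_ult = Nc·cu with Nc = 5.14) and a normally distributed cohesion; all parameter values are illustrative, so the resulting probabilities differ from the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_cu, cov_cu = 50.0, 0.2            # undrained cohesion (kPa) and its coefficient of variation
cu = rng.normal(mean_cu, cov_cu * mean_cu, 100_000)
q_ult = 5.14 * np.clip(cu, 0.0, None)  # simulated ultimate bearing capacity (kPa)

q_det = 5.14 * mean_cu                 # deterministic capacity
for fs in (3.0, 1.5):
    q_allow = q_det / fs
    print(f"FS={fs}: P(q_ult < q_allow) = {np.mean(q_ult < q_allow):.5f}")
```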

Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation

Procedia PDF Downloads 155
9924 A Novel Probabilistic Strategy for Modeling Photovoltaic Based Distributed Generators

Authors: Engy A. Mohamed, Y. G. Hegazy

Abstract:

This paper presents a novel algorithm for modeling photovoltaic-based distributed generators for the purpose of optimal planning of distribution networks. The proposed algorithm utilizes the sequential Monte Carlo method in order to accurately capture the stochastic nature of photovoltaic-based distributed generators. The algorithm is implemented in the MATLAB environment, and the results obtained are presented and discussed.
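A minimal sketch of sequential (chronological) hourly sampling of PV output; the beta-distributed irradiance and the clear-sky profile are common planning assumptions standing in for the paper's MATLAB model:

```python
import numpy as np

def pv_output_samples(hours=24, n_sim=10_000, p_rated=100.0, seed=0):
    """Hourly PV output samples: normalized irradiance drawn from a beta
    distribution, scaled by a clear-sky hourly profile (daylight 6h-18h)."""
    rng = np.random.default_rng(seed)
    profile = np.clip(np.sin(np.pi * (np.arange(hours) - 6) / 12), 0, None)
    s = rng.beta(2.0, 2.5, size=(n_sim, hours))  # normalized irradiance
    return p_rated * s * profile                 # output in kW

samples = pv_output_samples()
print(samples.mean(axis=0)[12])  # expected output at noon
```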

Keywords: cumulative distribution function, distributed generation, Monte Carlo

Procedia PDF Downloads 554
9923 Assessing Functional Structure in European Marine Ecosystems Using a Vector-Autoregressive Spatio-Temporal Model

Authors: Katyana A. Vert-Pre, James T. Thorson, Thomas Trancart, Eric Feunteun

Abstract:

In marine ecosystems, the spatial and temporal structure of species is an important component of the ecosystem's response to anthropogenic and environmental factors. Although spatial distribution patterns and temporal series of fish abundance have been studied in the past, little research has addressed the joint dynamic spatio-temporal functional patterns in marine ecosystems and their use in multispecies management and conservation. Each species represents a function within the ecosystem, and the distribution of these species might not be random: a heterogeneous functional distribution will lead to an ecosystem that is more resilient to external factors. Applying a Vector-Autoregressive Spatio-Temporal (VAST) model for count data, we estimate the spatio-temporal distribution, shift in time, and abundance of 140 species of the Eastern English Channel, Bay of Biscay and Mediterranean Sea. From the model outputs, we determined spatio-temporal clusters, calculating p-values for hierarchical clustering via multiscale bootstrap resampling. Then, we designed a functional map given the defined clusters. We found that the species distribution within the ecosystem was not random: species evolved in space and time in clusters. Moreover, these clusters remained similar over time because species of the same cluster often shifted in sync, keeping the overall structure of the ecosystem similar over time. Knowing the co-existing species within these clusters could help with predicting the distribution and abundance of data-poor species. Further analysis is being performed to assess the ecological functions represented in each cluster.
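A minimal sketch of the clustering step on the model's predicted density maps (random placeholders here); the multiscale-bootstrap p-values of the paper correspond to what R's pvclust provides and are omitted:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# density_maps: (n_species, n_grid) predicted densities from the VAST model
# (hypothetical array standing in for the real model output)
density_maps = np.random.default_rng(0).normal(size=(140, 500))

Z = linkage(density_maps, method='ward')
clusters = fcluster(Z, t=8, criterion='maxclust')  # cut into 8 clusters
print(np.bincount(clusters)[1:])                   # species per cluster
```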

Keywords: cluster distribution shift, European marine ecosystems, functional distribution, spatio-temporal model

Procedia PDF Downloads 166
9922 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction for composite material components of offshore structures under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and impact loading, since the likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects; the probability of occurrence of such a scenario therefore involves large uncertainties in the system. A probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established accordingly, the lamina being safe when the stresses satisfy g(x) > 0. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. Chains of failure events due to different failure modes are considered to estimate the consequences of a failure scenario, and the frequencies of occurrence of specific impact hazards yield the expected risk in terms of economic loss.
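A minimal sketch of the Gaussian process response surface step: fit a GP to a handful of limit-state evaluations g(x) (all numbers hypothetical), then estimate the probability of failure P(g < 0) by Monte Carlo on the cheap surrogate:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical limit-state evaluations from a few FE runs:
# x = [impact velocity (m/s), ply strength (MPa)]
X = np.array([[2., 40.], [3., 40.], [4., 40.], [2., 60.], [3., 60.], [4., 60.]])
g = np.array([1.2, 0.5, -0.4, 1.8, 1.1, 0.3])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 10.0]),
                              normalize_y=True).fit(X, g)

# Monte Carlo on the surrogate: failure when g(x) < 0
rng = np.random.default_rng(0)
xs = np.column_stack([rng.normal(3.0, 0.5, 100_000),
                      rng.normal(50.0, 5.0, 100_000)])
print("probability of failure:", np.mean(gp.predict(xs) < 0.0))
```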

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 254
9921 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can be done in one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the 'curse of dimensionality' intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method span a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types beyond credit risk.
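The COS inversion at the heart of the method can be sketched on a toy example: recovering a density from its characteristic function (a standard normal here; the truncation interval and number of terms are arbitrary choices):

```python
import numpy as np

def cos_density(phi, x, a, b, N=64):
    """Recover a density from its characteristic function phi via the COS
    method of Fang & Oosterlee on the truncation interval [a, b]."""
    u = np.arange(N) * np.pi / (b - a)
    F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
    F[0] *= 0.5  # the k = 0 term carries weight 1/2
    return F @ np.cos(np.outer(u, np.asarray(x) - a))

# Sanity check with a standard normal: phi(u) = exp(-u^2 / 2)
x = np.linspace(-3, 3, 7)
approx = cos_density(lambda u: np.exp(-0.5 * u**2), x, a=-10.0, b=10.0)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))  # tiny: near machine precision
```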

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 119
9920 Water Leakage Detection System of Pipeline Using Radial Basis Function Neural Network

Authors: A. Ejah Umraeni Salam, M. Tola, M. Selintung, F. Maricar

Abstract:

Clean water is an essential and fundamental human need; therefore, its supply must be assured by maintaining its quality, quantity and pressure. In practice, however, leakage occurs in distribution systems and has become a common issue worldwide, one of its technical causes being leaking pipes. The purpose of this research is to use a Radial Basis Function Neural Network (RBFNN) model to detect the location and the magnitude of pipeline leakage rapidly and efficiently. In this study, the RBFNN is trained and tested on data from the EPANET hydraulic modeling system. The RBFNN is shown to be capable of detecting the location and magnitude of pipeline leakage: in terms of RMSE (root mean square error), the deviation between predictions and actual measurements approaches 0.000049 for the whole pipeline system.
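A minimal sketch of a Gaussian RBF network trained by least squares; the sensor layout, targets and kernel width are placeholders rather than the EPANET-based setup of the paper:

```python
import numpy as np

class RBFNet:
    """Gaussian RBF network: hidden layer of radial units, linear output
    weights fitted by least squares."""
    def __init__(self, centers, width):
        self.c, self.s = centers, width

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.s**2))

    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Toy usage: sensor pressures -> (leak location, leak magnitude)
rng = np.random.default_rng(0)
X = rng.random((200, 5))                          # 5 sensor readings per sample
y = np.column_stack([X[:, :2].sum(1), X[:, 2]])   # hypothetical targets
net = RBFNet(centers=X[:20], width=0.5).fit(X, y)
print("RMSE:", np.sqrt(np.mean((net.predict(X) - y) ** 2)))
```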

Keywords: radial basis function neural network, pipeline leakage, EPANET, RMSE

Procedia PDF Downloads 334
9919 The Current Situation of Ang Thong Province’s Court Doll Distribution

Authors: Phutthiwat Waiyawuththanapoom

Abstract:

This research aims to study the pattern and channels of distribution of Ang Thong's court doll OTOP product and to improve the quality of distribution of the court doll product. The population of this research comprises 50 manufacturers of Ang Thong's court dolls. The data and information were collected using a questionnaire and analyzed with percentage, mean and standard deviation as analysis tools. The distribution of Ang Thong's court dolls can be separated into three channels: direct distribution by the manufacturer, distribution via a middleman, and distribution via the co-operated manufacturing group. In the direct distribution channel, it was found that manufacturers give the highest importance to how they keep their inventory. In the middleman channel, manufacturers give the highest importance to distribution efficiency. In the co-operated manufacturing group channel, manufacturers give the highest importance to public relations.

Keywords: distribution, court doll, Ang Thong province, business and social sciences

Procedia PDF Downloads 281
9918 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population

Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez

Abstract:

The human mitochondrial DNA (mtDNA) is a circular DNA molecule of about 17 kbp found within the mitochondria, which contains a smaller segment of about 1,200 bp known as the control region. Knowledge of variation within populations has been employed in forensic and molecular anthropology studies. This study aimed to investigate the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA control region, HVS1 and HVS2, and to determine the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty maternally unrelated individuals, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. Amplicons were sequenced, and sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS; GenBank accession number NC_012920.1) using the BioEdit software. The results showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. A total of three indel mutations were recorded for HVS1 and seven for HVS2, and transitions predominated among the nucleotide changes observed. Genetic diversity (GD) values for HVS1 and HVS2 were estimated at 84.21% and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to African (L1-L3) and Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes.
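A minimal sketch of the two summary statistics reported here, computed from toy haplotype labels (Nei's unbiased formula is assumed for GD):

```python
import numpy as np

def random_match_probability(haplotypes):
    """RMP = sum of squared haplotype frequencies: the chance that two
    randomly drawn individuals share the same mtDNA haplotype."""
    _, counts = np.unique(haplotypes, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p ** 2)

def genetic_diversity(haplotypes):
    """Nei's genetic diversity: GD = n / (n - 1) * (1 - sum p_i^2)."""
    n = len(haplotypes)
    return n / (n - 1) * (1.0 - random_match_probability(haplotypes))

hv1 = ["H1", "H2", "H1", "H3", "H4", "H2", "H5"]  # toy haplotype labels
print(random_match_probability(hv1), genetic_diversity(hv1))
```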

Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability

Procedia PDF Downloads 89
9917 On a Univalent Function and the Integral Means of Its Derivative

Authors: Shatha S. Alhily

Abstract:

The purpose of this research paper is to determine all possible values of the pth power of the integrable function that make the integral means of the derivative of a univalent function exist and remain finite.
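For reference, the integral means of the derivative of a function $f$ univalent on the unit disk are the standard quantities

$$ M_p(r, f') = \left( \frac{1}{2\pi} \int_0^{2\pi} \bigl| f'(re^{i\theta}) \bigr|^{p} \, d\theta \right)^{1/p}, \qquad 0 < r < 1. $$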

Keywords: derivative, integral means, self conformal maps, univalent function

Procedia PDF Downloads 598
9916 Effects of Folic Acid, Alone or in Combination with Other Nutrients on Homocysteine Level and Cognitive Function in Older People: A Systematic Review

Authors: Jiayan Gou, Kexin He, Xin Zhang, Fei Wang, Liuni Zou

Abstract:

Background: Homocysteine is a high-risk factor for cognitive decline, and folic acid supplementation can lower homocysteine levels; however, current clinical research results on the effects of folic acid on homocysteine levels and cognitive function in older people are inconsistent. Objective: The objective of this study is to systematically evaluate the effects of folic acid, alone or in combination with other nutrients, on homocysteine levels and cognitive function in older adults. Methods: Systematic searches were conducted in five databases, including PubMed, Embase, the Cochrane Library, Web of Science, and CINAHL, from inception to June 1, 2023. Randomized controlled trials investigating the effects of folic acid, alone or in combination with other nutrients, on cognitive function in older people were included. Results: 17 articles were included, six focusing on the effects of folic acid alone and 11 examining folic acid in combination with other nutrients. The studies included 3,100 individuals aged 60 to 83.2 years, with a relatively equal gender distribution (approximately 51.82% male). Conclusion: Folic acid, alone or combined with other nutrients, can effectively lower homocysteine levels and improve cognitive function in patients with mild cognitive impairment. For patients with Alzheimer's disease and dementia, however, the intervention can only reduce the homocysteine level; the improvement in cognitive function is not significant. In healthy older people, high baseline homocysteine levels (>11.3 μmol/L) and good ω-3 fatty acid status (>590 μmol/L) can enhance the improving effect of folic acid on cognitive function. This trial has been registered on PROSPERO as CRD42023433096.

Keywords: B-complex vitamins, cognitive function, folic acid, homocysteine

Procedia PDF Downloads 38
9915 Regression for Doubly Inflated Multivariate Poisson Distributions

Authors: Ishapathik Das, Sumen Sen, N. Rao Chaganty, Pooja Sengupta

Abstract:

Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells, and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown model parameters.
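A minimal sketch of sampling from such a construction: a Gaussian-copula bivariate Poisson with extra point masses on two inflated cells (the mixture formulation and all parameter values are illustrative, not necessarily the paper's parameterization):

```python
import numpy as np
from scipy.stats import norm, poisson

def doubly_inflated_biv_poisson(n, lam1, lam2, rho, p1, p2, cells, seed=0):
    """Gaussian-copula bivariate Poisson with point masses p1, p2 on two cells."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    y = np.column_stack([poisson.ppf(norm.cdf(z[:, 0]), lam1),
                         poisson.ppf(norm.cdf(z[:, 1]), lam2)]).astype(int)
    u = rng.random(n)
    y[u < p1] = cells[0]                      # first inflated cell
    y[(u >= p1) & (u < p1 + p2)] = cells[1]   # second inflated cell
    return y

y = doubly_inflated_biv_poisson(10_000, lam1=2.0, lam2=3.0, rho=0.4,
                                p1=0.10, p2=0.05, cells=[(0, 0), (1, 1)])
print(np.mean((y == (0, 0)).all(axis=1)))  # inflated frequency of cell (0, 0)
```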

Keywords: copula, Gaussian copula, multivariate distributions, inflated distributions

Procedia PDF Downloads 132
9914 From Responses of Macroinvertebrate Metrics to the Definition of Reference Thresholds

Authors: Hounyèmè Romuald, Mama Daouda, Argillier Christine

Abstract:

The present study focuses on the use of benthic macrofauna to define the reference state of an anthropized lagoon (Nokoué, Benin) from the responses of relevant metrics to pressure proxies. The approach used is a combination of a joint species distribution model and Bayesian networks. The joint species distribution model was used to select the relevant metrics and generate posterior probabilities, which were then converted into posterior response probabilities for each of the quality classes (pressure levels); these constitute the conditional probability tables of the probabilistic graph representing the causal relationships between metrics and pressure proxies. For the definition of the reference thresholds, the predicted responses at low pressure levels were read off probability density diagrams. Observations collected during high- and low-water periods spanning three consecutive years (2004-2006), sampling 33 macroinvertebrate taxa present at all seasons and sampling points, together with measurements of 14 environmental parameters, were used as application data. The study demonstrated reliable inferences, the selection of seven relevant metrics, and the definition of quality thresholds for each environmental parameter. The relevance of the metrics and of the reference thresholds for ecological assessment, despite the small sample size, suggests the potential for wider applicability of the approach in aquatic ecosystem monitoring and assessment programs in developing countries, which are generally characterized by a lack of monitoring data.
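A minimal sketch of the threshold-reading step, assuming posterior predictive draws of one metric under the low-pressure (reference) class and a percentile cutoff; both the draws and the percentile are placeholders:

```python
import numpy as np

# Hypothetical posterior predictive draws of one metric at low pressure
rng = np.random.default_rng(0)
draws = rng.normal(loc=12.0, scale=2.0, size=5000)

threshold = np.percentile(draws, 5)  # read the reference threshold off the density
print(f"reference threshold: {threshold:.2f}")
```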

Keywords: pressure proxies, Bayesian inference, bioindicators, acadjas, functional traits

Procedia PDF Downloads 55
9913 Apricot Insurance Portfolio Risk

Authors: Kasirga Yildirak, Ismail Gur

Abstract:

We propose a model to measure the hail risk of an agricultural insurance portfolio. Hail is a major catastrophic event that causes large losses to insurers, and it is very hard to predict due to its peculiar atmospheric characteristics. We make use of parcel-based claims data on apricot damage collected by the Turkish Agricultural Insurance Pool (TARSIM). As our ultimate aim is to compute the loadings assigned to specific parcels, we build a portfolio risk model that makes use of PD and the severity of the exposures. PD is computed by spherical-linear and circular-linear regression models, as the data carry coordinate information and seasonality. Severity is mapped into integer brackets so that probability generating functions can be employed. Individual regressions are run on each cluster, with clusters estimated on different criteria. The loss distribution is constructed by the Panjer recursion technique. We also show that the one-risk, one-crop model can easily be extended to the multi-risk, multi-crop model by assuming conditional independence.
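Since severities sit on integer brackets, the aggregate loss distribution follows from Panjer's recursion; a minimal sketch assuming a Poisson claim count (the frequency family and all numbers here are illustrative):

```python
import numpy as np

def panjer_compound_poisson(lam, severity, s_max):
    """Panjer recursion for the compound Poisson loss S = X1 + ... + XN,
    N ~ Poisson(lam), severity[j] = P(X = j) on integer brackets j = 0..m."""
    f = np.zeros(s_max + 1)
    f[:len(severity)] = severity
    g = np.zeros(s_max + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))
    for s in range(1, s_max + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g  # g[s] = P(S = s)

g = panjer_compound_poisson(lam=2.0, severity=[0.0, 0.5, 0.3, 0.2], s_max=30)
print(g.sum(), (np.arange(31) * g).sum())  # ~1 and E[S] = lam * E[X] = 3.4
```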

Keywords: hail insurance, spherical regression, circular regression, spherical clustering

Procedia PDF Downloads 228
9912 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, because of uncertain factors such as the future price of EVs, forecasting their future market share is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent purchases the product with the highest total utility: given the attribute values of an ICEV and a competing EV, the two total utilities for a respondent are calculated, revealing his or her choice, and once the choices of all respondents are known, an estimate of market share is obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts a learning curve assumption to predict the future price of EVs; based on the learning curve method and past EV price data, a regression model is established and the probability distribution of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable by the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future EV prices together with the other forecasted attribute values of the EV and an ICEV, one thousand market shares are obtained. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and action plans.
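A minimal sketch of phase (3), propagating an uncertain 2030 EV price through a first-choice conjoint rule; every number (part-worths, prices, price distribution) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_resp = 500

# Hypothetical part-worths per respondent: utility = base + beta_price * price
beta_price = rng.normal(-0.05, 0.01, n_resp)  # utility per 1,000 USD of price
base_ev = rng.normal(1.0, 1.5, n_resp)        # EV-vs-ICEV preference offset
price_icev = 30.0                             # fixed ICEV price (k USD)

# Learning curve outcome: uncertain 2030 EV price -> 1,000 scenarios
price_ev_2030 = rng.lognormal(mean=np.log(28.0), sigma=0.15, size=1000)

shares = np.array([np.mean(base_ev + beta_price * p > beta_price * price_icev)
                   for p in price_ev_2030])   # first-choice rule per scenario
print(shares.mean(), np.percentile(shares, [5, 95]))  # a distribution, not a point
```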

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 42
9911 Performance Analysis of a Hybrid AF-DF RF/FSO System under Gamma-Gamma Atmospheric Turbulence Channel Using MPPM Modulation

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a hybrid amplify-and-forward / decode-and-forward (AF-DF) hybrid radio frequency/free space optical (RF/FSO) communication system that adopts M-ary pulse position modulation (MPPM) is analyzed. Both exact and approximate symbol-error rates (SERs) are derived. The random variations of the received optical irradiance produced by atmospheric turbulence are modeled by the gamma-gamma (GG) statistical distribution, and a closed-form expression for the probability density function (PDF) of the overall system is obtained. Thanks to the hybrid AF-DF RF/FSO configuration and MPPM, the effects of atmospheric turbulence are mitigated; hence the capability of combating atmospheric turbulence and the transmitted signal quality are improved.

Keywords: free space optical, gamma-gamma channel, radio frequency, decode and forward, pointing error, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 255
9910 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when an expert and his knowledge are involved in estimating the MAGDM parameters. We consider an emergency decision-making model where expert assessments on humanitarian aid from distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers and the data structure is described within the data body theory. Based on focal probability construction and the experts' evaluations, an objective function - a distribution centers' selection ranking index - is constructed. Our approach to solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings - allocations of the HADCs to the centers - that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
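A minimal sketch of the second-phase filtering: keeping the partitionings whose two objective values are non-dominated (the candidate values are placeholders):

```python
def pareto_front(solutions):
    """Keep non-dominated solutions; each entry is (cost1, cost2, partitioning),
    with both objectives to be minimized."""
    return [s for s in solutions
            if not any(t[0] <= s[0] and t[1] <= s[1] and t[:2] != s[:2]
                       for t in solutions)]

candidates = [(10, 7, "P1"), (8, 9, "P2"), (12, 6, "P3"), (9, 8, "P4"), (11, 9, "P5")]
print(pareto_front(candidates))  # P5 is dominated and drops out
```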

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 63
9909 Performance Analysis of M-Ary Pulse Position Modulation in Multihop Multiple Input Multiple Output-Free Space Optical System over Uncorrelated Gamma-Gamma Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a decode and forward (DF) multihop free space optical (FSO) scheme deploying a multiple input multiple output (MIMO) configuration under the gamma-gamma (GG) statistical distribution, adopting M-ary pulse position modulation (MPPM) coding, is investigated. We derive both exact and approximate values of the symbol-error rate (SER). A closed-form formula for the probability density function (PDF) is expressed for the designed system. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is mitigated; hence the transmitted signal quality is improved.

Keywords: free space optical, multiple input multiple output, M-ary pulse position modulation, multihop, decode and forward, symbol error rate, gamma-gamma channel

Procedia PDF Downloads 174