Search results for: estimations of probability distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1950

1860 Monotonicity of the Jensen Functional for f-Divergences via the Zipf-Mandelbrot Law

Authors: Neda Lovričević, Đilda Pečarić, Josip Pečarić

Abstract:

The Jensen functional in its discrete form is brought into relation with the Csiszar divergence functional, this time via its monotonicity property. This approach generalizes previously obtained results that made use of interpolating Jensen-type inequalities. The monotonicity property is thus integrated with the Zipf-Mandelbrot law and applied to f-divergences for probability distributions that originate from the Csiszar divergence functional: Kullback-Leibler divergence, Hellinger distance, Bhattacharyya distance, chi-square divergence, and total variation distance. The Zipf-Mandelbrot and Zipf laws are widely used across scientific and interdisciplinary fields; here the focus is on the mathematical-inequalities aspect.
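
As an illustrative sketch (not taken from the paper), the Zipf-Mandelbrot law and one of the listed f-divergences can be computed directly; the function names and parameter values here are hypothetical:

```python
import math

def zipf_mandelbrot(N, q, s):
    """Zipf-Mandelbrot pmf over ranks k = 1..N: p(k) proportional to 1/(k+q)^s."""
    weights = [1.0 / (k + q) ** s for k in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def kl_divergence(p, r):
    """Kullback-Leibler divergence D(p || r), the f-divergence with f(t) = t*log(t)."""
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r))

p = zipf_mandelbrot(N=100, q=2.0, s=1.2)
r = zipf_mandelbrot(N=100, q=0.5, s=1.0)
```

By Jensen's inequality, D(p || r) >= 0 with equality only when p = r, which is the kind of positivity and monotonicity structure the paper exploits.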

Keywords: Jensen functional, monotonicity, Csiszar divergence functional, f-divergences, Zipf-Mandelbrot law

Procedia PDF Downloads 142
1859 Coverage Probability Analysis of WiMAX Network under Additive White Gaussian Noise and Predicted Empirical Path Loss Model

Authors: Chaudhuri Manoj Kumar Swain, Susmita Das

Abstract:

This paper presents a detailed procedure for predicting a path loss (PL) model and its application in estimating the coverage probability in a WiMAX network. A hybrid approach is followed in predicting an empirical PL model of a 2.65 GHz WiMAX network deployed in a suburban environment. Data collection, statistical analysis, and regression analysis are the phases of operation incorporated in this approach, and the importance of each phase is discussed. The procedure for collecting data such as the received signal strength indicator (RSSI) through an experimental set-up is demonstrated. From the collected data set, empirical PL and RSSI models are predicted with regression techniques. Furthermore, with the aid of the predicted PL model, essential parameters such as the PL exponent as well as the coverage probability of the network are evaluated. This work may significantly assist in the deployment and optimisation of cellular networks.
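
The regression phase can be sketched with the standard log-distance path loss model, PL(d) = PL(d0) + 10·n·log10(d/d0), fitted by least squares; the measurement values below are synthetic stand-ins, not the paper's data:

```python
import math
import random

def fit_path_loss(distances_m, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL(d0) + 10*n*log10(d/d0);
    returns (PL at reference distance d0 in dB, path loss exponent n)."""
    x = [10.0 * math.log10(d / d0) for d in distances_m]
    mx, my = sum(x) / len(x), sum(pl_db) / len(pl_db)
    n = sum((xi - mx) * (yi - my) for xi, yi in zip(x, pl_db)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - n * mx, n

# Synthetic drive-test data: true PL(d0) = 40 dB, exponent 3.2, 1 dB shadowing
random.seed(0)
ds = [10, 50, 100, 200, 400, 800]
pl = [40 + 32 * math.log10(d) + random.gauss(0, 1) for d in ds]
pl_d0, n_exp = fit_path_loss(ds, pl)
```

The recovered exponent n then feeds the coverage-probability calculation for a given receiver sensitivity.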

Keywords: WiMAX, RSSI, path loss, coverage probability, regression analysis

Procedia PDF Downloads 177
1858 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions

Authors: Timothy Kayode Samson, Adedoyin Isola Lawal

Abstract:

The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not received much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated under three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that Binance coin reported higher mean returns than the other digital currencies, while the skewness indicates that Binance coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error distributions. For Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t and skewed generalized error distributions over the skewed normal distribution.
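
The sGARCH variant mentioned above can be sketched in its simplest (1,1) form; this is a generic illustration rather than the authors' estimation code, and the parameter values are hypothetical:

```python
def garch11_variance(returns, omega, alpha, beta):
    """sGARCH(1,1) conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    Starts from the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

# A large return shock raises the next period's conditional variance
rets = [0.01, -0.02, 0.15, 0.01, 0.0]
vars_ = garch11_variance(rets, omega=1e-5, alpha=0.1, beta=0.85)
```

In the EGARCH, GJR-GARCH, and APARCH variants the recursion is modified so that negative shocks can raise variance more than positive ones, and the skewed innovation densities enter through the likelihood used for estimation.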

Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH

Procedia PDF Downloads 118
1857 Using Indigenous Games to Demystify Probability Theorem in Ghanaian Classrooms: Mathematical Analysis of Ampe

Authors: Peter Akayuure, Michael Johnson Nabie

Abstract:

Similar to many formerly colonized nations, one indelible mark left after Ghana's independence in 1957 is that many contexts used to teach statistics and probability concepts are alien and do not resonate with the social domain of the indigenous Ghanaian child. This has seriously limited the understanding, discovery, and application of mathematics for national development. In light of the recent curriculum demand to make the Ghanaian child mathematically literate, this qualitative study involved video recordings and mathematical analysis of play sessions of an indigenous girls' game called Ampe, with the aim of demystifying the concepts of the probability theorem, which is applied in mathematics-related fields of study. The mathematical analysis shows that the game of Ampe, which is widely played by school girls in Ghana, is suitable for learning concepts of the probability theorem. It was also revealed that, as a girls' game, the use of Ampe provides good lessons for educators, textbook writers, and teachers to rethink the selection of mathematics tasks and learning contexts that are sensitive to gender. As we undertake to transform teacher education and student learning, the use of indigenous games should be critically revisited.

Keywords: Ampe, mathematical analysis, probability theorem, Ghanaian girl game

Procedia PDF Downloads 370
1856 Preliminary Results on a Maximum Mean Discrepancy Approach for Seizure Detection

Authors: Boumediene Hamzi, Turky N. AlOtaiby, Saleh AlShebeili, Arwa AlAnqary

Abstract:

We introduce a data-driven method for seizure detection drawing on recent progress in machine learning. The method is based on embedding probability measures in a high- (or infinite-) dimensional reproducing kernel Hilbert space (RKHS), where the Maximum Mean Discrepancy (MMD) is computed. The MMD is a metric between probability measures, computed as the distance between their means after being embedded in an RKHS. Working in an RKHS provides a convenient, general functional-analytical framework for the theoretical understanding of data. We apply this approach to the problem of seizure detection.
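
A minimal sketch of the empirical quantity involved, for scalar samples and a Gaussian (RBF) kernel; the bandwidth and the sample values are arbitrary illustrations:

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel inducing the RKHS embedding."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=0.5):
    """Biased empirical squared MMD between samples xs and ys:
    mean k(x,x') + mean k(y,y') - 2 * mean k(x,y)."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

baseline = [0.0, 0.1, -0.2, 0.05, -0.1]   # e.g. non-seizure EEG features
shifted  = [1.0, 1.2, 0.8, 1.1, 0.9]      # e.g. seizure-segment features
```

In a detector, a large MMD between a sliding window and a non-seizure reference sample would flag a candidate seizure segment.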

Keywords: kernel methods, maximum mean discrepancy, seizure detection, machine learning

Procedia PDF Downloads 238
1855 Finite Sample Inferences for Weak Instrument Models

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved; in particular, they can be quite biased in finite samples. Finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and other methods commonly used in finite samples, such as the bootstrap.

Keywords: bootstrap, Instrumental Variable, Edgeworth expansions, Saddlepoint expansions

Procedia PDF Downloads 310
1854 Probability of Passing the Brake Test at Ministry of Transport Facilities of Each City at Alicante Region from Spain

Authors: Carolina Senabre Blanes, Sergio Valero Verdú, Emilio Velasco Sánchez

Abstract:

The objective of this research is to obtain a percentage of success for the Ministry of Transport (MOT) facilities of each city in the Alicante region of the Comunidad Valenciana, Spain, by comparing results obtained using different brake testers. The types of brake tester currently in use in each city have been studied. Because different types are used in different cities, mechanical engineering staff from the Miguel Hernández University studied the differences between them and obtained measurements from each type. A probability of success is assigned to each MOT station for a test attempted with the same car, with the same characteristics and the same wheels. In other words, the parameters of the vehicle were controlled to be identical in all tests; any variability in the brake measurements is therefore due to the type of tester used at the MOT station. By comparing test results, a probability of passing the brake exam is given for each city.

Keywords: brake tester, MOT station, probability to pass the exam, brake tester characteristics

Procedia PDF Downloads 293
1853 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is more suitable, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressively failure-censored hybrid data with random removals. The life data of the units under test are assumed to follow the exponential distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 318
1852 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take the effect of uncertainties into account when developing a surrogate model. The probability of damage existence (PDE) is calculated from the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error, and it is therefore chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures, with less computational effort compared to the direct finite element model.
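
One simple reading of the probability-of-damage-existence idea, sketched under the assumption that Monte Carlo samples of a damage index are available for both states (the samples below are made up, and the paper's actual PDE definition may differ):

```python
import random

def probability_of_damage(undamaged, damaged, draws=20000, seed=0):
    """Monte Carlo estimate of P(damage index in the damaged state exceeds
    the index in the undamaged state), a simple proxy for the PDE."""
    rng = random.Random(seed)
    hits = sum(rng.choice(damaged) > rng.choice(undamaged) for _ in range(draws))
    return hits / draws

# Hypothetical damage-index samples, e.g. propagated through a kriging surrogate
rng1, rng2 = random.Random(1), random.Random(2)
undamaged = [rng1.gauss(0.0, 1.0) for _ in range(200)]
damaged = [2.0 + rng2.gauss(0.0, 1.0) for _ in range(200)]
```

Well-separated state distributions give a PDE near 1; heavily overlapping ones give a PDE near 0.5, signalling that the measurement uncertainty masks the damage.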

Keywords: probability-based damage detection (PBDD), Kriging, surrogate modeling, uncertainty quantification, artificial intelligence, enhanced ideal gas molecular movement (EIGMM)

Procedia PDF Downloads 239
1851 The Beta-Fisher Snedecor Distribution with Applications to Cancer Remission Data

Authors: K. A. Adepoju, O. I. Shittu, A. U. Chukwu

Abstract:

In this paper, a new four-parameter generalized version of the Fisher-Snedecor distribution, called the Beta-F distribution, is introduced. A comprehensive account of the statistical properties of the new distribution is given. Formal expressions for the cumulative distribution function, moments, moment generating function, and maximum likelihood estimators, as well as the Fisher information, are obtained. The flexibility of this distribution, as well as its robustness, is demonstrated using cancer remission time data. The new distribution can be used in most applications where the assumptions underlying the use of other lifetime distributions are violated.
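
The Beta-F distribution belongs to the beta-generated family, whose density is g(x) = f(x) F(x)^(a-1) (1-F(x))^(b-1) / B(a,b) for a baseline pdf f and cdf F. A sketch of this construction follows; since the Fisher-Snedecor cdf requires special functions, a unit-exponential baseline is used purely for illustration:

```python
import math

def beta_generated_pdf(x, base_pdf, base_cdf, a, b):
    """Density of the beta-generated family:
    g(x) = f(x) * F(x)**(a-1) * (1-F(x))**(b-1) / B(a, b).
    The Beta-F distribution uses the Fisher-Snedecor pdf/cdf as the baseline."""
    beta_ab = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    F = base_cdf(x)
    return base_pdf(x) * F ** (a - 1) * (1.0 - F) ** (b - 1) / beta_ab

# Illustrative baseline: unit exponential, whose cdf has a closed form
exp_pdf = lambda x: math.exp(-x)
exp_cdf = lambda x: 1.0 - math.exp(-x)
```

For a = b = 1 the family collapses to the baseline distribution, which is a quick sanity check on any implementation.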

Keywords: fisher-snedecor distribution, beta-f distribution, outlier, maximum likelihood method

Procedia PDF Downloads 347
1850 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small-to-moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that it provides precise estimates of the regression parameters. Note that this approach can be applied to a dataset whenever goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring method as an iteration scheme to obtain the maximum likelihood estimates (MLEs) of the regression parameters.

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 32
1849 Effects Induced by Dispersion-Promoting Cylinder on Fiber-Concentration Distributions in Pulp Suspension Flows

Authors: M. Sumida, T. Fujimoto

Abstract:

Fiber-concentration distributions in pulp liquid flows behind dispersion promoters were experimentally investigated to explore the feasibility of improving operational performance of hydraulic headboxes in papermaking machines. The proposed research was performed in the form of a basic test conducted on a screen-type model comprising a circular cylinder inserted within a channel. Tests were performed using pulp liquid possessing fiber concentrations ranging from 0.3-1.0 wt% under different flow velocities of 0.016-0.74 m/s. Fiber-concentration distributions were measured using the transmitted light attenuation method. Obtained test results were analyzed, and the influence of the flow velocities on wake characteristics behind the cylinder has been investigated with reference to findings of our preceding studies concerning pulp liquid flows in straight channels. Changes in fiber-concentration distribution along the flow direction were observed to be substantially large in the section from the cylinder to four times its diameter downstream of its centerline. Findings of this study provide useful information concerning the development of hydraulic headboxes.

Keywords: dispersion promoter, fiber-concentration distribution, hydraulic headbox, pulp liquid flow

Procedia PDF Downloads 346
1848 Investigation on Fischer-Tropsch Synthesis over Cobalt-Gadolinium Catalyst

Authors: Jian Huang, Weixin Qian, Haitao Zhang, Weiyong Ying

Abstract:

A cobalt-gadolinium catalyst for Fischer-Tropsch synthesis was prepared by the impregnation method with commercial silica gel, and its texture properties were characterized by BET, XRD, and TPR. The catalytic performance of the catalyst was tested in a fixed-bed reactor. The results showed that the addition of gadolinium to the cobalt catalyst may decrease the size of the cobalt particles and increase the dispersion of the catalytically active cobalt phases. The carbon number distributions for the catalysts were calculated with the ASF equation.

Keywords: Fischer-Tropsch synthesis, cobalt-based catalysts, gadolinium, carbon number distributions

Procedia PDF Downloads 379
1847 A Prediction Model of Tornado and Its Impact on Architecture Design

Authors: Jialin Wu, Zhiwei Lian, Jieyu Tang, Jingyun Shen

Abstract:

A tornado is a severe and unpredictable natural disaster with an important impact on people's production and life. The probability of being hit by tornadoes in China was analyzed considering the principles of tornado formation. Suggestions on layout and shape for newly built buildings are then provided, combined with the characteristics of tornado wind fields. Fuzzy clustering and inverse closeness methods were used to evaluate the probability levels of tornado risk in various provinces based on classification and ranking, and GIS was adopted to display the results. Finally, the wind field of a single-vortex tornado was studied to discuss the optimized design of rural low-rise houses, with Yancheng, Jiangsu as an example. This paper may provide enough data to support building and urban design in some specific regions.

Keywords: tornado probability, computational fluid dynamics, fuzzy mathematics, optimal design

Procedia PDF Downloads 136
1846 Calibrating Risk Factors for Road Safety in Low Income Countries

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many individuals die or are harmed on streets around the globe, which calls for more specific solutions to transport safety issues. The International Road Assessment Program (iRAP) is one of the models that considers the many variables which influence road user safety. In iRAP, roads are partitioned into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These ratings are calculated from risk factors which represent the effect of geometric and traffic conditions on road safety. The outputs of the iRAP methodology are the countermeasures that can be used to enhance safety levels and reduce fatality numbers. These countermeasures can be used independently, as a single treatment, or in combination with other countermeasures for a section or an entire road. There is a general understanding that the efficiency of a countermeasure is liable to be reduced when it is used in combination with other countermeasures; that is, the crash reduction estimates of single countermeasures cannot simply be summed. In the iRAP model, fatality estimates are calculated using a specific methodology; however, this methodology suffers from overestimation. Therefore, this study has developed a calibration method to estimate fatality numbers more accurately.

Keywords: crash risk factors, international road assessment program, low-income countries, road safety

Procedia PDF Downloads 146
1845 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo

Authors: Li Minghui, Min Shaorong, Zhang Jun

Abstract:

This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. It develops, in order, a trajectory model, a self-guided detection model, a vessel evasion model, and an anti-torpedo error model in three-dimensional space, to make up for the deficiency of previous research that analyzed confrontation models two-dimensionally. Then, using the Monte Carlo method, the confrontation process of evasion is simulated in MATLAB. Finally, the main factors which determine the vessel's survival probability are quantitatively analyzed. The results show that the evasion relative bearing and speed affect the vessel's survival probability significantly. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo's alarming range and alarming relative bearing, improving the alarming range and positioning accuracy, and reducing the response time against the torpedo will improve the vessel's survival probability significantly.

Keywords: acoustic homing torpedo, vessel evasion, monte carlo method, torpedo defense, vessel's survival probability

Procedia PDF Downloads 455
1844 Determinants of Probability Weighting and Probability Neglect: An Experimental Study of the Role of Emotions, Risk Perception, and Personality in Flood Insurance Demand

Authors: Peter J. Robinson, W. J. Wouter Botzen

Abstract:

Individuals often over-weight low probabilities and under-weight moderate to high probabilities; however, very low probabilities are either significantly over-weighted or neglected. Little is known about the factors affecting probability weighting in Prospect Theory related to emotions specific to risk (anticipatory and anticipated emotions), the threshold of concern, or personality traits like locus of control. This study provides these insights by examining factors that influence probability weighting in the context of flood insurance demand in an economic experiment. In particular, we focus on determinants of flood probability neglect to provide recommendations for improved risk management. In addition, results obtained using real incentives and no performance-based payments are compared in the experiment with high experimental outcomes. Based on data collected from 1,041 Dutch homeowners, we find that flood probability neglect is related to anticipated regret, worry, and the threshold of concern. Moreover, locus of control and regret affect probabilistic pessimism. Nevertheless, we do not observe strong evidence that incentives influence flood probability neglect or probability weighting. The results show that low, moderate, and high flood probabilities are under-weighted, which is related to framing in the flooding context and the degree of realism respondents attach to high-probability property damages. We suggest several policies to overcome the psychological factors related to under-weighting flood probabilities and so improve flood preparations. These include policies that promote better risk communication to enhance insurance decisions for individuals with a high threshold of concern, and education and information provision to change the behaviour of internal-locus-of-control types as well as people who see insurance as an investment. Multi-year flood insurance may also prevent short-sighted behaviour by people who have a tendency to regret paying for insurance. Moreover, bundling low-probability/high-impact risks with more immediate risks may achieve an overall covered risk which is less likely to be judged as falling below thresholds of concern. These measures could aid the development of a flood insurance market in the Netherlands, for which we find there to be demand.
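
The inverse-S-shaped weighting pattern described above is commonly parameterized by the Tversky-Kahneman (1992) function; a sketch follows, where γ = 0.61 is their published estimate for gains, not a value from this study:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting:
    w(p) = p**g / (p**g + (1-p)**g)**(1/g).
    Over-weights small probabilities and under-weights large ones.
    Probability neglect corresponds to treating w(p) as 0 below a
    decision-maker's threshold of concern."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
```

For a 1-in-100-year flood, w(0.01) > 0.01 under this function, whereas a respondent whose threshold of concern lies above 0.01 neglects the risk entirely.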

Keywords: flood insurance demand, prospect theory, risk perceptions, risk preferences

Procedia PDF Downloads 274
1843 Use of Multistage Transition Regression Models for Credit Card Income Prediction

Authors: Denys Osipenko, Jonathan Crook

Abstract:

Because of the variety of card holders' behaviour types and income sources, each consumer account can move through a variety of states. An account can be inactive, transactor, revolver, delinquent, or defaulted, and each state requires an individual model for income prediction. Estimating transition probabilities between states at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates transition probability estimation approaches for credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. Prediction accuracy for conditional logistic regression depends on the order of stages of the conditional binary logistic regressions. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without priorities. Further investigations can therefore concentrate on alternative modeling approaches such as discrete choice models.

Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability

Procedia PDF Downloads 486
1842 A Stochastic Approach to Extreme Wind Speeds Conditions on a Small Axial Wind Turbine

Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph

Abstract:

In this paper, to model a real-life wind turbine, a probabilistic approach is proposed to model the dynamics of the blade elements of a small axial wind turbine under extreme stochastic wind speed conditions. It was found that the power and torque probability density functions decrease at these extreme wind speeds but are not infinite. Moreover, we also found that it is possible to stabilize the power coefficient (stabilizing the output power) above rated wind speeds by tuning some control parameters. This method helps to explain the effect of turbulence on the quality and quantity of the harnessed power and aerodynamic torque.

Keywords: probability, probability density function, stochastic, turbulence

Procedia PDF Downloads 586
1841 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability

Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa

Abstract:

COVID-19's rapid spread caused a dramatic change in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform without neglecting health: printed modular distance learning. The Philippines' K-12 curriculum includes Statistics and Probability as one of its key courses, as it offers students the knowledge to evaluate and comprehend data. Students, however, have difficulty understanding the concepts of the normal distribution, and the Self-Learning Module in Statistics and Probability on the normal distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of a self-learning module on the academic performance of students in Statistics and Probability; it will also explore students' perception of the quality of the created module. Despite the availability of Self-Learning Modules in Statistics and Probability in the Philippines, there is still little literature that discusses their effectiveness in improving the performance of Senior High School students. In this study, a Self-Learning Module on the normal distribution is evaluated using a quasi-experimental design. STEM students in Grade 11 from National University's Nazareth School will be the study's participants, chosen by purposive sampling; Google Forms will be utilized to find at least 100 of them. The research instrument consists of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the normal distribution, and a Likert-scale survey to evaluate how students perceived the self-learning module. Pre-test, post-test, and Likert-scale survey data will be gathered and analyzed with Jeffreys's Amazing Statistics Program (JASP) software.

Keywords: self-learning module, academic performance, statistics and probability, normal distribution

Procedia PDF Downloads 114
1840 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model

Authors: Zina Benouaret, Djamil Aissani

Abstract:

In this work, we introduce the qualitative and quantitative concepts of the strong stability method in a risk process modeling two lines of business of the same insurance company, or an insurance company and a reinsurance company that divide both claims and premiums between them in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is applied to estimate the approximation error of a model with perturbed parameters by the considered model. In the stability bound obtained, all constants are written explicitly.

Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis

Procedia PDF Downloads 249
1839 Confidence Intervals for Quantiles in the Two-Parameter Exponential Distributions with Type II Censored Data

Authors: Ayman Baklizi

Abstract:

Based on type II censored data, we consider interval estimation of the quantiles of the two-parameter exponential distribution and of the difference between the quantiles of two independent two-parameter exponential distributions. We derive asymptotic intervals and Bayesian intervals, as well as intervals based on the generalized pivot variable. We also include some bootstrap intervals in our comparisons. The performance of these intervals is investigated in terms of their coverage probabilities and expected lengths.
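
Of the interval types compared, the percentile bootstrap is the easiest to sketch; the sample size, quantile, and replication count below are arbitrary choices for illustration:

```python
import random

def bootstrap_quantile_ci(sample, p=0.5, level=0.95, reps=2000, seed=1):
    """Percentile-bootstrap confidence interval for the p-th quantile."""
    rng = random.Random(seed)
    def quantile(xs):
        s = sorted(xs)
        return s[min(int(p * len(s)), len(s) - 1)]
    stats = sorted(
        quantile([rng.choice(sample) for _ in sample]) for _ in range(reps)
    )
    lo_i = int((1.0 - level) / 2.0 * reps)
    hi_i = int((1.0 + level) / 2.0 * reps) - 1
    return stats[lo_i], stats[hi_i]

# Simulated two-parameter exponential data: location 0.5, scale 1
rng = random.Random(7)
data = [0.5 + rng.expovariate(1.0) for _ in range(100)]
lo, hi = bootstrap_quantile_ci(data, p=0.5)
```

The true median here is 0.5 + ln 2 ≈ 1.19; coverage probability and expected length are then estimated by repeating this over many simulated samples.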

Keywords: asymptotic intervals, Bayes intervals, bootstrap, generalized pivot variables, two-parameter exponential distribution, quantiles

Procedia PDF Downloads 414
1838 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery

Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi

Abstract:

We present a method for the early detection of forest fires from a thermal infrared satellite image, using the image matrix of the probability of belonging. The principle of the method is to compare a theoretical mathematical model to an experimental one. We consider each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divided these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation was chosen to generate random variables, so that each interval behaves naturally like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered the image background. After a few pre-treatments of the image, the parameters of the theoretical Gaussian model were extracted from the modeled image; these parameters are used to calculate the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as elements foreign to the model, so they will have a low probability, while pixels belonging to the image background will have a high probability. Finally, we present the inverse of the matrix of probabilities of these intervals for better fire detection.

Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical Gaussian model, thermal infrared matrix image

Procedia PDF Downloads 142
1837 'Call Drop': A Problem for Handover Minimizing the Call Drop Probability Using Analytical and Statistical Method

Authors: Anshul Gupta, T. Shankar

Abstract:

In this paper, we analyze call drops in order to provide a good quality of service to the user. Optimizing the call drop rate increases the coverage area and reduces the interference and congestion created in a network. A handover is the transfer of a call from one cell site to another during the call. We analyze the whole network by two methods: a statistical model and an analytical model. In the statistical model, we collected network data during the busy hour and over a normal 24-hour period; in the analytical model, we use an equation from which the call drop probability is computed. By avoiding unnecessary handovers, the number of calls per hour can be increased. The most important parameter, on which the whole paper centers, is the coefficient of variation.
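The key statistic can be computed in a few lines; the hourly drop counts below are hypothetical, for illustration only:

```python
import statistics

# Hypothetical call-drop counts for one cell site (illustrative data).
busy_hour = [42, 51, 47, 58, 44, 49]
off_peak = [12, 9, 14, 11, 10, 13]

def coefficient_of_variation(xs):
    # CV = standard deviation / mean: a scale-free measure of variability,
    # useful for comparing drop behaviour across hours with different traffic.
    return statistics.stdev(xs) / statistics.mean(xs)

print(f"busy-hour CV: {coefficient_of_variation(busy_hour):.3f}")
print(f"off-peak CV:  {coefficient_of_variation(off_peak):.3f}")
```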

Keywords: coefficient of variation, mean, standard deviation, call drop probability, handover

Procedia PDF Downloads 491
1836 Probability Model Accidents of Motorcyclist Based on Driver's Personality

Authors: Margareth E. Bolla, Ludfi Djakfar, Achmad Wicaksono

Abstract:

The increase in the number of motorcycle users in Indonesia is in line with the increase in accidents involving motorcycles. Several previous studies have shown that humans are the biggest factor causing accidents and that a driver's personality affects his or her behavior on the road. This study was conducted to see how a person's personality traits affect the probability of having an accident while driving. The Big Five Inventory (BFI) questionnaire and the Honda Riding Trainer (HRT) simulator were used as measuring tools, and logistic regression was used for the analysis. The descriptive analysis of the respondents' personalities based on the BFI shows that the majority of drivers have neuroticism as the dominant trait (34%), while the smallest group consists of drivers whose dominant trait is openness (6%). The percentage of motorists not involved in an accident was 54%. The logistic regression analysis yields the following model: Y = -3.852 - 0.288 X1 + 0.596 X2 + 0.429 X3 - 0.386 X4 - 0.094 X5 + 0.436 X6 + 0.162 X7, where hypothesis testing indicates that the variables openness, conscientiousness, extraversion, agreeableness, neuroticism, history of traffic accidents, and age at starting driving did not have a significant effect on the probability of a motorcyclist being involved in an accident.
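Using the coefficients reported above, the fitted model is turned into a probability through the logistic link; the input trait scores and their scales below are hypothetical:

```python
import math

# Coefficients as reported in the abstract: intercept, then X1..X7
# (openness, conscientiousness, extraversion, agreeableness, neuroticism,
#  accident history, age at starting driving).
coef = [-3.852, -0.288, 0.596, 0.429, -0.386, -0.094, 0.436, 0.162]

def accident_probability(x):
    # Logistic link: P(accident) = 1 / (1 + exp(-Y)), Y = b0 + sum(bi * Xi).
    y = coef[0] + sum(b * xi for b, xi in zip(coef[1:], x))
    return 1.0 / (1.0 + math.exp(-y))

# Illustrative input; the predictor values are made up for the example.
p = accident_probability([3.0, 3.5, 2.8, 3.2, 4.0, 1.0, 17.0])
print(f"estimated accident probability: {p:.3f}")
```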

Keywords: accidents, BFI, probability, simulator

Procedia PDF Downloads 146
1835 Application of the Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since the transition from the stable state to the failed state is gradual rather than sharp, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance and rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models provide a better understanding of the reliability and structural failure of such works, particularly when computing the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering practice. In our case, level II methods are applied via a limit-state study: the probability of failure is estimated by analytical methods of the FORM (First-Order Reliability Method) and SORM (Second-Order Reliability Method) types. For comparison, a level III method is also used; it performs a full analysis of the problem by integrating the joint probability density function of the random variables over the safety domain using Monte Carlo simulation.
Taking into account the change in stresses under the load combinations acting on the dam (normal, exceptional, and extreme), the calculation results provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable failure probability values, especially in the case of increased uplift under a hypothetical failure of the drainage system.
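The level III step can be sketched as a direct Monte Carlo estimate for a simple sliding limit state g = R - S; the distributions and parameter values below are hypothetical, chosen only to make the mechanics concrete:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical limit state g = R - S: resistance R (shear strength,
# lognormal) against load effect S (normal); parameters are illustrative.
n = 1_000_000
R = rng.lognormal(mean=np.log(10.0), sigma=0.1, size=n)
S = rng.normal(loc=7.0, scale=1.0, size=n)

# Level III estimate: the failure probability is the fraction of samples
# for which the limit state function is negative.
p_f = np.mean(R - S < 0.0)
print(f"estimated failure probability: {p_f:.4f}")
```

FORM/SORM would instead approximate the limit-state surface analytically at the design point; the Monte Carlo estimate above serves as the reference against which those approximations are checked.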

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 318
1834 Evaluation of DNA Paternity Testing Accuracy of Child Trafficking Cases

Authors: Wing Kam Fung, Kexin Yu

Abstract:

Child trafficking has been a serious problem in modern China. The Chinese government has established a national anti-trafficking DNA database to help reunite missing children with their families. The database collects DNA information from missing children's parents and from trafficked and homeless children, then conducts paternity tests to find matched pairs. This paper considers the matching accuracy in such cases by examining the exclusion probability in paternity testing. First, the situation of child trafficking in China is introduced. Next, derivations of the exclusion probability for both the one-parent and two-parent cases are given, followed by extensions that allow for one or two mutations. The accuracy of paternity testing in child trafficking cases is then assessed using the exclusion probabilities and available data. Finally, the number of loci that should be used to ensure a correct match is investigated.
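A simulation sketch of the single-locus exclusion probability for the one-parent case (the allele frequencies are illustrative and mutations are ignored), together with its combination across independent loci:

```python
import random

random.seed(3)

# Hypothetical allele frequencies at one STR locus (illustrative values).
alleles = ["A", "B", "C", "D"]
freqs = [0.4, 0.3, 0.2, 0.1]

def draw_genotype():
    return (random.choices(alleles, freqs)[0], random.choices(alleles, freqs)[0])

def excluded(child, man):
    # One-parent case: a random man is excluded when he shares
    # no allele with the child.
    return not (set(child) & set(man))

# Monte Carlo estimate of the single-locus exclusion probability:
# child of a random mother/father pair tested against an unrelated man.
reps, count = 100_000, 0
for _ in range(reps):
    mother, father = draw_genotype(), draw_genotype()
    child = (random.choice(mother), random.choice(father))
    count += excluded(child, draw_genotype())
pe_locus = count / reps
print(f"single-locus exclusion probability: {pe_locus:.3f}")

# With L independent loci, the overall non-exclusion probability is
# (1 - PE)^L, so adding loci sharply increases matching accuracy.
pe_10_loci = 1 - (1 - pe_locus) ** 10
print(f"10-locus exclusion probability: {pe_10_loci:.3f}")
```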

Keywords: child trafficking, DNA database, exclusion probability, paternity testing

Procedia PDF Downloads 457
1833 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable hazard rate shapes is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and thereby gaining greater flexibility in analysing and modeling various data types. The proposed distribution contains a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to those of its sub-models (the Weibull, log-logistic, and Burr XII distributions) and of other three-parameter parametric survival distributions, such as the exponentiated Weibull, three-parameter lognormal, three-parameter gamma, three-parameter Weibull, and three-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provides a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out and the performance of Gibbs sampling on the data set is assessed.
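The hazard shapes motivating the extension appear already in the classical two-parameter log-logistic sub-model, whose hazard is h(t) = (beta/alpha)(t/alpha)^(beta-1) / (1 + (t/alpha)^beta): it is decreasing for beta <= 1 and unimodal for beta > 1. The parameter values below are illustrative:

```python
import numpy as np

def loglogistic_hazard(t, alpha, beta):
    # Hazard of the classical two-parameter log-logistic distribution.
    z = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1 + z)

t = np.linspace(0.05, 10, 400)

# beta <= 1: monotone decreasing hazard
h_dec = loglogistic_hazard(t, alpha=1.0, beta=0.8)
# beta > 1: unimodal hazard (rises, then falls)
h_uni = loglogistic_hazard(t, alpha=1.0, beta=2.5)

print("decreasing hazard:", h_dec[0] > h_dec[-1])
print("unimodal peak at t ≈", round(float(t[np.argmax(h_uni)]), 2))
```

The generalized distribution of the paper adds a third parameter precisely to reach the increasing and bathtub-shaped hazards that this classical form cannot produce.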

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 202
1832 Performance of Nakagami Fading Channel over Energy Detection Based Spectrum Sensing

Authors: M. Ranjeeth, S. Anuradha

Abstract:

Spectrum sensing is the main feature of cognitive radio technology; it detects the presence of primary users in a licensed spectrum. In this paper, we compare the theoretical detection probabilities for different fading environments (Rayleigh, Rician, and Nakagami-m channels) with simulation results obtained using energy-detection-based spectrum sensing. The numerical results are plotted as P_d versus P_f for different SNR values and fading parameters. It is observed that the Nakagami fading channel performs better than the other fading channels under energy detection. A MATLAB simulation test bench was implemented to evaluate the performance of energy detection in the different fading environments.
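A simulation sketch of energy detection over Nakagami-m fading; the signal model, sample size, and target P_f below are assumptions, and m = 1 reduces to the Rayleigh case:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 64                    # samples per sensing window (assumed)
snr = 10 ** (5.0 / 10)    # 5 dB average SNR (assumed)
trials = 20_000

def energy(x):
    return np.sum(np.abs(x) ** 2, axis=-1)

# Noise-only statistics give an empirical threshold for a target P_f = 0.1.
noise = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
thresh = np.quantile(energy(noise), 0.9)

def detection_prob(m):
    # Nakagami-m fading: the channel power gain is Gamma(shape=m, scale=1/m).
    gain = rng.gamma(shape=m, scale=1.0 / m, size=(trials, 1))
    phase = np.exp(1j * 2 * np.pi * rng.random((trials, N)))
    s = np.sqrt(snr * gain) * phase
    w = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)
    return np.mean(energy(s + w) > thresh)

# m = 1 is Rayleigh; larger m means milder fading and a higher P_d.
p_ray = detection_prob(1.0)
p_nak = detection_prob(3.0)
print(f"P_d, Rayleigh (m=1):     {p_ray:.3f}")
print(f"P_d, Nakagami-m (m=3):   {p_nak:.3f}")
```

Sweeping the threshold instead of fixing P_f = 0.1 traces the P_d-versus-P_f curves the paper plots.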

Keywords: spectrum sensing, energy detection, fading channels, probability of detection, probability of false alarm

Procedia PDF Downloads 532
1831 Unit Root Tests Based On the Robust Estimator

Authors: Wararit Panichkitkosolkul

Abstract:

Unit root tests based on a robust estimator for the first-order autoregressive process are proposed and compared with unit root tests based on the ordinary least squares (OLS) estimator. The percentiles of the null distributions of the unit root tests are also reported. The empirical Type I error probabilities and powers of the unit root tests are estimated via Monte Carlo simulation. The simulation results show that all the unit root tests control the probability of Type I error in all situations, and that the empirical power of the tests based on the robust estimator is higher than that of the tests based on the OLS estimator.
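A sketch of such a size-and-power study for the OLS-based test only, using the no-constant Dickey-Fuller t-statistic with its asymptotic 5% critical value of -1.95; the robust estimator of the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def df_tstat(y):
    # OLS t-statistic for (phi - 1) in:  dy_t = (phi - 1) * y_{t-1} + eps_t
    dy, ylag = np.diff(y), y[:-1]
    b = np.sum(ylag * dy) / np.sum(ylag ** 2)
    resid = dy - b * ylag
    se = np.sqrt(np.sum(resid ** 2) / (len(dy) - 1) / np.sum(ylag ** 2))
    return b / se

def rejection_rate(phi, n=100, reps=2000, crit=-1.95):
    # crit is the asymptotic 5% Dickey-Fuller critical value (no constant).
    rejections = 0
    for _ in range(reps):
        eps = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = phi * y[t - 1] + eps[t]
        rejections += df_tstat(y) < crit
    return rejections / reps

size = rejection_rate(phi=1.0)    # empirical Type I error under the unit root
power = rejection_rate(phi=0.9)   # empirical power against a stationary AR(1)
print(f"empirical size:  {size:.3f}")
print(f"empirical power: {power:.3f}")
```

Replacing the OLS slope inside df_tstat with a robust estimate, and re-tabulating the null percentiles, reproduces the comparison the abstract describes.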

Keywords: autoregressive, ordinary least squares, Type I error, power of the test, Monte Carlo simulation

Procedia PDF Downloads 288