Probability Distributions
This part is divided into two classes of models. The first group covers discrete distributions (Bernoulli, Binomial, Geometric, Negative Binomial, Hypergeometric, Multinomial, and Poisson), where probabilities are attached to countable outcomes through a probability mass function (PMF). The second group covers continuous distributions (Uniform, Normal, Chi, Chi-squared, Student’s t, F, Exponential, Lognormal, Gamma, and Beta), where probabilities are computed from a probability density function (PDF) and integrated through the cumulative distribution function (CDF). This distinction matters because formulas, interpretation, and software functions differ between PMF-based and PDF-based models.
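The PMF-versus-PDF distinction can be made concrete with a minimal sketch using SciPy (the parameter values below are illustrative assumptions, not taken from this handbook):

```python
# Contrast a PMF-based (discrete) and a PDF/CDF-based (continuous) model.
from scipy import stats

# Discrete: Binomial(n=10, p=0.5) attaches probability mass to integer counts.
binom = stats.binom(n=10, p=0.5)
print(binom.pmf(5))  # P(X = 5) = 252/1024 ≈ 0.2461

# Continuous: Normal(0, 1) has a density; probabilities come from the CDF.
norm = stats.norm(loc=0, scale=1)
print(norm.pdf(0.0))                     # density at 0 (not a probability)
print(norm.cdf(1.96) - norm.cdf(-1.96))  # P(-1.96 < X < 1.96) ≈ 0.95
```

Note that `pdf` values are densities, not probabilities: only integrals of the PDF (differences of the CDF) are probabilities.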
The continuous block is organised in four groups. The first group (Uniform through F) contains the general-purpose distributions that underpin classical inference and hypothesis testing. The second group — Exponential, Lognormal, Gamma, and Beta — extends coverage to applied modelling: waiting times and time-to-failure (Exponential, Gamma), quantities driven by multiplicative growth (Lognormal), and proportions or probabilities that are themselves uncertain (Beta). The third group extends the toolkit further with thirteen additional distributions: Weibull (flexible hazard rates in reliability), Pareto (power-law heavy tails), Inverse Gamma (Bayesian variance prior), Rayleigh (2D Gaussian magnitude), Erlang (integer-stage queuing), Logistic (sigmoid CDF, logistic regression foundation), Laplace (double-Exponential, L1 regression), Gumbel (extreme-value maxima), Cauchy (undefined moments, CLT failure), Triangular (PERT scheduling with min/mode/max), Power (bounded proportions, Beta special case), Beta Prime (unbounded odds-ratio model), and the Sample Correlation distribution (exact null distribution for testing \(\rho = 0\)). A fourth group provides seven further distributions that complete the relationship network: Dirichlet (multivariate Beta for compositional data), GEV (unified extreme-value family), Fréchet (heavy-tailed extremes), Noncentral t and Noncentral F (power analysis for t-tests and ANOVA), Inverse Chi-squared (Bayesian variance prior), and Maxwell-Boltzmann (molecular speeds, Chi with \(k = 3\)). These distributions connect back to the earlier groups through algebraic relationships, discrete analogs, and asymptotic links, reinforcing the coherence of the overall framework. An interactive map of all distribution relationships is available in 51 Distribution Relationship Map.
Distribution Selection Guide
Use the following table as a quick first-pass guide when selecting a distributional model.
| Distribution | Data type | Support | Typical use-case |
|---|---|---|---|
| Bernoulli | Binary outcome (single trial) | \(\{0,1\}\) | One yes/no event (success/failure) |
| Binomial | Count of successes in fixed \(n\) trials | \(\{0,1,\dots,n\}\) | Number of successes from repeated Bernoulli trials |
| Geometric | Number of failures before first success | \(\{0,1,2,\dots\}\) | Attempts before first conversion/failure event |
| Negative Binomial | Number of failures before \(r\)-th success | \(\{0,1,2,\dots\}\) | Attempts before reaching a target number of successes |
| Hypergeometric | Count of successes in sample without replacement | \(\{\max(0,n-(N-M)),\dots,\min(n,M)\}\) | Audit/quality-control sampling from a finite population |
| Multinomial | Counts across \(K\) categories with fixed \(n\) | \(\{(x_1,\dots,x_K): \sum_k x_k=n\}\) | Category-count vectors (survey choices, class counts) |
| Poisson | Count data per interval | \(\{0,1,2,\dots\}\) | Number of events in time/space with approximately constant rate |
| Uniform \(U(a,b)\) | Continuous measurement | \([a,b]\) | Random sampling over an interval; simulation mechanism |
| Normal \(N(\mu,\sigma^2)\) | Continuous measurement | \(\mathbb{R}\) | Symmetric measurement noise, many aggregate phenomena |
| Chi \(\chi(n,\sigma)\) | Continuous nonnegative magnitude | \([0,\infty)\) | Norm / root-mean-square quantities from normal components |
| Chi-squared \(\chi^2(n)\) | Continuous nonnegative test statistic | \([0,\infty)\) | Variance-related statistics; sums of squared standard normals |
| Chi-squared \(\chi^2(n,\sigma)\) | Continuous nonnegative (scaled) statistic | \([0,\infty)\) | Scaled chi-squared forms under alternative parameterization |
| Student t \(t(n)\) | Continuous heavy-tailed statistic | \(\mathbb{R}\) | Mean inference when \(\sigma\) is unknown (especially small samples) |
| Fisher F \(F(m,n)\) | Continuous positive ratio statistic | \((0,\infty)\) | Ratio of variances; ANOVA and regression F-tests |
| Exponential \(\text{Exp}(\lambda)\) | Continuous nonneg waiting time | \([0,\infty)\) | Time between events; time to failure (constant hazard rate) |
| Lognormal \(\text{LnN}(\mu,\sigma^2)\) | Continuous positive measurement | \((0,\infty)\) | Multiplicative phenomena: income, prices, concentrations |
| Gamma \(\text{Gamma}(k,\lambda)\) | Continuous nonneg waiting time | \((0,\infty)\) | Waiting time until \(k\)-th event; flexible positive-skew model |
| Beta \(\text{Beta}(\alpha,\beta)\) | Continuous bounded proportion | \([0,1]\) | Proportions, rates, probabilities; Bayesian prior for Binomial |
| Weibull \(\text{Weibull}(k,\lambda)\) | Continuous nonneg lifetime | \([0,\infty)\) | Reliability; increasing hazard (\(k>1\)), constant (\(k=1\)), decreasing (\(k<1\)) |
| Pareto \(\text{Pareto}(x_m,\alpha)\) | Continuous power-law | \([x_m,\infty)\) | Income, city sizes, internet traffic: 80/20 rule |
| Inv. Gamma \(\text{InvGamma}(\alpha,\beta)\) | Continuous nonneg scale | \((0,\infty)\) | Bayesian prior for variance \(\sigma^2\); reciprocal of Gamma variates |
| Rayleigh \(\text{Rayleigh}(\sigma)\) | Continuous nonneg magnitude | \([0,\infty)\) | 2D Gaussian magnitude; wind speed; wireless signal envelope |
| Erlang \(\text{Erlang}(k,\lambda)\) | Continuous nonneg (integer phases) | \((0,\infty)\) | Total service time across \(k\) Exponential stages; queuing systems |
| Logistic \(\text{Logistic}(\mu,s)\) | Continuous symmetric | \(\mathbb{R}\) | Heavy-tailed Normal alternative; foundation of logistic regression |
| Laplace \(\text{Laplace}(\mu,b)\) | Continuous symmetric | \(\mathbb{R}\) | L1-regression residuals; double-Exponential impulsive-noise model |
| Gumbel \(\text{Gumbel}(\mu,\beta)\) | Continuous right-skewed | \(\mathbb{R}\) | Annual maxima: flood levels, wind speeds, maximum temperatures |
| Cauchy \(\text{Cauchy}(x_0,\gamma)\) | Continuous heavy-tailed | \(\mathbb{R}\) | Ratio of two Normals; all moments undefined; CLT does not apply |
| Power \(\text{Power}(\alpha)\) | Continuous bounded | \([0,1]\) | Bounded proportions skewed toward 0 or 1; Beta\((\alpha,1)\) |
| Beta Prime \(\text{BetaPrime}(\alpha,\beta,\theta)\) | Continuous nonneg ratio | \((0,\infty)\) | Odds ratios; Bayesian scale priors; unbounded Beta generalization |
| Triangular \(\text{Triangular}(a,b,c)\) | Continuous bounded | \([a,b]\) | PERT project risk; Monte Carlo when only min, max, mode are known |
| Corr. r \(\text{CorDist}(n)\) | Sampling distribution | \((-1,1)\) | Testing \(H_0:\rho=0\); exact distribution of sample correlation under independence |
| Dirichlet \(\text{Dir}(\boldsymbol{\alpha})\) | Multivariate continuous | \(K\)-simplex | Compositional data (market share, soil, diet); conjugate prior for Multinomial |
| GEV \(\text{GEV}(\mu,\sigma,\xi)\) | Continuous extreme value | Depends on \(\xi\) | Block maxima modelling: floods, wind speeds, financial tail risk |
| Fréchet \(\text{Fréchet}(\alpha,m,s)\) | Continuous heavy-tailed | \((m,\infty)\) | Maxima from heavy-tailed parents: extreme financial losses, catastrophic claims |
| Noncentral \(t\) \(t(\nu,\delta)\) | Continuous | \(\mathbb{R}\) | Power analysis and sample-size planning for \(t\)-tests |
| Noncentral \(F\) \(F(d_1,d_2,\lambda)\) | Continuous nonneg | \([0,\infty)\) | Power analysis for ANOVA and regression \(F\)-tests |
| Inv. Chi-sq \(\text{InvChi}^2(\nu)\) | Continuous nonneg | \((0,\infty)\) | Bayesian conjugate prior for Normal variance \(\sigma^2\) |
| Maxwell-Boltzmann \(\text{MB}(a)\) | Continuous nonneg | \([0,\infty)\) | Molecular speed distributions; Chi(\(k\!=\!3\)) with physical scaling |
| Gaussian Naive Bayes | Supervised classification with continuous predictors | Class labels with feature likelihoods on \(\mathbb{R}\) | Class prediction using Bayes theorem with class-specific normal likelihoods |
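Many rows of the table map directly onto `scipy.stats` distribution objects. A brief sketch, with illustrative (assumed) parameter values; note that SciPy parameterizes the Exponential by its scale \(1/\lambda\) rather than its rate:

```python
# A few table rows as scipy.stats objects (parameter values are illustrative).
from scipy import stats

# Exponential waiting time with rate lambda = 2 (SciPy uses scale = 1/lambda).
expon = stats.expon(scale=1 / 2)
print(expon.mean())  # 1/lambda = 0.5

# Beta(2, 5): a bounded proportion skewed toward 0.
beta = stats.beta(2, 5)
print(beta.mean())   # alpha / (alpha + beta) = 2/7

# Weibull with shape k = 1.5 (increasing hazard); SciPy calls it weibull_min.
weib = stats.weibull_min(c=1.5, scale=1.0)
print(weib.median())
```

Each object exposes the same interface (`pdf`/`pmf`, `cdf`, `ppf`, `mean`, `var`, `rvs`), which makes it easy to swap candidate models during selection.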
Goodness-of-Fit: Quick Intuition
When we say that a distribution “fits” data, we mean that the model-generated frequencies (or cumulative probabilities) are close to what we observed in the sample.
- A visual check (histogram + fitted curve, QQ-plot, ECDF comparison) is a useful first pass.
- A formal goodness-of-fit test quantifies mismatch and provides a p-value under a clear null hypothesis (“the data follow this distribution”).
- In this handbook, the formal treatment is given in Hypothesis Testing: see Pearson Chi-Squared Test (binned/count-data goodness-of-fit) and 125 Kolmogorov-Smirnov Test (CDF-based goodness-of-fit).
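As a minimal sketch of the CDF-based approach (the sample below is simulated, and the parameters are illustrative assumptions), the Kolmogorov-Smirnov test compares the empirical CDF against a fully specified null distribution:

```python
# CDF-based goodness-of-fit check with the Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=10, scale=2, size=200)

# H0: the sample follows N(10, 2^2).
# A large p-value means no evidence against the hypothesized fit.
stat, p_value = stats.kstest(sample, stats.norm(loc=10, scale=2).cdf)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
```

For binned count data, `scipy.stats.chisquare` implements the Pearson chi-squared test mentioned above in the same spirit.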
Statistical Measures for Probability Distributions
The mathematical description of Probability Distributions depends on several statistical concepts. You should have (at least) an intuitive understanding of the following relevant concepts (which can be found in Descriptive Statistics) before proceeding:
Arithmetic Mean
This is the most common measure of Central Tendency. It is often referred to simply as “the mean” or “the average”, and is obtained by dividing the sum of all observations by the number of observations.
Median
This is another measure of Central Tendency which is defined as the “middle observation” after having sorted the data. The Median is not affected by extremely small or large values.
Mode
The Mode can be interpreted as the value (of the dataset) which is most frequently observed. If every observation is thought of as a “vote”, then the Mode is the value that receives the most votes.
Variance
The Variance is a measure of variation or uncertainty. A high Variance implies that the observations are very different from each other (and the mean). When the Variance is low, the observed values are close to each other (and the mean).
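The four measures above can be sketched with Python's standard-library `statistics` module (the data values are illustrative; the outlier demonstrates the Median's robustness):

```python
# Mean, Median, Mode, and Variance of a small illustrative sample.
import statistics

data = [2, 3, 3, 5, 7, 9, 41]  # note the outlier 41

print(statistics.mean(data))       # pulled upward by the outlier: 10
print(statistics.median(data))     # middle sorted value, robust: 5
print(statistics.mode(data))       # most frequent value: 3
print(statistics.pvariance(data))  # population variance
```

Comparing the Mean (10) with the Median (5) already hints at the outlier's influence, which the Variance then quantifies.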
Histogram
If you are not familiar with the Histogram you are advised to read 56 Frequency Table, 61 Stem-and-Leaf Plot, and 62 Histogram first.
Skewness and Kurtosis
Data are said to be skewed when the distribution (as can be visualized by the Histogram) is not symmetric. Kurtosis is a property which measures the thickness of the tails of the distribution (as can be visualized by the Histogram).
In this handbook, the symbol \(g_2\) denotes regular kurtosis (not excess kurtosis), so a Normal Distribution has \(g_2 = 3\).
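A brief sketch of estimating both quantities with SciPy (the simulated sample is illustrative). Note that SciPy's `kurtosis` defaults to *excess* kurtosis; passing `fisher=False` yields the \(g_2\) convention used in this handbook, under which Normal data gives values near 3:

```python
# Empirical skewness and regular (non-excess) kurtosis of a Normal sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

print(stats.skew(x))                    # near 0 for symmetric data
print(stats.kurtosis(x, fisher=False))  # near 3 for Normal data (g2)
```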
Centered and Uncentered Moments
Moments are mathematical constructs which can be used to characterize distributions and derive other statistics (such as the Arithmetic Mean, Variance, Skewness, and Kurtosis).
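As a small sketch of the connection (data values are illustrative): the first uncentered (raw) moment is the Arithmetic Mean, and the second centered moment is the population Variance.

```python
# Raw (uncentered) vs central (centered) moments of a sample.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 2.0, 3.0, 4.0])

raw1 = np.mean(x)               # first raw moment = Arithmetic Mean
central2 = stats.moment(x, 2)   # second central moment = population Variance

print(raw1, central2)
```

Higher central moments standardized by powers of the standard deviation give Skewness (third) and Kurtosis (fourth) in the same way.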