
11  Problems

11.1 Bayes’ Theorem

11.1.1 Task 1

Consider the situation where a patient exhibits symptoms that are typical for a particular disease. The disease is relatively rare in this population, with a prevalence of 0.3% (i.e. it affects 3 out of every 1000 persons). A pharmaceutical company developed a diagnostic test that costs €100, which has a reported sensitivity of 90% (i.e. the probability of testing positive, given the patient has the disease). Based on historical data, the company computed that there is an overall probability of 7% of testing positive (and 93% of testing negative). Should the patient spend €100?

(An interactive Shiny app accompanies this task.)

The following probabilities are available:

  • \(P(+) = 7\%\)
  • \(P(+ | D) = 90\%\)
  • \(P(D) = 0.3\%\)

where \(+\) stands for “testing positive” and \(D\) for “having the disease.”

According to Bayes’ Theorem, the probability of interest can be found as follows:

\(P(D | +) = \frac{P(+ | D) \times P(D)}{P(+)} = \frac{0.9 \times 0.003}{0.07} \simeq 3.86\%\)

Without the test, the patient has a probability of 0.3% of having the disease. If the patient undergoes the test and it comes back positive, this probability has increased to 3.86%, which is substantially higher than 0.3% but still very small in any absolute sense. Hence, the real question is whether this increase of probability is worth €100 or not. The answer to this question depends on the utility that the patient attributes to the posterior probability. If the disease is untreatable or if the treatment has severe side-effects (or decreases the patient’s quality of life) it is quite likely that the patient is better off if the money is not spent. How would you decide?

A practical way to formalize this is an expected-loss comparison: compute the expected loss without testing, then compare it to the expected loss with testing (including the €100 test cost and separate losses for false positives/false negatives after observing test outcomes). The preferred decision is the one with the lower expected loss.
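
To make this comparison concrete, here is a minimal R sketch of the posterior computation and the expected-loss idea; the loss value and the assumption that a positive test always leads to effective treatment are purely illustrative placeholders, not part of the problem statement.

# Posterior probability of disease given a positive test (Bayes' Theorem)
p_d     <- 0.003                 # prevalence P(D)
p_pos_d <- 0.90                  # sensitivity P(+ | D)
p_pos   <- 0.07                  # overall probability of a positive test P(+)
p_pos_d * p_d / p_pos            # P(D | +), approximately 0.0386

# Illustrative expected-loss comparison (loss value is an arbitrary placeholder;
# we assume a true positive leads to treatment with no further loss)
loss_missed_disease <- 10000     # hypothetical loss when the disease goes undetected
test_cost           <- 100
loss_without_test <- p_d * loss_missed_disease
loss_with_test    <- test_cost + p_d * (1 - p_pos_d) * loss_missed_disease
c(without_test = loss_without_test, with_test = loss_with_test)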

11.1.2 Task 2

Considering the Likelihoods of Table 9.2 and a prior probability \(\text{P}(real) = 0.5\), what is

\(\text{P}(real|\text{secret wedding in royal family})\)?

We get different results, depending on the choice of \(\alpha\). Assuming that \(\alpha = 1\), we obtain the following posterior probability scores

\[ \begin{gather*} \text{P}(real | \text{secret wedding in royal family}) \propto \\ \text{P} (real) \times \text{P} (\text{secret wedding}_{\alpha = 1} | real) \times \text{P} (\text{royal family}_{\alpha = 1} | real) = \\ 0.5 \times 0.094 \times 0.014 \simeq 0.000658 \end{gather*} \]

and

\[ \begin{gather*} \text{P}(fake | \text{secret wedding in royal family}) \propto \\ \text{P} (fake) \times \text{P} (\text{secret wedding}_{\alpha = 1} | fake) \times \text{P} (\text{royal family}_{\alpha = 1}| fake) = \\ 0.5 \times 0.040 \times 0.088 \simeq 0.00176 \end{gather*} \]

Hence, the actual probability is

\(\text{P}(real|\text{secret wedding in royal family}) = \frac{0.000658}{0.000658+0.00176} \simeq 0.272\)

The word combination “secret wedding” is somewhat more likely in real news articles, whereas “royal family” is far more likely in fake news. The second word combination dominates, and for this reason the overall posterior probability of this news article being real is rather low (0.272).

Note: We ignore the word “in” because there are no likelihoods available for this word, nor would it make sense to compute them.
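
A minimal R sketch of this computation is given below; the likelihoods are the values from Table 9.2 quoted in the solution, and the function name nb_posterior_real is only illustrative.

# Naive Bayes posterior score for "real" vs. "fake" (alpha = 1)
lik_real <- c(secret_wedding = 0.094, royal_family = 0.014)  # P(word | real)
lik_fake <- c(secret_wedding = 0.040, royal_family = 0.088)  # P(word | fake)

nb_posterior_real <- function(prior_real) {
  score_real <- prior_real       * prod(lik_real)
  score_fake <- (1 - prior_real) * prod(lik_fake)
  score_real / (score_real + score_fake)   # normalize the two unnormalized scores
}

nb_posterior_real(0.5)    # approximately 0.272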

11.1.3 Task 3

In the previous task, we selected an arbitrary prior probability. What would happen if we use the data-based prior instead?

Assuming (again) that \(\alpha = 1\), and using the data-based prior that was discussed in the example (i.e. \(\text{P}(real) = 0.736\)), we obtain the following posterior probability scores

\[ \begin{gather*} \text{P}(real | \text{secret wedding in royal family}) \propto \\ \text{P} (real) \times \text{P} (\text{secret wedding}_{\alpha = 1} | real) \times \text{P} (\text{royal family}_{\alpha = 1} | real) = \\ 0.736 \times 0.094 \times 0.014 \simeq 0.000968576 \end{gather*} \]

and

\[ \begin{gather*} \text{P}(fake | \text{secret wedding in royal family}) \propto \\ \text{P} (fake) \times \text{P} (\text{secret wedding}_{\alpha = 1} | fake) \times \text{P} (\text{royal family}_{\alpha = 1}| fake) = \\ (1 - 0.736) \times 0.040 \times 0.088 \simeq 0.00092928 \end{gather*} \]

Hence, the actual probability is

\(\text{P}(real|\text{secret wedding in royal family}) = \frac{0.000968576}{0.000968576+0.00092928} \simeq 0.51\)

It can be concluded that the choice of the prior probability can have an important impact on the prediction. Empirically, the prevalence of real news is still quite large, so we revise our conclusion: it is slightly more probable that the news article is real.
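
Using the illustrative nb_posterior_real function from the sketch in Task 2, only the prior changes:

nb_posterior_real(0.736)  # approximately 0.51 (data-based prior)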

11.1.4 Task 4

Suppose that the Naive Bayes model from the previous task has a sensitivity of 93% (for detecting real news) and a specificity of 87%. What is the probability that a news article which is predicted to be real (such as the one from the previous task, or any other such prediction) is actually real?

We use the simple formulation of Bayes’ Theorem (i.e. Equation 7.3) which states that

\[ \begin{equation} \text{P}(real | prediction) = \frac{\text{P}(prediction | real) \text{P}(real)}{\text{P}(prediction)} \end{equation} \]

which becomes

\[ \begin{equation} \text{P}(real | +) = \frac{\text{P}(+ | real) \text{P}(real)}{\text{P}(+ | real) \text{P}(real) + \text{P}(+ | fake) \text{P}(fake) } \end{equation} \]

(note: the + represents a prediction that the news article is real)

or

\[ \begin{equation} \text{P}(real | +) = \frac{0.93 \times 0.736}{0.93 \times 0.736 + (1 - 0.87) \times (1 - 0.736) } \simeq 95.2\% \end{equation} \]

The same result can be obtained through the odds formula (i.e. Equation 7.4)

\[ \begin{equation} \frac{\text{P}(real | +)}{\text{P}(fake | +)} = \frac{\text{P}(+ | real)}{\text{P}(+ | fake)} \frac{\text{P}(real)}{\text{P}(fake)} = \frac{0.93}{(1 - 0.87)} \frac{0.736}{(1 - 0.736)} = \frac{0.68448}{0.03432} \end{equation} \]

which leads to a probability of \(0.68448 / (0.68448 + 0.03432) \simeq 95.2\%\).
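
Both formulations can be checked with a few lines of R; this is merely a sketch of the arithmetic above.

sens       <- 0.93    # P(+ | real), sensitivity
spec       <- 0.87    # P(- | fake), specificity; hence P(+ | fake) = 1 - spec
prior_real <- 0.736   # data-based prior P(real)

# Direct formulation (denominator via the law of total probability)
sens * prior_real / (sens * prior_real + (1 - spec) * (1 - prior_real))  # approximately 0.952

# Odds formulation
odds <- (sens / (1 - spec)) * (prior_real / (1 - prior_real))
odds / (1 + odds)                                                        # same result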

11.2 Law of Large Numbers

11.2.1 Task 5

We consider problem 13 in chapter 1 of Grinstead and Snell (2006) (based on the original study by Tversky and Kahneman (1974)):

The psychologist Tversky and his colleagues say that about four out of five people will answer (a) to the following question:

A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day, and in the smaller hospital, 15 babies are born each day. Although the overall proportion of boys is about 50 percent, the actual proportion at either hospital may be more or less than 50 percent on any day.

At the end of a year, which hospital will have a greater number of days on which more than 60 percent of the babies born were boys?

  (a) the large hospital
  (b) the small hospital
  (c) neither -- the number of days will be about the same.

Assume that the probability of a baby boy is .5 (actual estimates make this more like .513). Decide, based on simulation, what the right answer is to the question. Can you suggest why so many people go wrong?

Investigate this spreadsheet and figure out how the solution works.

The spreadsheet simulates 45 births in the large hospital and 15 births in the small hospital for a total of 365 days by computing random numbers from the Uniform Distribution with the function RAND() (this produces a random number between 0 and 1). The formula

=IF(RAND()>=0.5,"Boy","Girl")

is used to decide whether the baby is a boy or a girl (i.e. the Uniform Distribution is converted into a Bernoulli Distribution -- these distributions will be explained at a later stage).

Based on the simulated births it is easy to count the number of boys and the number of girls for each day. If the proportion of boys exceeds 60% on a particular day, a binary “success” variable is set to 1 (this is column AY for the big hospital and column U for the small one).

Now it is easy to compute the total number of “successes” of the Bernoulli trial (given the assumption that the probability of a boy being born is 50%) and divide it by the number of days. This ratio can be interpreted as the conditional probability that we want to estimate (as is described in the Theorems from Jeffreys’ axiom system).
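
The spreadsheet logic translates directly into a few lines of R; this sketch simulates one year of births in both hospitals (the seed and variable names are arbitrary).

set.seed(1)                      # arbitrary seed, for reproducibility
days  <- 365
p_boy <- 0.5

# Daily proportion of boys in each hospital
prop_large <- rbinom(days, size = 45, prob = p_boy) / 45
prop_small <- rbinom(days, size = 15, prob = p_boy) / 15

# Relative frequency of days with more than 60% boys
mean(prop_large > 0.6)
mean(prop_small > 0.6)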

11.2.2 Task 6

Use the Compute tab to compute a solution. Explain the solution in your own words!

(An interactive Shiny app accompanies this task.)

The Babies Calculator can be used to compute a solution without the need for a spreadsheet. Using the default settings, the following results were obtained:

  • Probability of more than 60% of male births in Large Hospital = 0.0685
  • Probability of more than 60% of male births in Small Hospital = 0.1425

Note that your results will differ because the computation relies on a random number generator to simulate the probabilities. The figures produced by the simulation show that the estimated probabilities converge towards a certain value as the number of simulated days increases. The values listed above are simply the last values that were obtained during the simulation, since these are expected to be closest to the true probabilities. On the left side of the figures there are large fluctuations because the occurrence of a “success” (i.e. a day on which more than 60% of the babies born are boys) is relatively rare.
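
The convergence behaviour described above can be reproduced with a running (cumulative) estimate; here is a sketch in R for the small hospital over ten simulated years.

set.seed(1)
days    <- 10 * 365
success <- rbinom(days, size = 15, prob = 0.5) / 15 > 0.6    # day with more than 60% boys?
running_estimate <- cumsum(success) / seq_len(days)          # estimate after each simulated day
plot(running_estimate, type = "l",
     xlab = "simulated day", ylab = "estimated probability (small hospital)")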

11.2.3 Task 7

How could you increase the accuracy of the numerical solution? Note: later you will formulate a solution based on the Binomial Distribution -- for now it is sufficient to increase the accuracy of the solution by changing one of the simulation parameters (use the Compute tab of the previous task).

It is possible to increase the accuracy of the numerical simulation by increasing the number of simulations (i.e. the number of days). Instead of simulating just one year, we could simulate a much longer period (e.g. ten years).

The following results were obtained with ten years instead of just one:

  • Probability of more than 60% of male births in Large Hospital = 0.0663
  • Probability of more than 60% of male births in Small Hospital = 0.1403

Note: it is possible to solve this problem with the Binomial Distribution which is explained later.
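
As the note indicates, the exact answer follows from the Binomial Distribution (treated later). More than 60% boys corresponds to at least 28 of 45 births in the large hospital and at least 10 of 15 in the small one, so a quick check in R (assuming P(boy) = 0.5) is:

1 - pbinom(27, size = 45, prob = 0.5)   # large hospital: P(X >= 28), close to the simulated value
1 - pbinom(9,  size = 15, prob = 0.5)   # small hospital: P(X >= 10), close to the simulated value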

11.2.4 Task 8

What would happen if we changed the “percentage of male births per day” to 80%?

If the threshold for the percentage of male births is set to 80% (instead of 60%), the probabilities become much smaller. The Babies Calculator was used to simulate 10 years (with a percentage of 80%), which yields the following results:

  • Probability of more than 80% of male births in Large Hospital = 0
  • Probability of more than 80% of male births in Small Hospital = 0.002740

The figure for the large hospital showed a flat line, which means that in 10 years there was not a single day on which more than 80% of the births were boys. Hence, the estimated probability is zero, which is (obviously) not the correct answer because we know that it is (theoretically) possible for such a day to occur -- even in a large hospital. The problem is that the true probability is so small that it is difficult to estimate with a simulation procedure. We would need to run many additional simulations to obtain an accurate result (perhaps simulating thousands of years or more).

The figure for the small hospital showed that the estimated probability was still fluctuating at the end of year 10. Again, this means that we would need to increase the number of simulations further to obtain a reasonably reliable result.
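
The exact Binomial tail probabilities make clear why the simulation struggles here. More than 80% boys means at least 37 of 45 births in the large hospital and at least 13 of 15 in the small one, so (again assuming P(boy) = 0.5):

1 - pbinom(36, size = 45, prob = 0.5)   # large hospital: far too small to observe in ten simulated years
1 - pbinom(12, size = 15, prob = 0.5)   # small hospital: a few per thousand, of the same order as the simulated value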

11.2.5 Task 9

Explain in your own words (i.e. without relying on mathematics) the (weak) Law of Large Numbers.

This is an attempt at an informal description.

The Law of Large Numbers implies that the average result of many random, independent events converges to a stable long-term value. It does not imply that the running average reaches this true long-term value immediately, nor does it provide any guarantee about how quickly convergence is achieved.

Grinstead, Charles M., and Laurie J. Snell. 2006. Introduction to Probability. Version dated 4 July 2006. The CHANCE Project; American Mathematical Society. https://math.dartmouth.edu/~prob/prob/prob.pdf.
Tversky, Amos, and Daniel Kahneman. 1974. “Judgment Under Uncertainty: Heuristics and Biases.” Science 185 (4157): 1124–31. https://doi.org/10.1126/science.185.4157.1124.