83  Bootstrap Plot (Central Tendency)

The Bootstrap method (Efron 1979) is a random resampling technique which treats the sample as if it were the population. The underlying idea is that one can obtain an empirical distribution of (almost) any statistic by repeatedly drawing samples with replacement.

The Bootstrap Plot (for Central Tendency) can be applied to univariate data series that do not exhibit serial correlation. In practice, this means that the standard i.i.d. bootstrap is not appropriate for time series; these require a special type of bootstrapping (see Chapter 89, Blocked Bootstrap Plot).
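
Before applying the i.i.d. bootstrap, it is therefore worth checking the series for serial correlation, for instance with the (Partial) Autocorrelation Function or a Ljung-Box test. A minimal sketch in R (the data series x is a placeholder assumption):

x <- runif(150, min = 10, max = 50)        # placeholder data series
acf(x)                                     # sample autocorrelation function
Box.test(x, lag = 10, type = "Ljung-Box")  # H0: no autocorrelation up to lag 10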

83.1 Definition

The Bootstrap Plot for any given data series \(x\) with \(n\) observations and a pre-specified number of bootstrap replications \(k \in \mathbb{N}\) is computed according to the following steps (a minimal base-R sketch follows the list):

  • generate \(k\) samples \(z_j\) (for \(j = 1, 2, \dots, k\)) of size \(n\) by sampling with replacement from \(x\)
  • for each sample \(z_j\), compute the following measures of Central Tendency: Arithmetic Mean, Median, Midrange, Harmonic Mean, and Geometric Mean
  • compute a series of pre-specified Quantiles \(\mathrm{Quantile}(q)_j\) for \(q = 0.005, 0.025, 0.25, 0.50, 0.75, 0.975, 0.995\) and \(j = 1, 2, \dots, k\)
  • plot the simulated statistics
  • plot the Gaussian Kernel Density Plots for each simulated statistic
  • plot a summary based on Notched Boxplots for each simulated statistic
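
The following is a minimal sketch of these steps in base R, without the boot package (the data series and the number of replications are illustrative assumptions):

x <- runif(100)                       # hypothetical data series
k <- 200                              # number of bootstrap samples
n <- length(x)
boot.mat <- t(replicate(k, {
  z <- sample(x, n, replace = TRUE)   # bootstrap sample z_j
  c(mean      = mean(z),
    median    = median(z),
    midrange  = (max(z) + min(z)) / 2,
    harmonic  = 1 / mean(1 / z),      # Harmonic Mean
    geometric = exp(mean(log(z))))    # Geometric Mean
}))
# empirical quantiles of the simulated Arithmetic Mean
quantile(boot.mat[, "mean"], c(0.005, 0.025, 0.25, 0.50, 0.75, 0.975, 0.995))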

83.2 Horizontal axis for Simulated Statistics

The horizontal axis represents the value of the simulated statistic.

83.3 Vertical axis for Simulated Statistics

The vertical axis corresponds to the value of the Central Tendency measure computed from each resampled dataset.

83.4 Horizontal axis for Kernel Density of Simulated Statistics

The horizontal axis represents the value of the simulated statistic.

83.5 Vertical axis for Kernel Density of Simulated Statistics

The vertical axis corresponds to the estimated density value.

83.6 Horizontal axis for Notched Boxplots of Simulated Statistics

The horizontal axis represents the measures of Central Tendency (listed in arbitrary order).

83.7 Vertical axis for Notched Boxplots of Simulated Statistics

The vertical axis corresponds to the values of the Central Tendency measures.

83.8 R Module

83.8.1 Public website

The Bootstrap Plot is available on the public website:

  • https://compute.wessa.net/rwasp_bootstrapplot1.wasp

83.8.2 RFC

When using the default profile, the Bootstrap Plot can be found under the “Descriptive / Bootstrap Plot” menu item.

To compute the Bootstrap Plot on your local machine, the following script can be used in the R console:

library(boot)    # bootstrap resampling
library(psych)   # harmonic.mean() and geometric.mean()

# statistic function: boot() calls it with the data 's' and a vector of
# resampled indices 'i' (because stype = "i" below)
boot.stat <- function(s, i) {
  s.mean <- mean(s[i])
  s.median <- median(s[i])
  s.midrange <- (max(s[i]) + min(s[i])) / 2
  s.hmean <- harmonic.mean(s[i])
  s.gmean <- geometric.mean(s[i])
  c(s.mean, s.median, s.midrange, s.hmean, s.gmean)
}

# set.seed(123)  # uncomment to make the simulation reproducible
x <- runif(n = 150, min = 10, max = 50)  # illustrative data series
par1 <- 200  # number of simulations
par2 <- 5    # significant digits

r <- boot(x, boot.stat, R = par1, stype = "i")
z <- data.frame(r$t)
colnames(z) <- c("mean", "median", "midrange", "harmonic", "geometric")

myq.1 <- 0.005
myq.2 <- 0.025
myq.3 <- 0.975
myq.4 <- 0.995

# quantile of each simulated statistic, rounded to par2 significant digits
q.sig <- function(q) unname(signif(apply(r$t, 2, quantile, probs = q), par2))

df <- data.frame(statistic = colnames(z),
                 P0.5     = q.sig(myq.1),
                 P2.5     = q.sig(myq.2),
                 Q1       = q.sig(0.25),
                 Estimate = signif(r$t0, par2),
                 Q3       = q.sig(0.75),
                 P97.5    = q.sig(myq.3),
                 P99.5    = q.sig(myq.4),
                 SD       = unname(signif(apply(r$t, 2, sd), par2)),
                 IQR      = unname(signif(apply(r$t, 2, function(b)
                              diff(quantile(b, c(0.25, 0.75)))), par2)))
print(df)

# density plot of each simulated statistic, plus a notched boxplot summary
op <- par(mfrow = c(2, 3))
plot(density(r$t[,1]), main = "Density Plot", xlab = "mean")
plot(density(r$t[,2]), main = "Density Plot", xlab = "median")
plot(density(r$t[,3]), main = "Density Plot", xlab = "midrange")
plot(density(r$t[,4]), main = "Density Plot", xlab = "harmonic mean")
plot(density(r$t[,5]), main = "Density Plot", xlab = "geometric mean")
boxplot(z, notch = TRUE, ylab = "simulated values",
        main = "Bootstrap Simulation - Central Tendency")
grid()
par(op)

A possible console output follows (exact values vary between runs because no random seed is set):
  statistic   P0.5   P2.5     Q1 Estimate     Q3  P97.5  P99.5      SD     IQR
1      mean 29.495 29.806 31.054   31.663 32.335 33.285 33.936 0.91422 1.28140
2    median 29.100 29.825 31.608   32.603 33.610 35.287 37.107 1.52210 2.00180
3  midrange 29.414 29.523 29.731   30.087 30.111 30.157 30.190 0.20982 0.38022
4  harmonic 24.567 24.765 25.880   26.630 27.292 28.894 29.517 1.05070 1.41180
5 geometric 27.259 27.376 28.621   29.310 29.920 31.310 31.697 0.98695 1.29890

The R code uses the libraries boot and psych. The boot.stat function defines all the measures of Central Tendency that are included in the analysis; boot() calls it repeatedly, each time with a fresh vector of resampled indices.
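
Because stype = "i", the statistic function receives row indices; evaluating it on the original (unresampled) index vector reproduces the point estimates stored in r$t0:

all.equal(boot.stat(x, seq_along(x)), r$t0)  # TRUE: t0 = statistic on original data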

83.9 Purpose

The Bootstrap Plot is used to compute and compare Central Tendency measures together with empirical confidence intervals. Some statistics (e.g. the median) have no simple closed-form sampling distribution; in such cases the bootstrap approach can be used to obtain confidence intervals.
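
For example, a percentile-based bootstrap confidence interval for the median can be obtained directly from the boot object r created above (index 2 refers to the median, the second statistic returned by boot.stat):

boot.ci(r, conf = 0.95, type = "perc", index = 2)  # 95% percentile interval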

83.10 Pros & Cons

83.10.1 Pros

The Bootstrap Plot has the following advantages:

  • It allows one to obtain confidence intervals for Central Tendency measures without the need to know the underlying distribution.
  • It provides rich information about the entire empirical distribution of the Central Tendency measures, not just point estimates.

83.10.2 Cons

The Bootstrap Plot has the following disadvantages:

  • Most readers are not familiar with this type of analysis.
  • Many statistical software packages do not offer it out of the box.

83.11 Example

An interactive Shiny app accompanies this example.

We generated a series of 100 random numbers with the rand function (a pseudo-random number generator) in a spreadsheet. The rand function generates (approximately) uniformly distributed numbers between 0 and 1, so we expect an average of 0.5.

The output shows the Quantiles for the various measures of Central Tendency. It is not surprising that the estimated Midrange is closest to the true value: for uniformly distributed data the Midrange is a highly efficient estimator of the center, because the sample extremes converge rapidly to the distribution's boundaries. Also observe how all other measures of Central Tendency have an uncertainty that is substantially larger than that of the Midrange, which has an extremely small Interquartile Range (see Section 66.17).
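
A quick simulation sketch (assuming uniform data, as in this example) illustrates the point:

# compare the sampling variability of the Mean and the Midrange for uniform data
set.seed(1)
sims <- replicate(5000, {
  u <- runif(100)
  c(mean = mean(u), midrange = (max(u) + min(u)) / 2)
})
apply(sims, 1, sd)  # the Midrange has a much smaller standard deviation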

The Bootstrap Plot shows the Notched Boxplots for each of the simulated measures of Central Tendency. The Midrange achieves the best results because it is centered on the true value (0.5) and has very low variability.

Efron, Bradley. 1979. “Bootstrap Methods: Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. https://doi.org/10.1214/aos/1176344552.
Cookie Preferences