Table of contents

  • 96.1 Theory
    • 96.1.1 Case 1 (denominator \(n\) convention)
    • 96.1.2 Case 2 (standard unbiased sample variance)
  • 96.2 Software
  • 96.3 Practical Example

96  Statistical Test of the Population Mean with unknown Variance

96.1 Theory

96.1.1 Case 1 (denominator \(n\) convention)

Assume that

\[ \begin{cases} U = \frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}} \sim \text{N}(0,1) \\ V = \frac{ns^2}{\sigma^2} \sim \chi_{n-1}^2\end{cases} \]

where

\[ \bar{x} \sim \text{N} \left( \mu, \frac{\sigma^2}{n} \right) \]

and

\[ \begin{cases} s^2 = \frac{\sum_{i=1}^{n}\left( x_i - \bar{x} \right)^2}{n} \\ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i\end{cases} \]

and assume that \(U\) and \(V\) are independent.

Since a Student \(t\) random variable with \(n-1\) degrees of freedom is defined by the ratio

\[ \frac{U}{\sqrt{\frac{V}{n-1}}} = \frac{\text{N}(0,1)}{\sqrt{\frac{\chi_{n-1}^2}{n-1}}} \sim t_{n-1} \]

it follows that

\[ \begin{align*}\frac{\frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}}}{\sqrt{\frac{\frac{ns^2}{\sigma^2}}{n-1}}} &= \frac{\bar{x} -\mu}{\frac{\sigma}{\sqrt{n}}} \times \frac{1}{\frac{s}{\sigma}\sqrt{\frac{n}{n-1}}} \\&= \frac{\bar{x} - \mu}{\frac{s}{\sqrt{n-1}}} \sim t_{n-1}\end{align*} \]
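
A quick simulation can make this result concrete. The following sketch (illustrative, not part of the original text) draws many samples, computes the Case 1 statistic with the denominator-\(n\) variance, and compares its empirical quantiles with the theoretical \(t_{n-1}\) quantiles:

# illustrative check that the Case 1 statistic follows t with n-1 df
set.seed(1)
n <- 10; mu <- 5; sigma <- 2
tstat <- replicate(10000, {
  x <- rnorm(n, mu, sigma)
  s.n <- sqrt(sum((x - mean(x))^2) / n) # denominator-n variance
  (mean(x) - mu) / (s.n / sqrt(n - 1))
})
quantile(tstat, c(0.025, 0.5, 0.975)) # empirical quantiles
qt(c(0.025, 0.5, 0.975), df = n - 1)  # theoretical t quantiles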

96.1.2 Case 2 (standard unbiased sample variance)

Case 2 uses the usual unbiased estimator of the variance (denominator \(n-1\)), which is the convention used in most textbooks and software (including t.test() in R).

Assume that

\[ \begin{cases}U = \frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}} \sim \text{N}(0,1) \\V = \frac{(n-1)s^2}{\sigma^2} \sim \chi_{n-1}^2\end{cases} \]

where

\[ \bar{x} \sim \text{N} \left( \mu, \frac{\sigma^2}{n} \right) \]

and

\[ \begin{cases}s^2 = \frac{\sum_{i=1}^{n}\left( x_i - \bar{x} \right)^2}{n-1} \\\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i\end{cases} \]

and assume that \(U\) and \(V\) are independent.

Since a Student \(t\) random variable with \(n-1\) degrees of freedom is defined by the ratio

\[ \frac{U}{\sqrt{\frac{V}{n-1}}} = \frac{\text{N}(0,1)}{\sqrt{\frac{\chi_{n-1}^2}{n-1}}} \sim t_{n-1} \]

it follows that

\[ \begin{align*}\frac{\frac{\bar{x} - \mu}{\frac{\sigma}{\sqrt{n}}}}{\sqrt{\frac{\frac{(n-1)s^2}{\sigma^2}}{n-1}}} &= \frac{\bar{x} -\mu}{\frac{\sigma}{\sqrt{n}}} \times \frac{1}{\frac{s}{\sigma}} \\&= \frac{\bar{x} - \mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1}\end{align*} \]
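
Observe that the two cases yield the identical test statistic: writing \(s_n^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\) for the Case 1 variance, we have \(s_n^2 = \frac{n-1}{n}s^2\) and hence \(\frac{s_n}{\sqrt{n-1}} = \frac{s}{\sqrt{n}}\). A minimal numerical check (with an arbitrary simulated sample):

set.seed(2)
x <- rnorm(12, mean = 3)
n <- length(x); mu0 <- 3
s.n <- sqrt(sum((x - mean(x))^2) / n) # Case 1: denominator n
s <- sd(x)                            # Case 2: denominator n - 1
(mean(x) - mu0) / (s.n / sqrt(n - 1)) # Case 1 statistic
(mean(x) - mu0) / (s / sqrt(n))       # Case 2 statistic: same value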

96.2 Software

There are two R modules available to perform the one sample t-test. These are the URLs on the public website:

  • One Sample T-Test using Confidence Intervals:
    https://compute.wessa.net/rwasp_hypothesismeanu.wasp
  • One Sample T-Test using p-values:
    https://compute.wessa.net/rwasp_onesampletests_mean.wasp

In RFC these R modules can be found under the “Hypotheses / Empirical Tests” menu item.

To compute the One Sample T-Test on your local machine, the following script can be used in the R console:

set.seed(123)
x <- runif(25, 20, 40)
# compute confidence intervals
par1 = 0.95 #Confidence
par2 = 30 #Null Hypothesis (note: not used by the interval computations below)
len <- length(x)
df <- len - 1
sd <- sd(x)
mx <- mean(x)
delta2 <- abs(qt((1-par1)/2,df)) * sd / sqrt(len)
delta1 <- abs(qt((1-par1),df)) * sd / sqrt(len)
#Sample size
len
#Sample standard deviation
sd
#Sample Mean
mx
#2-sided Confidence Interval
dum <- paste('[',mx-delta2)
dum <- paste(dum,',')
dum <- paste(dum,mx+delta2)
dum <- paste(dum,']')
dum
#Left-sided Confidence Interval
dum <- paste('[',mx-delta1)
dum <- paste(dum,', +inf ]')
dum
#Right-sided Confidence Interval
dum <- paste('[ -inf,',mx+delta1)
dum <- paste(dum,']')
dum
# compute two-sided interval and p-value
par1 = 'two.sided'
par2 = 0.95 #Confidence
par3 = 20 #Null Hypothesis
(tt <- t.test(x,mu=par3,alternative=par1,conf.level=par2))
Running this script produces the following console output:

[1] 25
[1] 6.010489
[1] 31.91119
[1] "[ 29.4301862828312 , 34.3922024969133 ]"
[1] "[ 29.8545466508001 , +inf ]"
[1] "[ -inf, 33.9678421289444 ]"

    One Sample t-test

data:  x
t = 9.9087, df = 24, p-value = 5.878e-10
alternative hypothesis: true mean is not equal to 20
95 percent confidence interval:
 29.43019 34.39220
sample estimates:
mean of x 
 31.91119 
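
To see how t.test() arrives at these numbers, the t statistic and the two-sided p-value can be recomputed by hand from the summary statistics (a sketch that reuses the variables defined in the script above):

# manual computation of the t statistic and two-sided p-value
t.manual <- (mx - par3) / (sd / sqrt(len))
p.manual <- 2 * pt(-abs(t.manual), df)
t.manual # 9.9087, matches the t.test() output
p.manual # 5.878e-10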

96.3 Practical Example

A sample of intrinsic motivation scores was obtained from students in a statistics course. We wish to test the following hypothesis:

\[ \begin{cases}\text{H}_0: \mu = 19.8 \\\text{H}_A: \mu \neq 19.8\end{cases} \]

The sample data and computational results are available in the R module shown below. Do we have to accept or reject the Null Hypothesis if we choose a type I error rate of 3%?

This is a two-sided test because the alternative is pre-specified as \(\text{H}_A:\mu \neq 19.8\).

[Interactive Shiny app with the sample data and computational results]

Answer: the sample mean \(\bar{x} \simeq 20.06778\) is significantly different from \(\mu_0 = 19.8\), because the p-value \(0.02471\) is smaller than \(0.03\); hence we reject the Null Hypothesis.
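
The same decision can be reproduced in the R console. A minimal sketch, assuming the motivation scores have been loaded into a vector named scores (a hypothetical name; the actual data are embedded in the interactive module):

# scores: hypothetical vector with the intrinsic motivation sample
tt <- t.test(scores, mu = 19.8, alternative = 'two.sided', conf.level = 0.97)
tt$p.value        # reported as 0.02471 in the module
tt$p.value < 0.03 # TRUE, so the Null Hypothesis is rejected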

