108  Statistical Test of the difference between Means -- Independent/Unpaired Samples

108.1 Theory

We define a first population \(X_1 \sim \text{N}\left( \mu_1, \sigma_1^2 \right)\) from which a simple random sample is drawn of size \(n_1\) with sample mean \(\bar{x}_1 \sim \text{N} \left( \mu_1, \frac{\sigma_1^2}{n_1} \right)\).

We also define a second population \(X_2 \sim \text{N}\left( \mu_2, \sigma_2^2 \right)\) from which a simple random sample is drawn of size \(n_2\) with sample mean \(\bar{x}_2 \sim \text{N} \left( \mu_2, \frac{\sigma_2^2}{n_2} \right)\).

Because the two samples are independent, the difference between the sample means is itself normally distributed:

\[ \left( \bar{x}_1 - \bar{x}_2 \right) \sim \text{N} \left( \left( \mu_1 - \mu_2 \right), \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} \right) \]

Depending on whether \(\sigma_1\) and \(\sigma_2\) are known and whether they are equal, four cases can be distinguished, each of which is discussed in turn.
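
Before turning to the individual cases, the variance formula above can be checked empirically. The following R sketch is not from the text and all parameter values are illustrative; it simulates many pairs of samples and compares the empirical variance of \(\bar{x}_1 - \bar{x}_2\) with \(\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\):

set.seed(1)                          # illustrative seed
n1 <- 15; n2 <- 10                   # sample sizes (as in the examples below)
sigma1 <- 5; sigma2 <- 3             # assumed population standard deviations
diffs <- replicate(10000,
  mean(rnorm(n1, 0, sigma1)) - mean(rnorm(n2, 0, sigma2)))
var(diffs)                           # empirical variance of the differences
sigma1^2 / n1 + sigma2^2 / n2        # theoretical variance: 25/15 + 9/10 = 2.566667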

108.1.1 Case 1: \(\sigma_1\) and \(\sigma_2\) are known and unequal

The test statistic is defined as

\[ u = \frac{\left( \bar{x}_1 - \bar{x}_2 \right) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} }} \]

for which it can be shown that \(u \sim \text{N}(0,1)\).

108.1.2 Case 2: \(\sigma_1\) and \(\sigma_2\) are known and equal

The test statistic is defined as

\[ u = \frac{\left( \bar{x}_1 - \bar{x}_2 \right) - \left( \mu_1 - \mu_2 \right) }{\sqrt{\sigma^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right) }} = \frac{\left( \bar{x}_1 - \bar{x}_2 \right) - (\mu_1 - \mu_2)}{\sigma \sqrt{\frac{n_1 + n_2}{n_1 \times n_2}}} \]

for which it can be shown that \(u \sim \text{N}(0,1)\).

108.1.3 Case 3: \(\sigma_1\) and \(\sigma_2\) are unknown but equal

108.1.3.1 Unequal Sample Sizes

The test statistic is defined as

\[ t = \frac{\left( \bar{x}_1 - \bar{x}_2 \right) - (\mu_1 - \mu_2)}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right) }} = \frac{\left( \bar{x}_1 - \bar{x}_2 \right) - (\mu_1 - \mu_2)}{s_p \sqrt{\frac{n_1 + n_2}{n_1 \times n_2}}} \]

where the pooled variance \(s_p^2\) is defined as

\[ s_p^2 = \frac{s_1^2(n_1- 1) + s_2^2 (n_2 -1)}{n_1 + n_2 - 2} \]

and where the sample variances are estimated by

\[ s_i^2 = \frac{\sum_{j=1}^{n_i} \left( x_{ij} - \bar{x}_i \right)^2}{n_i - 1} \]

for \(i = 1, 2\).

It can be shown that \(t \sim t_{n_1+n_2-2}\).

108.1.3.2 Equal Sample Sizes

As shown in the previous subsection, the denominator of the test statistic is

\[ \sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)} \]

where the pooled sample variance is

\[ s_p^2 = \frac{s_1^2 (n_1 - 1) + s_2^2(n_2 - 1)}{n_1 + n_2 - 2} \]

When both samples have the same size (\(n_1 = n_2 = n\)), the pooled variance becomes

\[ s_p^2 = \frac{(n-1)(s_1^2 + s_2^2)}{2(n-1)} = \frac{s_1^2 + s_2^2}{2} \]

Substituting this pooled variance into the denominator of the test statistic results in

\[ \sqrt{\frac{s_1^2 + s_2^2}{2} \left( \frac{1}{n} + \frac{1}{n} \right)} = \sqrt{\frac{s_1^2 + s_2^2}{2} \times \frac{2}{n}} = \sqrt{\frac{s_1^2 + s_2^2}{n}} \]

108.1.4 Case 4: \(\sigma_1\) and \(\sigma_2\) are unknown and unequal

108.1.4.1 Unequal Sample Sizes

The test statistic is defined as

\[ t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} \]

where the sample variances are estimated by

\[ s_i^2 = \frac{\sum_{j=1}^{n_i} \left( x_{ij} - \bar{x}_i \right)^2}{n_i - 1} \]

with \(i = 1, 2\).

It can be shown that \(t \sim t_{\Delta}\), where the degrees of freedom \(\Delta\) can be computed as follows (Welch 1947; Satterthwaite 1946):

\[ \Delta = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{\left( \frac{s_1^2}{n_1} \right)^2}{n_1-1} + \frac{\left( \frac{s_2^2}{n_2} \right)^2}{n_2-1}} \]

The degrees of freedom \(\Delta\) will generally not be a whole number; as a close approximation, the next smaller integer is used.
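
A minimal R sketch of this computation (welch_df is a hypothetical helper written for illustration, not a function from the text):

welch_df <- function(s1sq, n1, s2sq, n2) {
  num <- (s1sq / n1 + s2sq / n2)^2                             # squared variance of the difference
  den <- (s1sq / n1)^2 / (n1 - 1) + (s2sq / n2)^2 / (n2 - 1)   # Welch-Satterthwaite denominator
  num / den
}
welch_df(25, 15, 9, 10)   # 22.8415 for the Case 4 example below; rounded down to 22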

108.1.4.2 Equal Sample Sizes

If both samples have the same size, the test statistics of Case 3 and Case 4 coincide. The denominator of the Case 4 test statistic is defined as

\[ \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \]

When both samples have the same size \(n\), this becomes

\[ \sqrt{\frac{s_1^2}{n} + \frac{s_2^2}{n}} = \sqrt{\frac{s_1^2 + s_2^2}{n}} \]

108.2 Examples

108.2.1 Case 1: \(\sigma_1\) and \(\sigma_2\) are known and unequal

108.2.1.1 Problem

Expected Value (Population 1) \(\mu_1\) ?
Variance (Population 1) \(\sigma_1^2\) 25
Expected Value (Population 2) \(\mu_2\) ?
Variance (Population 2) \(\sigma_2^2\) 9
Size of Sample 1 \(n_1\) 15
Mean of Sample 1 \(\bar{x}_1\) 100
Size of Sample 2 \(n_2\) 10
Mean of Sample 2 \(\bar{x}_2\) 103
Test Value H\(_0\) \(\mu_1 - \mu_2\) 0
Type I error \(\alpha\) 0.05
Critical Value \(c\) ?

We use the left-sided alternative \(H_A: \mu_1 - \mu_2 < 0\), so the critical region is of the form \(\bar{x}_1 - \bar{x}_2 \leq c\).

108.2.1.2 Critical Value (Region)

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq c \right) = \alpha = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{c - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{c - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) = 0.05 \]

\[ \text{P}(u \leq -1.644854) = 0.05 \]

Hence

\[ \begin{align*}\frac{c}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} &= -1.644854 \\c &= -1.644854 \times \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}} \\&= -1.644854 \times \sqrt{\frac{25}{15} + \frac{9}{10}} \\&= -1.644854 \times \sqrt{2.56667} \\&= -2.635190\end{align*} \]

Since \(\bar{x}_1 - \bar{x}_2 = -3 < c = -2.63519\), it follows that we should reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).

108.2.1.3 P-value

\[ \text{P}(\bar{x}_1 - \bar{x}_2 \leq -3) \]

\[ \begin{align*}\text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{-3 - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) &= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{-3 - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) \\&= \text{P} \left( u \leq \frac{-3}{\sqrt{\frac{25}{15} + \frac{9}{10}}} \right) \\&= \text{P} \left( u \leq \frac{-3}{\sqrt{2.566667}} \right) \\&= \text{P} (u \leq -1.872563) \\&= 1 - 0.969436 \\&= 0.030564\end{align*} \]

Since the p-value is smaller than \(\alpha = 0.05\) we reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).
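
These Case 1 numbers can be reproduced in base R with qnorm() and pnorm(). The sketch below uses z_test_diff, a hypothetical helper written for this left-sided test, not a function from the text:

z_test_diff <- function(xbar1, xbar2, var1, var2, n1, n2, alpha = 0.05) {
  se <- sqrt(var1 / n1 + var2 / n2)          # standard error of the difference
  c(critical = qnorm(alpha) * se,            # critical value c
    p.value  = pnorm((xbar1 - xbar2) / se))  # left-tail p-value
}
z_test_diff(100, 103, 25, 9, 15, 10)
# critical = -2.635190, p.value = 0.030564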

108.2.2 Case 2: \(\sigma_1\) and \(\sigma_2\) are known and equal

108.2.2.1 Problem

Expected Value (Population 1) \(\mu_1\) ?
Variance (Population 1) \(\sigma_1^2\) 16
Expected Value (Population 2) \(\mu_2\) ?
Variance (Population 2) \(\sigma_2^2\) 16
Size of Sample 1 \(n_1\) 15
Mean of Sample 1 \(\bar{x}_1\) 100
Size of Sample 2 \(n_2\) 10
Mean of Sample 2 \(\bar{x}_2\) 103
Test Value H\(_0\) \(\mu_1 - \mu_2\) 0
Type I error \(\alpha\) 0.05
Critical Value \(c\) ?

As in Case 1, we use the left-sided alternative \(H_A: \mu_1 - \mu_2 < 0\), so the critical region is of the form \(\bar{x}_1 - \bar{x}_2 \leq c\).

108.2.2.2 Critical Value (Region)

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq c \right) = \alpha = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{c - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{c - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) = 0.05 \]

\[ \text{P}(u \leq -1.644854) = 0.05 \]

Hence

\[ \begin{align*}\frac{c}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} &= -1.644854 \\c &= -1.644854 \times \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}} \\&= -1.644854 \times \sqrt{\frac{16}{15} + \frac{16}{10}} \\&= -1.644854 \times \sqrt{2.6667} \\&= -2.686035\end{align*} \]

Since \(\bar{x}_1 - \bar{x}_2 = -3 < c = -2.686035\), it follows that we should reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).

108.2.2.3 P-value

\[ \text{P}(\bar{x}_1 - \bar{x}_2 \leq -3) \]

\[ \begin{align*}\text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{-3 - (\mu_1 - \mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) &= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \leq \frac{-3 - 0}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}} \right) \\&= \text{P} \left( u \leq \frac{-3}{\sqrt{\frac{16}{15} + \frac{16}{10}}} \right) \\&= \text{P} \left( u \leq \frac{-3}{\sqrt{2.66667}} \right) \\&= \text{P} (u \leq -1.837117) \\&= 1 - 0.966904 \\&= 0.033096\end{align*} \]

Since the p-value is smaller than \(\alpha = 0.05\) we reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).
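
The same hypothetical z_test_diff helper from Case 1 reproduces these numbers with the common variance of 16:

z_test_diff(100, 103, 16, 16, 15, 10)
# critical = -2.686035, p.value = 0.033096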

108.2.3 Case 3: \(\sigma_1\) and \(\sigma_2\) are unknown but equal

108.2.3.1 Problem

Expected Value (Population 1) \(\mu_1\) ?
Variance (Population 1) \(\sigma_1^2\) ?
Expected Value (Population 2) \(\mu_2\) ?
Variance (Population 2) \(\sigma_2^2\) ?
Size of Sample 1 \(n_1\) 15
Mean of Sample 1 \(\bar{x}_1\) 100
Variance of Sample 1 \(s_1^2\) 25
Size of Sample 2 \(n_2\) 10
Mean of Sample 2 \(\bar{x}_2\) 103
Variance of Sample 2 \(s_2^2\) 9
Test Value H\(_0\) \(\mu_1 - \mu_2\) 0
Type I error \(\alpha\) 0.05
Critical Value \(c\) ?

As in Case 1, we use the left-sided alternative \(H_A: \mu_1 - \mu_2 < 0\), so the critical region is of the form \(\bar{x}_1 - \bar{x}_2 \leq c\).

108.2.3.2 Critical Value (Region)

The pooled variance is

\[ \begin{align*}s_p^2 &= \frac{s_1^2 (n_1-1) + s_2^2(n_2 -1)}{n_1 + n_2 - 2} \\&= \frac{25(15-1)+9(10-1)}{15+10-2}\\&= \frac{25 \times 14 + 9 \times 9}{23} \\&= \frac{350+81}{23} \\&= \frac{431}{23} \\&= 18.7391304\end{align*} \]

Hence

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq c \right) = \alpha = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \leq \frac{c-(\mu_1-\mu_2)}{\sqrt{s_p^2\left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \right) = 0.05 \]

\[ \text{P}\left( t_{23} \leq -1.713872 \right) = 0.05 \]

\[ \begin{align*}\frac{c}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} &= -1.713872 \\c &= -1.713872 \times \sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)} \\&= -1.713872 \times \sqrt{18.73913 \left( \frac{1}{15} + \frac{1}{10} \right) } \\&= -1.713872 \times \sqrt{3.1231884} \\&= -3.028847 \end{align*} \]

Since \(\bar{x}_1 - \bar{x}_2 = -3 > c = -3.028847\) there is no reason to reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).

108.2.3.3 P-value

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq -3 \right) = \text{p-value} \]

\[ \begin{align*}\text{p-value} &= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \leq \frac{-3 -(\mu_1-\mu_2)}{\sqrt{s_p^2\left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \right) \\&= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \leq \frac{-3 -0}{\sqrt{s_p^2\left( \frac{1}{n_1} + \frac{1}{n_2} \right)}} \right)\\&= \text{P} \left( t_{23} \leq \frac{-3}{\sqrt{18.7391304 \left( \frac{1}{15} + \frac{1}{10} \right)}} \right) \\&= \text{P} \left( t_{23} \leq \frac{-3}{\sqrt{18.7391304 \times \frac{1}{6}}} \right) \\&= \text{P} \left( t_{23} \leq \frac{-3}{\sqrt{3.1231884}} \right) \\&= \text{P} \left( t_{23} \leq -1.697548 \right) \\&= 1-0.948457 \\&= 0.051543\end{align*} \]

Since the p-value (=0.051543) is larger than the chosen type I error (=\(\alpha = 0.05\)) there is no reason to reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\).
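
In R, the Case 3 computation from summary statistics looks as follows. This is a sketch; given the raw observations, t.test(x1, x2, var.equal = TRUE, alternative = "less") performs the same pooled test directly:

n1 <- 15; n2 <- 10
s1sq <- 25; s2sq <- 9
sp2 <- ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)  # pooled variance: 18.73913
se  <- sqrt(sp2 * (1 / n1 + 1 / n2))                        # standard error: 1.767255
qt(0.05, df = n1 + n2 - 2) * se                             # critical value c = -3.028847
pt(-3 / se, df = n1 + n2 - 2)                               # p-value = 0.051543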

108.2.4 Case 4: \(\sigma_1\) and \(\sigma_2\) are unknown and unequal

108.2.4.1 Problem

Expected Value (Population 1) \(\mu_1\) ?
Variance (Population 1) \(\sigma_1^2\) ?
Expected Value (Population 2) \(\mu_2\) ?
Variance (Population 2) \(\sigma_2^2\) ?
Size of Sample 1 \(n_1\) 15
Mean of Sample 1 \(\bar{x}_1\) 100
Variance of Sample 1 \(s_1^2\) 25
Size of Sample 2 \(n_2\) 10
Mean of Sample 2 \(\bar{x}_2\) 103
Variance of Sample 2 \(s_2^2\) 9
Test Value H\(_0\) \(\mu_1 - \mu_2\) 0
Type I error \(\alpha\) 0.05
Critical Value \(c\) ?

As in Case 1, we use the left-sided alternative \(H_A: \mu_1 - \mu_2 < 0\), so the critical region is of the form \(\bar{x}_1 - \bar{x}_2 \leq c\).

108.2.4.2 Critical Value (Region)

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq c \right) = \alpha = 0.05 \]

\[ \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{ \left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \leq \frac{c-(\mu_1-\mu_2)}{\sqrt{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \right) = 0.05 \]

The number of degrees of freedom \(\Delta\) is given by

\[ \Delta = \frac{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{ \frac{\left( \frac{s_1^2}{n_1} \right)^2}{n_1-1} + \frac{\left( \frac{s_2^2}{n_2} \right)^2}{n_2-1}} = \frac{\left( \frac{250+135}{150} \right)^2}{\frac{\left( \frac{25}{15} \right)^2}{15-1} + \frac{\left( \frac{9}{10} \right)^2}{10-1} } = \frac{6.58778}{\frac{2.77778}{14} + \frac{0.81}{9}} = \frac{6.58778}{0.288413} = 22.8415 \]

Rounding \(\Delta\) down to 22 degrees of freedom, it follows that

\[ \text{P}\left( t_{22} \leq -1.717144 \right) = 0.05 \]

\[ \begin{align*}\frac{c - (\mu_1 - \mu_2)}{\sqrt{ \left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} &= -1.717144 \\c &= -1.717144 \times \sqrt{ \left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)} \\&= -1.717144 \times \sqrt{ \frac{25}{15} + \frac{9}{10} } \\&= -1.717144 \times \sqrt{2.566667} \\&= -2.751006 \end{align*} \]

Since \(\bar{x}_1 - \bar{x}_2 = -3 < c = -2.751006\) we reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\) and accept the Alternative Hypothesis.

108.2.4.3 P-value

\[ \text{P}\left( \bar{x}_1 - \bar{x}_2 \leq -3 \right) = \text{p-value} \]

\[ \begin{align*}\text{p-value} &= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{ \left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \leq \frac{-3 -(\mu_1-\mu_2)}{\sqrt{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \right) \\&= \text{P} \left( \frac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{ \left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \leq \frac{-3 -0}{\sqrt{\left( \frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)}} \right)\\&= \text{P} \left( t_{22} \leq \frac{-3}{\sqrt{ \left( \frac{25}{15} + \frac{9}{10} \right)}} \right) \\&= \text{P} \left( t_{22} \leq \frac{-3}{\sqrt{ 2.566667}} \right) \\&= \text{P} \left( t_{22} \leq \frac{-3}{1.602082} \right) \\&= \text{P} \left( t_{22} \leq -1.872563 \right) \\&= 1-0.96276 \\&= 0.03724\end{align*} \]

Since the p-value (=0.03724) is smaller than the chosen type I error (=\(\alpha = 0.05\)) we reject the Null Hypothesis H\(_0: \mu_1 - \mu_2 = 0\) and accept the Alternative Hypothesis.
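
A corresponding R sketch for Case 4, rounding \(\Delta\) down as in the text. Note that with raw data, R's t.test() performs this Welch test by default, using the fractional \(\Delta\) rather than the floored value:

n1 <- 15; n2 <- 10
s1sq <- 25; s2sq <- 9
se    <- sqrt(s1sq / n1 + s2sq / n2)                            # standard error: 1.602082
delta <- (s1sq / n1 + s2sq / n2)^2 /
         ((s1sq / n1)^2 / (n1 - 1) + (s2sq / n2)^2 / (n2 - 1))  # 22.8415
qt(0.05, df = floor(delta)) * se                                # critical value c = -2.751006
pt(-3 / se, df = floor(delta))                                  # p-value = 0.03724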

Satterthwaite, Franklin E. 1946. “An Approximate Distribution of Estimates of Variance Components.” Biometrics Bulletin 2 (6): 110–14. https://doi.org/10.2307/3002019.
Welch, Bernard L. 1947. “The Generalization of ‘Student’s’ Problem When Several Different Population Variances Are Involved.” Biometrika 34 (1/2): 28–35. https://doi.org/10.1093/biomet/34.1-2.28.