110 Statistical Test of the Difference between Variances -- Independent/Unpaired Samples
110.1 Theory
We define a first population \(X_1 \sim \text{N}\left( \mu_1, \sigma_1^2 \right)\) from which a simple random sample is drawn of size \(n_1\) with sample mean \(\bar{x}_1 \sim \text{N} \left( \mu_1, \frac{\sigma_1^2}{n_1} \right)\).
We also define a second population \(X_2 \sim \text{N}\left( \mu_2, \sigma_2^2 \right)\) from which a simple random sample is drawn of size \(n_2\) with sample mean \(\bar{x}_2 \sim \text{N} \left( \mu_2, \frac{\sigma_2^2}{n_2} \right)\).
Testing variance equality is usually formulated with a ratio (rather than a difference) because, under normality, an appropriately scaled variance ratio has an exact F distribution. Throughout, \(\sigma^2\) denotes the common variance under \(H_0: \sigma_1^2 = \sigma_2^2\).
Important robustness note: the classical F-test for variance ratios is sensitive to non-normality and outliers. If normality is doubtful, complement this test with robust alternatives such as Levene's or the Brown-Forsythe test, or with resampling.
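As a minimal sketch of that robustness advice (using simulated normal samples, not data from the document), SciPy provides the Brown-Forsythe variant of Levene's test alongside the classical variance ratio:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 2.0, size=13)   # simulated sample 1
x2 = rng.normal(0.0, 2.0, size=10)   # simulated sample 2

# Classical F statistic: ratio of unbiased sample variances
# (sensitive to departures from normality).
F = np.var(x1, ddof=1) / np.var(x2, ddof=1)

# Brown-Forsythe test (Levene's test with center='median'):
# a robust complement when normality is doubtful.
stat, p = stats.levene(x1, x2, center="median")
print(F, stat, p)
```

In practice one would report both: agreement strengthens the conclusion, while disagreement suggests the normality assumption is doing the work.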
110.1.1 Case 1: \(\mu_1, \mu_2\) unknown
If the Sample Variance is computed by
\[ s_i^2 = \frac{1}{n_i} \sum_{j=1}^{n_i}\left( x_{ij} - \bar{x}_i \right)^2 \]
for \(i= 1, 2\) then we use the test statistic
\[ \frac{\frac{\frac{n_1 s_1^2}{\sigma^2}}{n_1-1}}{\frac{\frac{n_2 s_2^2}{\sigma^2}}{n_2-1}} = \frac{n_1 s_1^2(n_2 - 1)}{n_2 s_2^2(n_1 - 1)} \]
with the following distribution
\[ \frac{\frac{\chi^2(n_1-1)}{n_1 - 1}}{\frac{\chi^2(n_2-1)}{n_2-1}} \sim \text{F}(n_1-1, n_2-1) \]
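A quick numerical sketch (with simulated normal samples, which are not part of the document) confirms the algebra above: the statistic built from the divide-by-\(n\) variances equals the plain ratio of unbiased variances, which is the second form below.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(5.0, 2.0, size=13)
x2 = rng.normal(5.0, 2.0, size=10)
n1, n2 = len(x1), len(x2)

# Biased (divide-by-n) sample variances, as in the formula above.
s1_b = np.var(x1, ddof=0)
s2_b = np.var(x2, ddof=0)

# Test statistic: n1*s1^2*(n2-1) / (n2*s2^2*(n1-1))
F = (n1 * s1_b * (n2 - 1)) / (n2 * s2_b * (n1 - 1))

# The same value as the ratio of unbiased (divide-by-(n-1)) variances.
F_check = np.var(x1, ddof=1) / np.var(x2, ddof=1)
print(F, F_check)  # identical up to floating-point rounding
```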
If the Sample Variance is computed by
\[ s_i^2 = \frac{1}{n_i - 1} \sum_{j=1}^{n_i}\left( x_{ij} - \bar{x}_i \right)^2 \]
for \(i= 1, 2\) then we use the test statistic
\[ \frac{\frac{\frac{(n_1-1) s_1^2}{\sigma^2}}{n_1-1}}{\frac{\frac{(n_2-1) s_2^2}{\sigma^2}}{n_2-1}} = \frac{s_1^2}{s_2^2} \]
with the following distribution
\[ \frac{\frac{\chi^2(n_1-1)}{n_1 - 1}}{\frac{\chi^2(n_2-1)}{n_2-1}} \sim \text{F}(n_1-1, n_2-1) \]
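This second form is the one most commonly used in practice. A minimal sketch with SciPy (again on simulated samples, an assumption not taken from the document) computes the statistic and its right-tail p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x1 = rng.normal(5.0, 2.0, size=13)
x2 = rng.normal(5.0, 2.0, size=10)
n1, n2 = len(x1), len(x2)

# Unbiased sample variances (divide by n-1).
F = np.var(x1, ddof=1) / np.var(x2, ddof=1)

# Under H0, F ~ F(n1-1, n2-1); right-tail p-value via the
# survival function.
p_right = stats.f.sf(F, n1 - 1, n2 - 1)
print(F, p_right)
```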
110.1.2 Case 2: \(\mu_1, \mu_2\) known
If the Sample Variance is computed by
\[ s_i^2 = \frac{1}{n_i} \sum_{j=1}^{n_i}\left( x_{ij} - \mu_i \right)^2 \]
for \(i= 1, 2\) then we use the test statistic
\[ \frac{\frac{\frac{n_1 s_1^2}{\sigma^2}}{n_1}}{\frac{\frac{n_2 s_2^2}{\sigma^2}}{n_2}} = \frac{s_1^2}{s_2^2} \]
with the following distribution
\[ \frac{\frac{\chi^2(n_1)}{n_1}}{\frac{\chi^2(n_2)}{n_2}} \sim \text{F}(n_1, n_2) \]
If the Sample Variance is computed by
\[ s_i^2 = \frac{1}{n_i - 1} \sum_{j=1}^{n_i}\left( x_{ij} - \mu_i \right)^2 \]
for \(i = 1, 2\) then we use the test statistic
\[ \frac{\frac{\frac{(n_1-1) s_1^2}{\sigma^2}}{n_1}}{\frac{\frac{(n_2-1) s_2^2}{\sigma^2}}{n_2}} = \frac{(n_1-1)s_1^2 n_2}{(n_2-1)s_2^2 n_1} \]
with the following distribution
\[ \frac{\frac{\chi^2(n_1)}{n_1}}{\frac{\chi^2(n_2)}{n_2}} \sim \text{F}(n_1, n_2) \]
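The two Case 2 statistics reduce to the same number, since the correction factor in the second form exactly undoes the \(n_i - 1\) divisor. A short sketch (with simulated samples and assumed known means, neither taken from the document) makes this concrete:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu1 = mu2 = 5.0                       # known population means (assumed)
x1 = rng.normal(mu1, 2.0, size=13)
x2 = rng.normal(mu2, 2.0, size=10)
n1, n2 = len(x1), len(x2)

# Divide-by-n estimator about the known mean (first Case 2 formula).
s1 = np.mean((x1 - mu1) ** 2)
s2 = np.mean((x2 - mu2) ** 2)
F_a = s1 / s2

# Divide-by-(n-1) estimator (second Case 2 formula) with its
# correction factor.
t1 = np.sum((x1 - mu1) ** 2) / (n1 - 1)
t2 = np.sum((x2 - mu2) ** 2) / (n2 - 1)
F_b = ((n1 - 1) * t1 * n2) / ((n2 - 1) * t2 * n1)

print(F_a, F_b)                # both reduce to the same value
p = stats.f.sf(F_a, n1, n2)    # right-tail p-value, df = (n1, n2)
```

Note the degrees of freedom are \((n_1, n_2)\) here, not \((n_1 - 1, n_2 - 1)\): no degree of freedom is spent estimating the mean.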
110.2 Example 1: Critical Value (Region)
110.2.1 Problem
| Quantity | Symbol | Value |
| --- | --- | --- |
| Expected Value (Population 1) | \(\mu_1\) | ? |
| Variance (Population 1) | \(\sigma_1^2\) | ? |
| Expected Value (Population 2) | \(\mu_2\) | ? |
| Variance (Population 2) | \(\sigma_2^2\) | ? |
| Size of Sample 1 | \(n_1\) | 13 |
| Variance of Sample 1 | \(s_1^2\) | 5.29 |
| Size of Sample 2 | \(n_2\) | 10 |
| Variance of Sample 2 | \(s_2^2\) | 7.84 |
| Test Value H\(_0\) | \(\frac{\sigma_2^2}{\sigma_1^2}\) | 1 |
| Type I error | \(\alpha\) | 0.05 |
| Critical Value | \(c\) | ? |
We test \(H_0: \sigma_1^2 = \sigma_2^2\) against the right-sided alternative \(H_A: \sigma_2^2/\sigma_1^2 > 1\). Therefore we use \(F = s_2^2/s_1^2\) with numerator degrees of freedom \(n_2-1=9\) and denominator degrees of freedom \(n_1-1=12\).
110.2.2 Solution
The test statistic is
\[ \frac{s_2^2}{s_1^2} = \frac{7.84}{5.29} = 1.48 \]
which, under \(H_0\), follows an \(\text{F}\) distribution with \(\text{df}_1 = 9\) and \(\text{df}_2 = 12\); the right-tail critical value at \(\alpha = 0.05\) satisfies
\[ \begin{align*}\text{P}\left( \text{F}_{9,12} \geq c \right) &= 0.05 \\ c &= 2.796\end{align*} \]
Since the ratio of the sample variances is 1.48 \(<\) 2.796, we fail to reject the Null Hypothesis H\(_0: \sigma_1^2 = \sigma_2^2\).
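The critical value and the decision can be reproduced with SciPy (a verification sketch using the numbers given in the problem table):

```python
from scipy import stats

n1, n2 = 13, 10
s1_sq, s2_sq = 5.29, 7.84
alpha = 0.05

# Test statistic with sample 2's variance in the numerator,
# so df1 = n2 - 1 = 9 and df2 = n1 - 1 = 12.
F = s2_sq / s1_sq

# Right-tail critical value: P(F_{9,12} >= c) = alpha.
c = stats.f.ppf(1 - alpha, n2 - 1, n1 - 1)

print(round(F, 2), round(c, 3))  # 1.48, 2.796
reject = F >= c                  # False: fail to reject H0
```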
110.3 Example 2: p-value
110.3.1 Problem
| Quantity | Symbol | Value |
| --- | --- | --- |
| Expected Value (Population 1) | \(\mu_1\) | ? |
| Variance (Population 1) | \(\sigma_1^2\) | ? |
| Expected Value (Population 2) | \(\mu_2\) | ? |
| Variance (Population 2) | \(\sigma_2^2\) | ? |
| Size of Sample 1 | \(n_1\) | 13 |
| Variance of Sample 1 | \(s_1^2\) | 5.29 |
| Size of Sample 2 | \(n_2\) | 10 |
| Variance of Sample 2 | \(s_2^2\) | 7.84 |
| Test Value H\(_0\) | \(\frac{\sigma_2^2}{\sigma_1^2}\) | 1 |
| Type I error | \(\alpha\) | 0.05 |
| p-value | \(p\) | ? |
We again use the right-sided alternative \(H_A: \sigma_2^2/\sigma_1^2 > 1\), so the one-sided p-value is computed from \(F = s_2^2/s_1^2\) with df\(_1=9\) and df\(_2=12\).
110.3.2 Solution
The test statistic is
\[ \frac{s_2^2}{s_1^2} = \frac{7.84}{5.29} = 1.48 \]
which, under \(H_0\), follows an \(\text{F}\) distribution with \(\text{df}_1 = 9\) and \(\text{df}_2 = 12\), so the right-tail p-value is
\[ \text{P}\left( \text{F}_{9,12} \geq 1.48 \right) = 0.2579 \]
Since the p-value 0.2579 \(>\) 0.05, we fail to reject the Null Hypothesis H\(_0: \sigma_1^2 = \sigma_2^2\).
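The p-value can likewise be checked with SciPy (a verification sketch using the numbers from the problem table):

```python
from scipy import stats

n1, n2 = 13, 10
s1_sq, s2_sq = 5.29, 7.84

# Same statistic as Example 1; df = (n2 - 1, n1 - 1) = (9, 12).
F = s2_sq / s1_sq

# Right-tail p-value via the survival function.
p = stats.f.sf(F, n2 - 1, n1 - 1)
print(round(F, 2), round(p, 4))  # the document reports 0.2579
```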