126  One Way Analysis of Variance (1-way ANOVA)

126.1 Hypotheses

One Way Analysis of Variance is a natural extension of the Unpaired Two Sample t-Test and is used when there are more than two independent samples. Each sample can be thought of as corresponding to a different treatment group. Formally, we can write the Hypothesis Test as follows:

\[ \begin{cases}\text{H}_0: \mu_1 = \mu_2 = \mu_3 = \ldots = \mu_k \\ \text{H}_A: \exists\, i \neq j: \mu_i \neq \mu_j\end{cases} \]

where \(k\) is the number of independent samples or treatment groups.

In other words, we test whether at least two groups have different means. Imagine that three or more medical treatments are available: one could use the One Way ANOVA test to quickly determine whether there is a treatment effect at all. The test does not, by itself, tell us which treatment might be beneficial.

If we need detailed information about the effects of individual groups or treatments, it is necessary to compute a series of Unpaired Two Sample t-Tests, one for each pair of treatments or groups. Since this is a rather tedious and repetitive undertaking, we will use software which does this automatically. In other words, we will (even though this is not part of the ANOVA procedure proper) also be testing the following hypotheses:

\[ \begin{cases}\text{H}_0: \mu_i = \mu_j \\\text{H}_A: \mu_i \neq \mu_j\end{cases} \]

for all combinations \(i,j = 1, 2, \ldots, k\) where \(i \neq j\) and \(k\) is the number of independent samples.
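As a minimal base-R sketch of such pairwise tests (using the anorexia dataset from MASS, which also serves as the worked example later in this chapter; note that pairwise.t.test is a generic base-R approach, not the Tukey method used by the module):

# All unadjusted pairwise t-tests on the anorexia data
# (by default a standard deviation pooled across all groups is used)
library(MASS)
pairwise.t.test(anorexia$Postwt, anorexia$Treat, p.adjust.method = "none")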

The problem with performing a sequence of statistical Hypothesis Tests, however, is that the probability of making a type I error accumulates. Each test involves a type I error risk of \(\alpha\), and when we run \(\frac{k^2 - k}{2}\) t-Tests in sequence (this is the number of pairs that can be formed from \(k\) samples) the cumulative type I error ends up much higher than the originally anticipated \(\alpha\).
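To make this concrete: under the simplifying assumption that the \(m = \frac{k^2 - k}{2}\) pairwise tests are independent, the probability of at least one type I error is

\[ P(\text{at least one type I error}) = 1 - (1 - \alpha)^m, \]

so for \(k = 3\) groups (\(m = 3\) pairs) and \(\alpha = 0.05\) this is already \(1 - 0.95^3 \approx 0.143\), nearly three times the nominal level.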

For this reason we must take the total number of pairwise comparisons (\(\frac{k^2 - k}{2}\)) into account. The p-values should be adjusted upward (inflated) in order to make sure that the overall type I error reflects the chosen \(\alpha\) level. The method employed in the software we use (see next section) is called “Tukey’s Honestly Significant Differences Test” (Tukey 1949) and includes the two-sided 95% confidence intervals and p-values for each pair of samples or treatments.

The ANOVA test statistic is

\[ F = \frac{MS_{between}}{MS_{within}} = \frac{SS_{between}/(k-1)}{SS_{within}/(N-k)}, \]

which is compared with an \(F\) distribution with \((k-1, N-k)\) degrees of freedom, where \(N\) is the total number of observations across all groups.
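As a quick sanity check (a minimal R sketch using the degrees of freedom of the example below, with \(k = 3\) groups and \(N = 72\) observations), the p-value follows directly from the upper tail of the F distribution:

# Upper-tail p-value of the observed F statistic with (k-1, N-k) = (2, 69) df
pf(8.6506, df1 = 2, df2 = 69, lower.tail = FALSE)
# approximately 0.0004443, matching the ANOVA table below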

126.2 Analysis based on p-values and confidence intervals

126.2.1 Software

The One Way ANOVA R Module can be found on the publicly available website:

  • https://compute.wessa.net/rwasp_One%20Factor%20ANOVA.wasp

The same R Module is also available in RFC under the “Hypotheses / Empirical Tests” menu item.

126.2.2 Data & Parameters

This R module contains the following fields (an illustrative data layout is sketched after the list):

  • Data X: a multivariate dataset containing quantitative data
  • Names of X columns: a space delimited list of names (one name for each column)
  • Response Variable: a positive integer value of the column in the multivariate dataset which corresponds to the response/endogenous variable (i.e. the variable we wish to explain or predict)
  • Factor Variable: a positive integer value of the column in the multivariate dataset which corresponds to the explanatory variable (i.e. a qualitative variable containing the single-quoted group labels)
  • Include Intercept Term. This parameter can be set to the following values:
    • FALSE
    • TRUE
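As an illustration, here is a minimal sketch of how the anorexia data from the next section might be arranged for these fields; this layout is an assumption for illustration only, so consult the module's documentation for the exact input format:

# Hypothetical layout: column 1 = quantitative response (Postwt),
# column 2 = single-quoted group labels (Treat)
library(MASS)
head(data.frame(Postwt = anorexia$Postwt,
                Treat  = paste0("'", as.character(anorexia$Treat), "'")))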

126.2.3 Output

Consider the problem of measuring the effect of three therapies (“Family Therapy”, “Cognitive Behavior Therapy”, and “Control”) on the post-therapy weight of anorexia patients within an experimental (medical) setting. We not only wish to determine whether there is a treatment effect: if there is a significant effect, we also need to know which treatment is best.

The results from the One Way ANOVA analysis are shown below.

[Interactive Shiny app: One Way ANOVA results for the anorexia data]

The “coefficients” show the effects of each treatment group. The first number (\(\bar{x}_1 = 85.697\) pounds) is the mean of the CBT group, which serves as the “baseline”. The second and third numbers (\(\bar{x}_2 - \bar{x}_1 = -4.589\) and \(\bar{x}_3 - \bar{x}_1 = 4.798\)) show the effects of the Control and FT treatments relative to this baseline. Note that the order in which these groups are listed is alphabetical. Hence, it would have been better to name the reference group (in this case the placebo group) in such a way that it precedes the other groups alphabetically (e.g. A instead of Control).

In any case, it seems like Treatment FT has the most beneficial effect because the post-treatment weight is (on average) 4.798 pounds higher than for CBT.
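These effects can be cross-checked against the raw group means (a minimal sketch using the anorexia dataset from MASS, which the script below also uses):

# Group means of post-treatment weight; they correspond (approximately) to
# the baseline 85.697 (CBT), 85.697 - 4.589 = 81.108 (Cont),
# and 85.697 + 4.798 = 90.495 (FT)
library(MASS)
aggregate(Postwt ~ Treat, data = anorexia, FUN = mean)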

The Analysis of Variance Table is used to assess the Null and Alternative Hypothesis and is based on the ratio of two Variances (i.e. the “explained” Variance divided by the “unexplained” Variance). Since the ratio of two Variances follows an F-Distribution, we need to use an F-Test (F value \(= 459.49/53.12 = 8.6506\)). The corresponding p-value is \(p \simeq 0.0004443\) which is certainly small enough for most researchers to reject the Null Hypothesis. We conclude that the hypothesis \(\mu_1 = \mu_2 = \mu_3\) must be rejected which means that there is a significant treatment effect.

For reporting, include an ANOVA effect size such as:

\[ \eta^2 = \frac{SS_{between}}{SS_{total}}. \]
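With the sums of squares from the ANOVA table below, this gives

\[ \eta^2 = \frac{918.987}{918.987 + 3665.058} \approx 0.20, \]

i.e. roughly 20% of the total variation in post-treatment weight is explained by the treatment groups.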

Even though the Null Hypothesis was rejected, we still need to examine the table containing the so-called “Tukey’s Honestly Significant Differences” for each pair of treatments. The difference between Control and CBT, for instance, is -4.588859, which implies that \(\bar{x}_2 < \bar{x}_1\). The corresponding 95% confidence interval is [-9.303772, 0.1260527], which contains zero. We conclude that the difference between Control and CBT is not significantly different from zero. The p-value (\(\simeq 0.0581141\)) is also too high to allow us to reject H\(_0\). Only the difference between FT and Control is significantly different from zero. This alone does not establish that FT is the best treatment overall; it only shows a significant difference for that pair.

Just as was the case for the Unpaired Two Sample t-Test, the One Way ANOVA test makes the assumption of equal Variances for each group. This can be assessed by the diagnostic Hypothesis Test called “Levene’s Test for Homogeneity of Variance” (Levene 1960), which is shown near the bottom of the output. The results show that the Null Hypothesis (i.e. Homogeneity of Variance) cannot be rejected. Hence, there is no evidence against the underlying equal-variance assumption of the One Way ANOVA test.

To compute the One Way Analysis of Variance (1-way ANOVA) on your local machine, the following script can be used in the R console.

Note: this script reproduces the chapter example with anorexia data (same dataset as the embedded app).

library(car)   # provides leveneTest()
library(MASS)  # provides the anorexia dataset

x <- anorexia
par3 <- TRUE  # include intercept (constant) term

# Response = post-treatment weight, Treatment = therapy group
xdf <- na.omit(data.frame(Response = x$Postwt, Treatment = as.factor(x$Treat)))
myformula <- Response ~ Treatment
myformulam1 <- Response ~ Treatment - 1

# fit the linear model, with or without the intercept term
if (par3) {
  lmxdf <- lm(myformula, data = xdf)
} else {
  lmxdf <- lm(myformulam1, data = xdf)
}
print(lmxdf)                 # echoes the call and the fitted coefficients
(aov.xdf <- aov(lmxdf))      # ANOVA fit
(anova.xdf <- anova(lmxdf))  # ANOVA table with the F test

# Tukey HSD requires the intercept parameterization
if (par3) {
  thsd <- TukeyHSD(aov.xdf)
  print(thsd)
} else {
  print("Must include intercept to use Tukey test")
}
(lt.lmxdf <- leveneTest(lmxdf))  # Levene's test for homogeneity of variance

Call:
lm(formula = myformula, data = xdf)

Coefficients:
  (Intercept)  TreatmentCont    TreatmentFT  
       85.697         -4.589          4.798  

Call:
   aov(formula = lmxdf)

Terms:
                Treatment Residuals
Sum of Squares    918.987  3665.058
Deg. of Freedom         2        69

Residual standard error: 7.288126
Estimated effects may be unbalanced
Analysis of Variance Table

Response: Response
          Df Sum Sq Mean Sq F value    Pr(>F)    
Treatment  2  919.0  459.49  8.6506 0.0004443 ***
Residuals 69 3665.1   53.12                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = lmxdf)

$Treatment
              diff       lwr        upr     p adj
Cont-CBT -4.588859 -9.303772  0.1260527 0.0581141
FT-CBT    4.797566 -0.534965 10.1300969 0.0864030
FT-Cont   9.386425  3.941386 14.8314647 0.0002930

Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  2  1.7671 0.1785
      69               
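As a quick visual follow-up (a minimal sketch reusing the aov.xdf object created by the script above), the Tukey confidence intervals can be plotted; intervals that cross zero correspond to pairwise differences that are not significant:

# Plot the Tukey HSD 95% family-wise confidence intervals
plot(TukeyHSD(aov.xdf))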

126.3 Assumptions

The One Way ANOVA test makes the following assumptions (informal checks are sketched after the list):

  • The residuals are (approximately) normally distributed.
  • The observations are independent, both within and between groups.
  • The Variances of the populations are equal.
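These can be checked informally, for instance with a Shapiro-Wilk test and a normal QQ plot of the residuals (a minimal sketch reusing the lmxdf object from the script above; Levene's test for the equal-variance assumption was already included there):

# Informal checks of the normality assumption on the model residuals
shapiro.test(residuals(lmxdf))  # Shapiro-Wilk test
qqnorm(residuals(lmxdf))        # normal QQ plot ...
qqline(residuals(lmxdf))        # ... with reference line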

126.4 Alternatives

In theory it is possible to use the alternatives of the Unpaired Two Sample t-Test, applied to all pairwise combinations of groups. Of course, one should beware of the problems associated with applying multiple Hypothesis Tests, each of which carries its own type I error risk.

The Kruskal-Wallis test (Chapter 127) is a non-parametric alternative to the One Way ANOVA. It is based on ranks rather than raw values and does not require the assumptions of normality or equal variances. The R module that is available through the menu “Hypotheses / Multivariate (pair-wise) Testing” (use the Boxplot tab) also features various types of One Way ANOVA methods, including the Kruskal-Wallis approach; see the violin plot (between variance) option for an example.
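A minimal sketch of the Kruskal-Wallis test on the same anorexia data (base R, stats::kruskal.test):

# Rank-based alternative to the 1-way ANOVA
library(MASS)
kruskal.test(Postwt ~ Treat, data = anorexia)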

Levene, Howard. 1960. “Robust Tests for Equality of Variances.” In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, edited by Ingram Olkin, S. G. Ghurye, Wassily Hoeffding, William G. Madow, and Henry B. Mann, 278–92. Stanford, CA: Stanford University Press.
Tukey, John W. 1949. “Comparing Individual Means in the Analysis of Variance.” Biometrics 5 (2): 99–114. https://doi.org/10.2307/3001913.