120  Two Way Analysis of Variance (2-way ANOVA)

Two Way ANOVA is an extension of One Way ANOVA from Chapter 118: instead of one, there are two categorical variables which determine the groups (there are two “explanatory factors”). In addition, it is possible to take into account interaction effects that may exist between the two factors.
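In model form, the two-way ANOVA with interaction can be written as follows (a standard textbook formulation; the notation is not taken from the R module below):

\[ y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}, \qquad \varepsilon_{ijk} \sim N(0, \sigma^2), \]

where \(y_{ijk}\) is the \(k\)-th observation in the cell defined by level \(i\) of the first factor and level \(j\) of the second factor, \(\alpha_i\) and \(\beta_j\) are the main effects, and \((\alpha\beta)_{ij}\) is the interaction term. The three F-tests discussed below correspond to the null hypotheses that each of these sets of terms is zero.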

120.1 Analysis based on p-values and confidence intervals

120.1.1 Software

The Two Way ANOVA R Module can be found on the publicly available website:

  • https://compute.wessa.net/rwasp_Two%20Factor%20ANOVA.wasp

The same R Module is also available in RFC under the “Hypotheses / Empirical Tests” menu item.

120.1.2 Data & Parameters

This R module contains the following fields (a minimal data-layout sketch follows the list):

  • Data X: a multivariate dataset containing quantitative data
  • Names of X columns: a space-delimited list of names (one name for each column)
  • Response Variable: the positive integer index of the column in the multivariate dataset which corresponds to the response/endogenous variable (i.e. the variable we wish to explain or predict)
  • Factor Variable 1: the positive integer index of the column in the multivariate dataset which corresponds to the first explanatory variable (i.e. a qualitative variable containing the single-quoted group labels)
  • Factor Variable 2: the positive integer index of the column in the multivariate dataset which corresponds to the second explanatory variable (i.e. a qualitative variable containing the single-quoted group labels)
  • Include Intercept Term. This parameter can be set to the following values:
    • FALSE
    • TRUE
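As a minimal illustration of these fields (the column names and values here are hypothetical, not the module’s example dataset), the expected layout is one quantitative response column plus two qualitative factor columns:

# Hypothetical data layout for the module's fields (names/values illustrative)
x <- data.frame(
  score  = c(4.2, 5.1, 3.8, 6.0),  # quantitative response variable
  treat  = c("A", "A", "B", "B"),  # first factor (the web module expects single-quoted labels)
  gender = c("F", "M", "F", "M")   # second factor
)
par1 <- 1     # Response Variable  = column index of score
par2 <- 2     # Factor Variable 1  = column index of treat
par3 <- 3     # Factor Variable 2  = column index of gender
par4 <- TRUE  # Include Intercept Term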

120.1.3 Output

Consider the problem of measuring the effect of a treatment with two different categories (“A” and “B”) on a response variable within an experimental setting, while simultaneously taking into account possible gender-related differences. This naming convention implies that the word “Treatment” is to be interpreted as a synonym for “explanatory variable” or “exogenous variable” (it does not necessarily have to be related to drugs or medical treatments).

We wish to examine the following effects on the response variable:

  • the pure (main) effect of the first factor, i.e. the treatment
  • the pure (main) effect of the second factor, i.e. gender
  • the combined (interaction) effect of treatment and gender (e.g. do females benefit more from Treatment A than males?)

The results from the Two Way ANOVA analysis are shown below:

[Interactive Shiny app: Two Way ANOVA output (ANOVA table, Tukey multiple comparisons of means, and Levene’s test).]

The ANOVA Table is used to assess the Null and Alternative Hypothesis and is based on the F-Test, just like in the One Way ANOVA case. The row which corresponds to “treatment” shows an F-statistic of 95.113 with \(p \simeq 1.668 \times 10^{-10}\). This is a marginal (averaged-over-gender) treatment effect. Because the interaction is significant (see below), this result should not be interpreted as a standalone one-way ANOVA conclusion.

The next row corresponds to the gender effect. This is also a marginal (averaged-over-treatment) main effect. Since \(p \simeq 5.189 \times 10^{-15}\) we reject the Null Hypothesis for the gender main effect, but its interpretation must be qualified by the significant interaction.

The row which corresponds to “treatment:gender” is related to the “combined” effect of the treatment and gender variables. Since \(p \simeq 9.582 \times 10^{-7}\) we reject the Null Hypothesis and conclude that the effect of the treatment for males is significantly different from the effect of the treatment for females. In other words, the effect of the treatment depends on gender, so simple effects (e.g. treatment differences within each gender) should be interpreted first.
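A quick visual check of the interaction (a minimal sketch, assuming the xdf data frame constructed in the script at the end of this chapter) is an interaction plot of the cell means; markedly non-parallel lines suggest an interaction effect:

# Plot mean response per level of factor 1, one trace per level of factor 2
with(xdf, interaction.plot(Treatment_A, Treatment_B, Response,
  ylab = 'Mean response', trace.label = 'Factor 2'))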

The results shown in the Table called “Tukey multiple comparisons of means” (Tukey 1949) provide much more detailed information; a graphical view of these pairwise intervals is sketched after the list. We briefly discuss the most interesting rows of the table:

  • B-A: the difference between both groups is 0.976 which implies that subjects in group B have a higher response score than subjects in group A. Since \(0 \notin [0.771, 1.181]\) (and \(p \simeq 0 < \alpha\)) we reject the Null Hypothesis and conclude that the difference is significant. The mean response for treatment B is significantly higher than for treatment A.

  • M-F: the difference between both groups is -1.514 which implies that females have a higher response score than males. Since \(0 \notin [-1.719, -1.309]\) (and \(p \simeq 0 < \alpha\)) we reject the Null Hypothesis and conclude that the difference is significant. The mean response for females is significantly higher than for males.

  • B:F-A:F: the difference between both groups is 1.6 which implies that females in group B have a higher response score than females in group A. Since \(0 \notin [1.214, 1.986]\) (and \(p \simeq 0 < \alpha\)) we reject the Null Hypothesis and conclude that the difference is significant. The mean response for females who receive treatment B is significantly higher than for females receiving treatment A.

  • A:M-A:F: the difference between both groups is -0.89 which implies that females in group A have a higher response score than males in group A. Since \(0 \notin [-1.276, -0.504]\) (and \(p \simeq 0 < \alpha\)) we reject the Null Hypothesis and conclude that the difference is significant. The mean response for females who receive treatment A is significantly higher than for males receiving treatment A.

  • B:M-B:F: the difference between both groups is -2.139 which implies that females in group B have a higher response score than males in group B. Since \(0 \notin [-2.525, -1.752]\) (and \(p \simeq 0 < \alpha\)) we reject the Null Hypothesis and conclude that the difference is significant. The mean response for females who receive treatment B is significantly higher than for males receiving treatment B.

  • B:M-A:M: the difference between both groups is 0.351 which implies that males in group B have a higher response score than males in group A. Since \(0 \in [-0.035, 0.738]\) (and \(p > \alpha\)) we fail to reject the Null Hypothesis and conclude that the difference is not significant. The mean response for males who receive treatment B is not significantly different from the response for males receiving treatment A.
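The pairwise confidence intervals from the Tukey table can also be inspected graphically (a sketch, assuming the thsd object computed in the script below); intervals that do not cross zero correspond to significant differences:

# One panel per term (factor 1, factor 2, interaction)
plot(thsd)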

Just as was the case for the Unpaired Two Sample t-Test, the Two Way ANOVA test makes the assumption of equal Variances for each group. This can be assessed with the diagnostic Hypothesis Test called “Levene’s Test for Homogeneity of Variance” (Levene 1960), which is shown in the last table of the output. The results show that the Null Hypothesis (i.e. Homogeneity of Variance) is rejected. Hence, the underlying assumption of the Two Way ANOVA test is not satisfied.

For reporting, include effect sizes for each main effect and interaction, e.g. partial eta-squared:

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}. \]
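A minimal sketch of this computation, based on the anova.xdf table produced by the script below (the ‘Sum Sq’ column and ‘Residuals’ row follow R’s standard anova() output):

# Partial eta-squared per effect: SS_effect / (SS_effect + SS_error)
ss <- anova.xdf[['Sum Sq']]
ss.error <- ss[nrow(anova.xdf)]  # the 'Residuals' row holds SS_error
eta.p2 <- ss[-nrow(anova.xdf)] / (ss[-nrow(anova.xdf)] + ss.error)
names(eta.p2) <- rownames(anova.xdf)[-nrow(anova.xdf)]
round(eta.p2, 3)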

Theoretically speaking one might dismiss the results from this Two Way ANOVA procedure (due to the violation of an underlying assumption). In practice, however, the departure from homogeneity of Variance does not have important effects when the groups are well-balanced. In this case we have the same number of females and males in groups A and B (i.e. the four groups have equal size). In other words, we may still be able to use the results from this analysis because the biasing effect of unequal Variances only emerges when the differences in Variance are related to sample size.
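Whether the design is balanced can be verified directly with a cross-tabulation of the two factors (a one-line sketch using the xdf data frame from the script below); in a balanced design all cell counts are equal:

# Cell counts per factor combination
with(xdf, table(Treatment_A, Treatment_B))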

To compute the Two Way Analysis of Variance (2-way ANOVA) on your local machine, the following script can be used in the R console.

Note: this local script is a generic template based on the mtcars dataset, included to make the syntax reproducible. The embedded app example above uses the dedicated interaction dataset and therefore shows different numeric output.

library(car)  # provides leveneTest()
x <- mtcars
par1 <- 1     # Response : mpg
par2 <- 2     # Factor : cyl
par3 <- 10    # Factor : gear
par4 <- TRUE  # Include Intercept Term
ylab <- 'Y Variable Name'  # plot labels from the web module template (unused here)
xlab <- 'X Variable Name'
main <- 'Title Goes Here'
# Extract the response and coerce both factor columns to character labels
x1 <- as.numeric(x[, par1])
f1 <- as.character(x[, par2])
f2 <- as.character(x[, par3])
xdf <- data.frame(x1, f1, f2)
names(xdf) <- c('Response', 'Treatment_A', 'Treatment_B')
# Fit the linear model with both main effects and their interaction;
# '- 1' drops the intercept when it is not requested
if (par4 == FALSE) {
  lmxdf <- lm(Response ~ Treatment_A * Treatment_B - 1, data = xdf)
} else {
  lmxdf <- lm(Response ~ Treatment_A * Treatment_B, data = xdf)
}
print(lmxdf)  # echoes the model call and the estimated coefficients
(aov.xdf <- aov(lmxdf))
(anova.xdf <- anova(lmxdf))
(V1 <- colnames(x)[par1])
(V2 <- colnames(x)[par2])
(V3 <- colnames(x)[par3])
# Tukey Honest Significant Difference comparisons (requires an intercept)
if (par4 == TRUE) {
  thsd <- TukeyHSD(aov.xdf)
  names(thsd) <- c(V2, V3, paste(V2, ':', V3, sep = ''))
  print(thsd)
} else {
  print('Must Include Intercept to use Tukey Test')
}
# Levene's test for homogeneity of variance (shown in the last output table)
(leveneTest(lmxdf))

Call:
lm(formula = Response ~ Treatment_A * Treatment_B, data = xdf)

Coefficients:
              (Intercept)               Treatment_A6  
                   21.500                     -1.750  
             Treatment_A8               Treatment_B4  
                   -6.450                      5.425  
             Treatment_B5  Treatment_A6:Treatment_B4  
                    6.700                     -5.425  
Treatment_A8:Treatment_B4  Treatment_A6:Treatment_B5  
                       NA                     -6.750  
Treatment_A8:Treatment_B5  
                   -6.350  

Call:
   aov(formula = lmxdf)

Terms:
                Treatment_A Treatment_B Treatment_A:Treatment_B Residuals
Sum of Squares     824.7846      8.2519                 23.8907  269.1200
Deg. of Freedom           2           2                       3        24

Residual standard error: 3.348632
1 out of 9 effects not estimable
Estimated effects may be unbalanced
Analysis of Variance Table

Response: Response
                        Df Sum Sq Mean Sq F value    Pr(>F)    
Treatment_A              2 824.78  412.39 36.7770 4.916e-08 ***
Treatment_B              2   8.25    4.13  0.3679    0.6960    
Treatment_A:Treatment_B  3  23.89    7.96  0.7102    0.5554    
Residuals               24 269.12   11.21                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
[1] "mpg"
[1] "cyl"
[1] "gear"
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = lmxdf)

$cyl
          diff       lwr        upr     p adj
6-4  -6.920779 -10.96399 -2.8775652 0.0007419
8-4 -11.563636 -14.93298 -8.1942913 0.0000000
8-6  -4.642857  -8.51394 -0.7717744 0.0166543

$gear
         diff       lwr      upr     p adj
4-3 0.5599134 -2.678867 3.798694 0.9027767
5-3 1.1092641 -3.209110 5.427638 0.7988777
5-4 0.5493506 -3.901927 5.000628 0.9490971

$`cyl:gear`
                 diff        lwr       upr     p adj
6:3-4:3 -1.750000e+00 -15.689992 12.189992 0.9999560
8:3-4:3 -6.450000e+00 -18.296715  5.396715 0.6507761
4:4-4:3  5.425000e+00  -6.647387 17.497387 0.8319910
6:4-4:3 -1.750000e+00 -14.475413 10.975413 0.9999122
8:4-4:3            NA         NA        NA        NA
4:5-4:3  6.700000e+00  -7.239992 20.639992 0.7778059
6:5-4:3 -1.800000e+00 -17.896516 14.296516 0.9999819
8:5-4:3 -6.100000e+00 -20.039992  7.839992 0.8505445
8:3-6:3 -4.700000e+00 -13.393112  3.993112 0.6587277
4:4-6:3  7.175000e+00  -1.823226 16.173226 0.1962607
6:4-6:3 -3.552714e-15  -9.857063  9.857063 1.0000000
8:4-6:3            NA         NA        NA        NA
4:5-6:3  8.450000e+00  -2.931956 19.831956 0.2698785
6:5-6:3 -5.000000e-02 -13.989992 13.889992 1.0000000
8:5-6:3 -4.350000e+00 -15.731956  7.031956 0.9220157
4:4-8:3  1.187500e+01   6.679872 17.070128 0.0000017
6:4-8:3  4.700000e+00  -1.871375 11.271375 0.3126067
8:4-8:3            NA         NA        NA        NA
4:5-8:3  1.315000e+01   4.456888 21.843112 0.0008236
6:5-8:3  4.650000e+00  -7.196715 16.496715 0.9107420
8:5-8:3  3.500000e-01  -8.343112  9.043112 1.0000000
6:4-4:4 -7.175000e+00 -14.144996 -0.205004 0.0402197
8:4-4:4            NA         NA        NA        NA
4:5-4:4  1.275000e+00  -7.723226 10.273226 0.9998900
6:5-4:4 -7.225000e+00 -19.297387  4.847387 0.5363530
8:5-4:4 -1.152500e+01 -20.523226 -2.526774 0.0055745
8:4-6:4            NA         NA        NA        NA
4:5-6:4  8.450000e+00  -1.407063 18.307063 0.1347959
6:5-6:4 -5.000000e-02 -12.775413 12.675413 1.0000000
8:5-6:4 -4.350000e+00 -14.207063  5.507063 0.8448091
4:5-8:4            NA         NA        NA        NA
6:5-8:4            NA         NA        NA        NA
8:5-8:4            NA         NA        NA        NA
6:5-4:5 -8.500000e+00 -22.439992  5.439992 0.5126836
8:5-4:5 -1.280000e+01 -24.181956 -1.418044 0.0194305
8:5-6:5 -4.300000e+00 -18.239992  9.639992 0.9763412

Levene's Test for Homogeneity of Variance (center = median)
      Df F value  Pr(>F)  
group  7  2.1926 0.07172 .
      24                  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Levene, Howard. 1960. “Robust Tests for Equality of Variances.” In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, edited by Ingram Olkin, S. G. Ghurye, Wassily Hoeffding, William G. Madow, and Henry B. Mann, 278–92. Stanford, CA: Stanford University Press.
Tukey, John W. 1949. “Comparing Individual Means in the Analysis of Variance.” Biometrics 5 (2): 99–114. https://doi.org/10.2307/3001913.