Logistic regression (Chapter 136) handles binary outcomes (two categories). When the outcome variable has three or more categories, we need extensions: multinomial logistic regression for unordered categories and ordinal logistic regression for ordered categories.
138.1 Multinomial Logistic Regression
138.1.1 When to Use
Multinomial logistic regression is used when the outcome variable has three or more unordered categories. Examples include:
Mode of transportation (car, bus, bicycle, walking)
Political party preference (Democrat, Republican, Independent)
138.1.2 The Model
Multinomial logistic regression generalizes binary logistic regression by modeling the log-odds of each category relative to a reference category. For an outcome with \(J\) categories, the model estimates \(J - 1\) sets of coefficients.
If category 1 is the reference category, then for each category \(j = 2, ..., J\):
\[
\log \frac{P(Y = j)}{P(Y = 1)} = \beta_{j0} + \beta_{j1} X_1 + \beta_{j2} X_2 + \cdots + \beta_{jk} X_k
\]
138.1.3 Interpretation of Coefficients
The coefficients \(\beta_{jk}\) represent the change in the log-odds of category \(j\) (relative to the reference category) for a one-unit increase in \(X_k\), holding other predictors constant.
Exponentiating gives the relative risk ratio (RRR):
\[
\text{RRR}_{jk} = e^{\beta_{jk}}
\]
\(\text{RRR} > 1\): the predictor increases the likelihood of category \(j\) relative to the reference
\(\text{RRR} = 1\): no effect
\(\text{RRR} < 1\): the predictor decreases the likelihood of category \(j\) relative to the reference
Note: in multinomial-logit software this “RRR” terminology is standard, but it is a ratio of relative category probabilities (not a cohort-study risk ratio).
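For example, a coefficient of \(\beta_{jk} = 0.69\) gives
\[
\text{RRR}_{jk} = e^{0.69} \approx 2.0,
\]
so a one-unit increase in \(X_k\) roughly doubles the probability of category \(j\) relative to the reference category, holding other predictors constant.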
138.1.4 R Code
Multinomial logistic regression requires the nnet package:
```r
library(nnet)

# Example: Predicting program choice (General, Academic, Vocational)
# based on socioeconomic status and writing score
set.seed(42)
n <- 300

# Simulate data
ses <- sample(c("low", "middle", "high"), n, replace = TRUE, prob = c(0.3, 0.5, 0.2))
write_score <- rnorm(n, mean = 52, sd = 10)

# True probabilities depend on SES and writing score
prog_probs <- sapply(1:n, function(i) {
  base_gen <- 0
  base_acad <- -1 + 0.5 * (ses[i] == "high") + 0.3 * (ses[i] == "middle") +
    0.03 * write_score[i]
  base_voc <- 0.5 - 0.3 * (ses[i] == "high") - 0.02 * write_score[i]
  probs <- exp(c(base_gen, base_acad, base_voc))
  probs / sum(probs)
})

program <- apply(prog_probs, 2, function(p) {
  sample(c("General", "Academic", "Vocational"), 1, prob = p)
})

df <- data.frame(
  program = factor(program, levels = c("General", "Academic", "Vocational")),
  ses = factor(ses, levels = c("low", "middle", "high")),
  write = write_score
)

cat("Distribution of program choice:\n")
print(table(df$program))
cat("\n")

# Fit multinomial logistic regression (General is reference)
multi_model <- multinom(program ~ ses + write, data = df, trace = FALSE)
summary(multi_model)
```
```
Distribution of program choice:

   General   Academic Vocational 
        73        186         41 

Call:
multinom(formula = program ~ ses + write, data = df, trace = FALSE)

Coefficients:
           (Intercept) sesmiddle   seshigh       write
Academic    -0.1568732 0.3973003 0.9481817  0.01410383
Vocational   0.7455979 0.5936469 0.7612506 -0.03492423

Std. Errors:
           (Intercept) sesmiddle   seshigh      write
Academic     0.8060183 0.3065083 0.4361388 0.01513903
Vocational   1.1142991 0.4474315 0.6197140 0.02172068

Residual Deviance: 534.343
AIC: 550.343
```
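To interpret these estimates on the relative-risk-ratio scale, exponentiate the coefficients. In practice you would call `exp(coef(multi_model))` on the fitted object; the sketch below hard-codes the values printed above so it runs standalone:

```r
# Coefficients copied from the summary() output above (reference = General)
b <- rbind(
  Academic   = c(-0.1568732, 0.3973003, 0.9481817,  0.01410383),
  Vocational = c( 0.7455979, 0.5936469, 0.7612506, -0.03492423)
)
colnames(b) <- c("(Intercept)", "sesmiddle", "seshigh", "write")

# Relative risk ratios: RRR = exp(beta)
round(exp(b), 3)
```

For instance, the seshigh coefficient for Academic exponentiates to about 2.58: high-SES students have roughly 2.6 times the relative likelihood of choosing the Academic program over the General program compared with low-SES students, holding writing score constant.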
138.1.5 Assumptions
Independence of observations: Each observation is independent of the others
Independence of irrelevant alternatives (IIA): The odds of choosing between any two categories are independent of the other available categories. This is a strong assumption – for example, it implies that adding a new bus route should not change the relative odds of choosing a car vs. bicycle.
No perfect multicollinearity (identification): Predictors cannot be exact linear combinations of each other
Adequate sample size: At least 10 observations per predictor per category is a common rule of thumb
High (imperfect) multicollinearity is a practical estimation concern because it inflates standard errors and can destabilize coefficient estimates.
138.2 Ordinal Logistic Regression
138.2.1 When to Use
Ordinal logistic regression is used when the outcome variable has three or more ordered categories. Examples include:
Satisfaction ratings (low, medium, high)
Disease severity (mild, moderate, severe)
Unlike multinomial logistic regression, ordinal logistic regression takes advantage of the natural ordering, leading to a more parsimonious and powerful model.
138.2.2 The Proportional Odds Model
The most common ordinal logistic regression model is the proportional odds (or cumulative logit) model (McCullagh 1980). For an ordered outcome with \(J\) categories, it models the cumulative probabilities:
\[
\log \frac{P(Y \le j)}{P(Y > j)} = \alpha_j - (\beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k), \qquad j = 1, \ldots, J - 1
\]
There are \(J - 1\) intercepts (\(\alpha_j\)), one for each cumulative split
There is one set of regression coefficients \(\beta_k\) shared across all splits (the proportional odds assumption)
The negative sign is a convention ensuring that positive \(\beta\) values indicate higher categories
138.2.3 Interpretation of Coefficients
The coefficients \(\beta_k\) represent the change in the log-odds of being in a higher category (beyond any given cutpoint) for a one-unit increase in \(X_k\), holding other predictors constant.
Exponentiating gives the cumulative odds ratio:
\[
\text{OR}_k = e^{\beta_k}
\]
\(\text{OR} > 1\): the predictor increases the odds of being in a higher category
\(\text{OR} = 1\): no effect
\(\text{OR} < 1\): the predictor decreases the odds of being in a higher category
The proportional odds assumption means this odds ratio is the same regardless of which cumulative split we consider.
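To see why, write the model in odds form: the model implies \(\text{odds}(Y > j) = e^{-\alpha_j + \beta_1 X_1 + \cdots + \beta_k X_k}\), so for a one-unit increase in \(X_k\),
\[
\frac{P(Y > j \mid X_k = x + 1) \,/\, P(Y \le j \mid X_k = x + 1)}{P(Y > j \mid X_k = x) \,/\, P(Y \le j \mid X_k = x)} = e^{\beta_k},
\]
which does not depend on \(j\): the same \(\text{OR}_k\) applies at every cutpoint.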
138.2.4 R Code
Ordinal logistic regression uses the polr function from the MASS package:
```r
library(MASS)

# Example: Patient satisfaction as a function of age and treatment type
set.seed(42)
n <- 300
age <- rnorm(n, mean = 50, sd = 15)
treatment <- sample(c("Standard", "New"), n, replace = TRUE)

# Latent variable model
latent <- -1 + 0.02 * age + 0.8 * (treatment == "New") + rnorm(n)
satisfaction <- cut(latent, breaks = c(-Inf, -0.5, 0.5, Inf),
                    labels = c("Low", "Medium", "High"), ordered_result = TRUE)

df_ord <- data.frame(satisfaction = satisfaction, age = age,
                     treatment = factor(treatment))

cat("Distribution of satisfaction:\n")
print(table(df_ord$satisfaction))
cat("\n")

# Fit ordinal logistic regression
ord_model <- polr(satisfaction ~ age + treatment, data = df_ord, Hess = TRUE)
summary(ord_model)
```
```
Distribution of satisfaction:

   Low Medium   High 
    70     94    136 

Call:
polr(formula = satisfaction ~ age + treatment, data = df_ord,
    Hess = TRUE)

Coefficients:
                     Value Std. Error t value
age                0.04638   0.008269   5.608
treatmentStandard -1.42468   0.235671  -6.045

Intercepts:
            Value  Std. Error t value
Low|Medium  0.1768 0.4152     0.4259
Medium|High 1.8402 0.4294     4.2858

Residual Deviance: 568.0806
AIC: 576.0806
```
```r
# Odds ratios
cat("\nCumulative Odds Ratios:\n")
print(exp(coef(ord_model)))

# p-values (not shown by default in polr)
coef_table <- coef(summary(ord_model))
p_values <- pnorm(abs(coef_table[, "t value"]), lower.tail = FALSE) * 2
coef_table <- cbind(coef_table, "p value" = p_values)
cat("\nCoefficient table with p-values:\n")
printCoefmat(coef_table, digits = 3)
```
```
Cumulative Odds Ratios:
              age treatmentStandard 
        1.0474676         0.2405862 

Coefficient table with p-values:
                     Value Std. Error  t value p value
age                0.04638    0.00827  5.60802    0.00
treatmentStandard -1.42468    0.23567 -6.04518    0.00
Low|Medium         0.17682    0.41517  0.42590    0.67
Medium|High        1.84021    0.42937  4.28584    0.00
```
138.2.5 Testing the Proportional Odds Assumption
The proportional odds assumption states that the effect of each predictor is the same across all cumulative splits. It can be tested formally with the Brant test (Brant 1990), implemented in the brant package, or checked informally by comparing the ordinal model with separate binary logistic regressions fit at each split:
```r
# Informal check: compare with separate binary logistic regressions

# Split 1: Low vs. (Medium, High)
df_ord$split1 <- as.numeric(df_ord$satisfaction != "Low")
# Split 2: (Low, Medium) vs. High
df_ord$split2 <- as.numeric(df_ord$satisfaction == "High")

cat("=== Split 1: Low vs. (Medium + High) ===\n")
split1_model <- glm(split1 ~ age + treatment, family = binomial, data = df_ord)
print(round(coef(split1_model), 4))

cat("\n=== Split 2: (Low + Medium) vs. High ===\n")
split2_model <- glm(split2 ~ age + treatment, family = binomial, data = df_ord)
print(round(coef(split2_model), 4))

cat("\n=== Ordinal model (should be similar to both) ===\n")
print(round(coef(ord_model), 4))
cat("\nIf coefficients are similar across splits, proportional odds holds.\n")
```
```
=== Split 1: Low vs. (Medium + High) ===
      (Intercept)               age treatmentStandard 
          -0.2580            0.0512           -1.6257 

=== Split 2: (Low + Medium) vs. High ===
      (Intercept)               age treatmentStandard 
          -1.6819            0.0425           -1.3299 

=== Ordinal model (should be similar to both) ===
              age treatmentStandard 
           0.0464           -1.4247 

If coefficients are similar across splits, proportional odds holds.
```
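For a formal test, the brant package (assuming it is installed) applies the Brant test directly to a fitted polr object:

```r
# Brant test: a significant p-value for a predictor indicates that its
# effect differs across cutpoints (proportional odds violated for it)
library(brant)
brant(ord_model)  # ord_model is the polr fit from above
```

The output reports a chi-square test per predictor plus an omnibus test across all predictors.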
138.2.6 Predicted Probabilities
```r
# Predicted probabilities for specific profiles
new_patients <- data.frame(
  age = c(30, 50, 70),
  treatment = factor(c("New", "New", "New"))
)
cat("Predicted probabilities for new treatment at different ages:\n")
print(round(predict(ord_model, newdata = new_patients, type = "probs"), 3))
```
```
Predicted probabilities for new treatment at different ages:
    Low Medium  High
1 0.229  0.381 0.390
2 0.105  0.278 0.617
3 0.044  0.152 0.803
```
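These values can be reconstructed by hand from the printed estimates, which is a useful check on the cumulative-logit parameterization. The sketch below hard-codes the coefficients and intercepts from the summary output above:

```r
# P(Y <= j) = plogis(alpha_j - eta), where eta is the linear predictor.
# For a 30-year-old on the new treatment, the Standard dummy is 0:
eta <- 0.04638 * 30 + (-1.42468) * 0

p_low  <- plogis(0.1768 - eta)            # P(Low)    = P(Y <= Low)
p_med  <- plogis(1.8402 - eta) - p_low    # P(Medium)
p_high <- 1 - plogis(1.8402 - eta)        # P(High)   = P(Y > Medium)

round(c(Low = p_low, Medium = p_med, High = p_high), 3)
# Matches the first row above: 0.229, 0.381, 0.390
```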
138.2.7 Assumptions
Proportional odds: The effect of each predictor is constant across all cumulative splits. If violated, consider multinomial logistic regression or a generalized ordinal model (which relaxes proportional odds for selected predictors while preserving ordering).
Independence of observations: Each observation is independent of the others
No perfect multicollinearity (identification): Predictors cannot be exact linear combinations of each other
Adequate sample size: Sufficient observations in each category
High (imperfect) multicollinearity is a practical estimation concern because it inflates standard errors and can destabilize coefficient estimates.
138.3 Choosing Between Multinomial and Ordinal
Table 138.1: Multinomial vs. Ordinal logistic regression
| Criterion            | Multinomial                     | Ordinal                         |
|----------------------|---------------------------------|---------------------------------|
| Category ordering    | Unordered                       | Ordered                         |
| Number of parameters | \((J-1) \times (k+1)\)          | \((J-1) + k\)                   |
| Power                | Less powerful (more parameters) | More powerful (fewer parameters)|
| Key assumption       | IIA                             | Proportional odds               |
| Example              | Transport mode                  | Satisfaction rating             |
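As a concrete check, apply these counts to this chapter's examples. The multinomial model has \(J = 3\) categories and \(k = 3\) predictor terms (two SES dummies plus write), giving
\[
(J - 1)(k + 1) = 2 \times 4 = 8 \text{ parameters},
\]
consistent with its output (\(\text{AIC} - \text{deviance} = 550.343 - 534.343 = 16 = 2 \times 8\)). The ordinal model has \(k = 2\) predictor terms (age and the treatment dummy), giving \((J - 1) + k = 2 + 2 = 4\) parameters (\(576.081 - 568.081 = 8 = 2 \times 4\)).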
Decision rule:
If the outcome categories have no natural ordering → Multinomial
If the outcome categories have a natural ordering → First fit Ordinal
If the proportional odds assumption is violated → Fall back to Multinomial or use a generalized ordinal model
As a flexible alternative, conditional inference trees (Chapter 140) can handle both unordered and ordered categorical outcomes without requiring assumptions about proportional odds or independence of irrelevant alternatives.
138.4 Task
Using a dataset of your choice, fit a multinomial logistic regression model with at least two predictors. Interpret the relative risk ratios for each category.
Using ordered satisfaction data (or simulated data), fit an ordinal logistic regression model. Compute predicted probabilities for different predictor values and interpret the results.
Test the proportional odds assumption by comparing the ordinal model coefficients with those from separate binary logistic regressions. Does the assumption hold?
Compare the predictions from a multinomial logistic regression model with those from a conditional inference tree (Chapter 140) on the same data. Discuss the trade-offs.
Brant, Rollin. 1990. “Assessing Proportionality in the Proportional Odds Model for Ordinal Logistic Regression.” Biometrics 46 (4): 1171–78. https://doi.org/10.2307/2532457.