138  Multinomial and Ordinal Logistic Regression

Logistic regression (Chapter 136) handles binary outcomes (two categories). When the outcome variable has three or more categories, we need extensions: multinomial logistic regression for unordered categories and ordinal logistic regression for ordered categories.

138.1 Multinomial Logistic Regression

138.1.1 When to Use

Multinomial logistic regression is used when the outcome variable has three or more unordered categories. Examples include:

  • Mode of transportation (car, bus, bicycle, walking)
  • Political party preference (Democrat, Republican, Independent)
  • Disease type (Type A, Type B, Type C)
  • Job sector (industry, services, government, education)

138.1.2 The Model

Multinomial logistic regression generalizes binary logistic regression by modeling the log-odds of each category relative to a reference category. For an outcome with \(J\) categories, the model estimates \(J - 1\) sets of coefficients.

If category 1 is the reference category, then for each category \(j = 2, ..., J\):

\[ \log\left(\frac{P(Y = j | X)}{P(Y = 1 | X)}\right) = \beta_{j0} + \beta_{j1} X_1 + \beta_{j2} X_2 + ... + \beta_{jk} X_k \]

The probability of each category is then:

\[ P(Y = j | X) = \frac{e^{\beta_{j0} + \beta_{j1} X_1 + ... + \beta_{jk} X_k}}{1 + \sum_{l=2}^{J} e^{\beta_{l0} + \beta_{l1} X_1 + ... + \beta_{lk} X_k}} \]

and the probability of the reference category is:

\[ P(Y = 1 | X) = \frac{1}{1 + \sum_{l=2}^{J} e^{\beta_{l0} + \beta_{l1} X_1 + ... + \beta_{lk} X_k}} \]
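To make the mechanics concrete, the following minimal R sketch turns a set of linear predictors into category probabilities (a softmax over the \(J\) linear predictors, with the reference category fixed at 0). All coefficient and predictor values below are made up for illustration; they are not fitted estimates:

# Hypothetical coefficients for J = 3 categories and one predictor x;
# category 1 is the reference, so its linear predictor is fixed at 0.
b2 <- c(-0.16, 0.014)            # illustrative (intercept, slope) for category 2
b3 <- c(0.75, -0.035)            # illustrative (intercept, slope) for category 3
x <- 50

eta <- c(0,                      # reference category
         b2[1] + b2[2] * x,      # log-odds of category 2 vs. category 1
         b3[1] + b3[2] * x)      # log-odds of category 3 vs. category 1
p <- exp(eta) / sum(exp(eta))    # the three formulas above in one step
round(p, 3)                      # probabilities sum to 1

Note that \(e^0 = 1\) for the reference category reproduces the "1 +" term in the denominators above.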

138.1.3 Interpretation of Coefficients

The coefficients \(\beta_{jk}\) represent the change in the log-odds of category \(j\) (relative to the reference category) for a one-unit increase in \(X_k\), holding other predictors constant.

Exponentiating gives the relative risk ratio (RRR):

\[ \text{RRR}_{jk} = e^{\beta_{jk}} \]

  • \(\text{RRR} > 1\): the predictor increases the likelihood of category \(j\) relative to the reference
  • \(\text{RRR} = 1\): no effect
  • \(\text{RRR} < 1\): the predictor decreases the likelihood of category \(j\) relative to the reference

Note: in multinomial-logit software this “RRR” terminology is standard, but it is a ratio of relative category probabilities (not a cohort-study risk ratio).

138.1.4 R Code

Multinomial logistic regression requires the nnet package:

library(nnet)

# Example: Predicting program choice (General, Academic, Vocational)
# based on socioeconomic status and writing score
set.seed(42)
n <- 300

# Simulate data
ses <- sample(c("low", "middle", "high"), n, replace = TRUE, prob = c(0.3, 0.5, 0.2))
write_score <- rnorm(n, mean = 52, sd = 10)

# True probabilities depend on SES and writing score
prog_probs <- sapply(1:n, function(i) {
  base_gen <- 0
  base_acad <- -1 + 0.5 * (ses[i] == "high") + 0.3 * (ses[i] == "middle") + 0.03 * write_score[i]
  base_voc <- 0.5 - 0.3 * (ses[i] == "high") - 0.02 * write_score[i]
  probs <- exp(c(base_gen, base_acad, base_voc))
  probs / sum(probs)
})

program <- apply(prog_probs, 2, function(p) sample(c("General", "Academic", "Vocational"),
                                                     1, prob = p))

df <- data.frame(
  program = factor(program, levels = c("General", "Academic", "Vocational")),
  ses = factor(ses, levels = c("low", "middle", "high")),
  write = write_score
)

cat("Distribution of program choice:\n")
print(table(df$program))
cat("\n")

# Fit multinomial logistic regression (General is reference)
multi_model <- multinom(program ~ ses + write, data = df, trace = FALSE)
summary(multi_model)
Distribution of program choice:

   General   Academic Vocational 
        73        186         41 

Call:
multinom(formula = program ~ ses + write, data = df, trace = FALSE)

Coefficients:
           (Intercept) sesmiddle   seshigh       write
Academic    -0.1568732 0.3973003 0.9481817  0.01410383
Vocational   0.7455979 0.5936469 0.7612506 -0.03492423

Std. Errors:
           (Intercept) sesmiddle   seshigh      write
Academic     0.8060183 0.3065083 0.4361388 0.01513903
Vocational   1.1142991 0.4474315 0.6197140 0.02172068

Residual Deviance: 534.343 
AIC: 550.343 
# Relative risk ratios (exponentiated coefficients)
cat("\nRelative Risk Ratios:\n")
print(exp(coef(multi_model)))

# Significance tests (Wald z-tests)
z_values <- summary(multi_model)$coefficients / summary(multi_model)$standard.errors
p_values <- 2 * (1 - pnorm(abs(z_values)))
cat("\np-values:\n")
print(round(p_values, 4))

Relative Risk Ratios:
           (Intercept) sesmiddle  seshigh     write
Academic     0.8548124  1.487803 2.581012 1.0142038
Vocational   2.1077012  1.810579 2.140952 0.9656786

p-values:
           (Intercept) sesmiddle seshigh  write
Academic        0.8457    0.1949  0.0297 0.3515
Vocational      0.5034    0.1846  0.2193 0.1079
# Overall fit checks
cat("\nModel fit statistics:\n")
cat("AIC:", AIC(multi_model), "\n")
cat("Log-likelihood:", as.numeric(logLik(multi_model)), "\n")

# Likelihood ratio test against intercept-only model
multi_null <- multinom(program ~ 1, data = df, trace = FALSE)
lr_stat <- 2 * (as.numeric(logLik(multi_model)) - as.numeric(logLik(multi_null)))
df_diff <- attr(logLik(multi_model), "df") - attr(logLik(multi_null), "df")
p_lr <- 1 - pchisq(lr_stat, df = df_diff)
cat("LRT statistic:", round(lr_stat, 3), "df:", df_diff, "p-value:", signif(p_lr, 4), "\n")

Model fit statistics:
AIC: 550.343 
Log-likelihood: -267.1715 
LRT statistic: 13.029 df: 6 p-value: 0.04258 

138.1.5 Worked Example: Predicted Probabilities

# Predict probabilities for a specific profile
new_data <- data.frame(
  ses = factor(c("low", "middle", "high"), levels = c("low", "middle", "high")),
  write = c(50, 50, 50)
)

cat("Predicted probabilities (writing score = 50):\n")
print(round(predict(multi_model, newdata = new_data, type = "probs"), 3))
Predicted probabilities (writing score = 50):
  General Academic Vocational
1   0.323    0.559      0.119
2   0.236    0.607      0.157
3   0.160    0.714      0.126

138.1.6 Assumptions

  • Independence of observations: Each observation is independent of the others
  • Independence of irrelevant alternatives (IIA): The odds of choosing between any two categories are independent of the other available categories. This is a strong assumption – for example, it implies that adding a new bus route should not change the relative odds of choosing a car vs. bicycle.
  • No perfect multicollinearity (identification): Predictors cannot be exact linear combinations of each other
  • Adequate sample size: At least 10 observations per predictor per category is a common rule of thumb

High (imperfect) multicollinearity is a practical estimation concern because it inflates standard errors and can destabilize coefficient estimates.

138.2 Ordinal Logistic Regression

138.2.1 When to Use

Ordinal logistic regression is used when the outcome variable has three or more ordered categories. Examples include:

  • Satisfaction rating (low, medium, high)
  • Disease severity (mild, moderate, severe)
  • Agreement scale (strongly disagree, disagree, neutral, agree, strongly agree)
  • Pain intensity (none, mild, moderate, severe)

Unlike multinomial logistic regression, ordinal logistic regression takes advantage of the natural ordering, leading to a more parsimonious and powerful model.

138.2.2 The Proportional Odds Model

The most common ordinal logistic regression model is the proportional odds (or cumulative logit) model (McCullagh 1980). For an ordered outcome with \(J\) categories, it models the cumulative probabilities:

\[ \log\left(\frac{P(Y \leq j | X)}{P(Y > j | X)}\right) = \alpha_j - (\beta_1 X_1 + \beta_2 X_2 + ... + \beta_k X_k) \]

for \(j = 1, 2, ..., J-1\).

Key features:

  • There are \(J - 1\) intercepts (\(\alpha_j\)), one for each cumulative split
  • There is one set of regression coefficients \(\beta_k\) shared across all splits (the proportional odds assumption)
  • The negative sign is a convention ensuring that positive \(\beta\) values indicate higher categories
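The next minimal R sketch shows how the cumulative logits map to category probabilities; the cutpoints and slopes are made-up illustrative values, not fitted estimates:

# Hypothetical proportional odds model with J = 3 ordered categories
# (two cutpoints) and two predictors; all numbers are illustrative only.
alpha <- c(0.2, 1.8)             # cutpoints alpha_1 < alpha_2
beta <- c(0.05, -1.4)            # one shared set of slopes
x <- c(50, 1)                    # one observation on the two predictors

eta <- sum(beta * x)             # common linear predictor
cum_p <- plogis(alpha - eta)     # P(Y <= 1) and P(Y <= 2)
p <- diff(c(0, cum_p, 1))        # P(Y = 1), P(Y = 2), P(Y = 3)
round(p, 3)

Because the same linear predictor enters every cutpoint, changing a predictor shifts all cumulative probabilities in the same direction; that is the proportional odds restriction at work.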

138.2.3 Interpretation of Coefficients

The coefficients \(\beta_k\) represent the change in the log-odds of being in a higher category (above any given cutpoint \(j\)) for a one-unit increase in \(X_k\), holding other predictors constant.

Exponentiating gives the cumulative odds ratio:

\[ \text{OR}_k = e^{\beta_k} \]

  • \(\text{OR} > 1\): the predictor increases the odds of being in a higher category
  • \(\text{OR} = 1\): no effect
  • \(\text{OR} < 1\): the predictor decreases the odds of being in a higher category

The proportional odds assumption means this odds ratio is the same regardless of which cumulative split we consider.

138.2.4 R Code

Ordinal logistic regression uses the polr function from the MASS package:

library(MASS)

# Example: Patient satisfaction as a function of age and treatment type
set.seed(42)
n <- 300

age <- rnorm(n, mean = 50, sd = 15)
treatment <- sample(c("Standard", "New"), n, replace = TRUE)

# Latent variable model
latent <- -1 + 0.02 * age + 0.8 * (treatment == "New") + rnorm(n)
satisfaction <- cut(latent, breaks = c(-Inf, -0.5, 0.5, Inf),
                    labels = c("Low", "Medium", "High"), ordered_result = TRUE)

df_ord <- data.frame(satisfaction = satisfaction, age = age, treatment = factor(treatment))

cat("Distribution of satisfaction:\n")
print(table(df_ord$satisfaction))
cat("\n")

# Fit ordinal logistic regression
ord_model <- polr(satisfaction ~ age + treatment, data = df_ord, Hess = TRUE)
summary(ord_model)
Distribution of satisfaction:

   Low Medium   High 
    70     94    136 

Call:
polr(formula = satisfaction ~ age + treatment, data = df_ord, 
    Hess = TRUE)

Coefficients:
                     Value Std. Error t value
age                0.04638   0.008269   5.608
treatmentStandard -1.42468   0.235671  -6.045

Intercepts:
            Value   Std. Error t value
Low|Medium   0.1768  0.4152     0.4259
Medium|High  1.8402  0.4294     4.2858

Residual Deviance: 568.0806 
AIC: 576.0806 
# Odds ratios
cat("\nCumulative Odds Ratios:\n")
print(exp(coef(ord_model)))

# p-values (not shown by default in polr)
coef_table <- coef(summary(ord_model))
p_values <- pnorm(abs(coef_table[, "t value"]), lower.tail = FALSE) * 2
coef_table <- cbind(coef_table, "p value" = p_values)
cat("\nCoefficient table with p-values:\n")
printCoefmat(coef_table, digits = 3)

Cumulative Odds Ratios:
              age treatmentStandard 
        1.0474676         0.2405862 

Coefficient table with p-values:
                     Value Std. Error  t value p value
age                0.04638    0.00827  5.60802    0.00
treatmentStandard -1.42468    0.23567 -6.04518    0.00
Low|Medium         0.17682    0.41517  0.42590    0.67
Medium|High        1.84021    0.42937  4.28584    0.00

138.2.5 Testing the Proportional Odds Assumption

The proportional odds assumption states that the effect of each predictor is the same across all cumulative splits. It can be tested formally with the Brant test (Brant 1990), implemented in the brant package, or checked informally by fitting a separate binary logistic regression at each cumulative split and comparing the coefficients (a sketch of the formal test follows the informal check below):

# Informal check: compare with separate binary logistic regressions
# Split 1: Low vs. (Medium, High)
df_ord$split1 <- as.numeric(df_ord$satisfaction != "Low")
# Split 2: (Low, Medium) vs. High
df_ord$split2 <- as.numeric(df_ord$satisfaction == "High")

cat("=== Split 1: Low vs. (Medium + High) ===\n")
split1_model <- glm(split1 ~ age + treatment, family = binomial, data = df_ord)
print(round(coef(split1_model), 4))

cat("\n=== Split 2: (Low + Medium) vs. High ===\n")
split2_model <- glm(split2 ~ age + treatment, family = binomial, data = df_ord)
print(round(coef(split2_model), 4))

cat("\n=== Ordinal model (should be similar to both) ===\n")
print(round(coef(ord_model), 4))
cat("\nIf coefficients are similar across splits, proportional odds holds.\n")
=== Split 1: Low vs. (Medium + High) ===
      (Intercept)               age treatmentStandard 
          -0.2580            0.0512           -1.6257 

=== Split 2: (Low + Medium) vs. High ===
      (Intercept)               age treatmentStandard 
          -1.6819            0.0425           -1.3299 

=== Ordinal model (should be similar to both) ===
              age treatmentStandard 
           0.0464           -1.4247 

If coefficients are similar across splits, proportional odds holds.
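For a formal check, the brant package implements the Brant test directly on a fitted polr model. A minimal sketch, assuming the package is installed:

# Brant test of proportional odds, per predictor and overall
# (assumes install.packages("brant") has been run):
library(brant)
brant(ord_model)   # small p-values flag predictors that violate the assumption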

138.2.6 Predicted Probabilities

# Predicted probabilities for specific profiles
new_patients <- data.frame(
  age = c(30, 50, 70),
  treatment = factor(c("New", "New", "New"))
)

cat("Predicted probabilities for new treatment at different ages:\n")
print(round(predict(ord_model, newdata = new_patients, type = "probs"), 3))
Predicted probabilities for new treatment at different ages:
    Low Medium  High
1 0.229  0.381 0.390
2 0.105  0.278 0.617
3 0.044  0.152 0.803

138.2.7 Assumptions

  • Proportional odds: The effect of each predictor is constant across all cumulative splits. If violated, consider multinomial logistic regression or a generalized ordinal model (which relaxes proportional odds for selected predictors while preserving ordering).
  • Independence of observations: Each observation is independent of the others
  • No perfect multicollinearity (identification): Predictors cannot be exact linear combinations of each other
  • Adequate sample size: Sufficient observations in each category

High (imperfect) multicollinearity is a practical estimation concern because it inflates standard errors and can destabilize coefficient estimates.

138.3 Choosing Between Multinomial and Ordinal

Table 138.1: Multinomial vs. Ordinal logistic regression

  Criterion             | Multinomial                     | Ordinal
  ----------------------|---------------------------------|--------------------------------
  Category ordering     | Unordered                       | Ordered
  Number of parameters  | \((J-1) \times (k+1)\)          | \((J-1) + k\)
  Power                 | Less powerful (more parameters) | More powerful (fewer parameters)
  Key assumption        | IIA                             | Proportional odds
  Example               | Transport mode                  | Satisfaction rating

Decision rule:

  1. If the outcome categories have no natural ordering → Multinomial
  2. If the outcome categories have a natural ordering → First fit Ordinal
  3. If the proportional odds assumption is violated → Fall back to Multinomial or use a generalized ordinal model

As a flexible alternative, conditional inference trees (Chapter 140) can handle both unordered and ordered categorical outcomes without requiring assumptions about proportional odds or independence of irrelevant alternatives.
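When the outcome is ordered and both models are plausible, an information-criterion comparison can support the decision rule above. A minimal sketch using the satisfaction data from Section 138.2 (it assumes df_ord and ord_model are still in scope):

# Fit the less restrictive multinomial model on the same ordered outcome
# and compare AIC; the ordinal model should win when proportional odds holds.
library(nnet)
multi_ord <- multinom(satisfaction ~ age + treatment, data = df_ord, trace = FALSE)
AIC(ord_model, multi_ord)

Because the two models use different link structures (cumulative vs. baseline-category logits), the AIC comparison is a heuristic rather than a nested likelihood-ratio test.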

138.4 Task

  1. Using a dataset of your choice, fit a multinomial logistic regression model with at least two predictors. Interpret the relative risk ratios for each category.

  2. Using ordered satisfaction data (or simulated data), fit an ordinal logistic regression model. Compute predicted probabilities for different predictor values and interpret the results.

  3. Test the proportional odds assumption by comparing the ordinal model coefficients with those from separate binary logistic regressions. Does the assumption hold?

  4. Compare the predictions from a multinomial logistic regression model with those from a conditional inference tree (Chapter 140) on the same data. Discuss the trade-offs.

Brant, Rollin. 1990. “Assessing Proportionality in the Proportional Odds Model for Ordinal Logistic Regression.” Biometrics 46 (4): 1171–78. https://doi.org/10.2307/2532457.
McCullagh, Peter. 1980. “Regression Models for Ordinal Data.” Journal of the Royal Statistical Society: Series B (Methodological) 42 (2): 109–27. https://doi.org/10.1111/j.2517-6161.1980.tb01109.x.