
132  Conditional Inference Trees

132.1 Definition

A conditional inference tree is a non-parametric classification and regression method that uses a statistical testing framework to select variables and determine split points. Unlike traditional decision trees (CART; Breiman et al. 1984), conditional inference trees use permutation tests to evaluate the association between each predictor and the response variable, avoiding the variable selection bias that affects greedy, exhaustive-search algorithms.

The method was introduced by Hothorn, Hornik, and Zeileis (2006) and is implemented in the party package in R via the ctree() function.

132.2 Algorithm

The algorithm proceeds as follows:

  1. Test for independence: For each predictor variable, test the null hypothesis of independence between the predictor and the response using permutation tests based on the conditional distribution of linear statistics.

  2. Variable selection: Select the predictor with the strongest association to the response (smallest p-value). If the smallest p-value is above the significance threshold, stop splitting.

  3. Split point selection: For the selected predictor, find the split point that maximizes a two-sample statistic.

  4. Recursion: Repeat the process for each child node until a stopping criterion is met.

132.3 Statistical Testing Framework

The key difference from traditional decision trees is the use of conditional inference. For each predictor \(X_j\), the algorithm computes a linear test statistic that combines a transformation of the predictor with an influence function of the response:

\[ T_j = \text{vec}\left(\sum_{i=1}^n w_i g_j(X_{ji}) h(Y_i, (Y_1, \ldots, Y_n))^T\right) \]

where:

  • \(w_i\) are case weights
  • \(g_j\) is a transformation of the predictor (e.g., indicator functions for factors)
  • \(h\) is an influence function for the response

The p-value is computed from the permutation distribution of this statistic, conditional on the observed data. For example, with unit weights, a numeric predictor (\(g_j\) the identity), and a binary response with \(h\) an indicator of one class, \(T_j\) reduces to the sum of the predictor values within that class, i.e. a two-sample statistic.
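
The following minimal R sketch illustrates this idea (illustrative only, not the party internals): it approximates a permutation p-value for each predictor's association with a binary response using a two-sample mean statistic, then selects the predictor with the smallest p-value, as in the variable-selection step of the algorithm. The data and the perm_pvalue() helper are invented for the example.

# Illustrative permutation test for variable selection (not party internals)
set.seed(1)
n <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- factor(ifelse(x1 + rnorm(n) > 0, "Yes", "No"))  # response depends on x1 only

perm_pvalue <- function(x, y, B = 999) {
  # observed two-sample statistic: absolute difference of class means
  obs <- abs(mean(x[y == "Yes"]) - mean(x[y == "No"]))
  # permutation distribution: recompute the statistic under shuffled responses
  perm <- replicate(B, {
    yp <- sample(y)
    abs(mean(x[yp == "Yes"]) - mean(x[yp == "No"]))
  })
  (1 + sum(perm >= obs)) / (B + 1)  # permutation p-value
}

pvals <- c(x1 = perm_pvalue(x1, y), x2 = perm_pvalue(x2, y))
pvals                     # x1 should have the much smaller p-value
names(which.min(pvals))   # selected splitting variable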

132.4 Control Parameters

The ctree_control() function specifies the stopping criteria:

Table 132.1: Control parameters for conditional inference trees
Parameter     Description                                          Default
------------  ---------------------------------------------------  -------
mincriterion  Value of 1 - p-value that must be exceeded to split  0.95
minsplit      Minimum observations for attempting a split          20
minbucket     Minimum observations in a terminal node              7
maxdepth      Maximum tree depth                                   Inf

A higher mincriterion requires stronger evidence to split (more conservative tree). Setting mincriterion = 0.95 corresponds to a significance level of \(\alpha = 0.05\).
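
For example, a stricter configuration (the parameter values here are illustrative) can be built with ctree_control() and passed to ctree() via the controls argument:

library(party)

# Require 1 - p-value > 0.99 (alpha = 0.01), at least 40 observations to
# attempt a split, and at least 15 observations per terminal node.
strict_ctrl <- ctree_control(mincriterion = 0.99, minsplit = 40, minbucket = 15)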

132.5 Comparison with CART

Table 132.2: Comparison of CART and Conditional Inference Trees
Aspect              CART                                      Conditional Inference Tree
------------------  ----------------------------------------  ---------------------------------
Variable selection  Greedy (maximizes impurity reduction)     Statistical testing
Selection bias      Favors variables with many split points   Unbiased
Pruning             Required (post-hoc)                       Built-in via significance testing
Split criterion     Gini, entropy, or variance                Permutation-test p-values
Interpretability    High                                      High
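
The contrast can be demonstrated directly in R. A minimal sketch, assuming the rpart package (which implements CART) is installed, using the built-in iris data:

library(party)
library(rpart)

# Same data, two algorithms: CART grows greedily and relies on post-hoc
# pruning, while ctree() stops when no split passes the significance test.
cart_fit  <- rpart(Species ~ ., data = iris)
ctree_fit <- ctree(Species ~ ., data = iris)

printcp(cart_fit)  # cost-complexity table used for pruning CART
print(ctree_fit)   # size already governed by mincriterion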

132.6 R Module

132.6.1 Public website

Conditional Inference Trees are available on the public website:

  • https://compute.wessa.net/rwasp_ctree.wasp

132.6.2 RFC

The Conditional Inference Tree module is available in RFC under the menu “Models / Conditional Inference Tree”.

An interactive model-building application that includes conditional inference trees alongside other classification methods (logistic regression, naive Bayes) is available under “Models / Manual Model Building”. This application allows users to compare model performance using ROC curves (Chapter 52) and confusion matrices (Chapter 51).

132.6.3 R Code

The following code demonstrates fitting a conditional inference tree for classification:

library(party)

# Create example data: customer churn prediction
set.seed(101)
n <- 400

# Predictors with clear effects on churn
months_subscribed <- sample(1:48, n, replace = TRUE)
monthly_charges <- runif(n, 20, 120)
support_calls <- rpois(n, lambda = 2)
contract_type <- factor(sample(c("Monthly", "Annual", "Two-Year"), n,
                               replace = TRUE, prob = c(0.5, 0.3, 0.2)))

# Churn probability: higher with short subscription, high charges, many support calls, monthly contract
log_odds <- -2 +
  ifelse(months_subscribed < 12, 1.5, ifelse(months_subscribed < 24, 0.5, -0.5)) +
  ifelse(monthly_charges > 80, 1.2, ifelse(monthly_charges > 50, 0.3, -0.5)) +
  0.3 * support_calls +
  ifelse(contract_type == "Monthly", 1, ifelse(contract_type == "Annual", 0, -1))

churn <- factor(ifelse(runif(n) < plogis(log_odds), "Yes", "No"))

df <- data.frame(months_subscribed = months_subscribed,
                 monthly_charges = monthly_charges,
                 support_calls = support_calls,
                 contract_type = contract_type,
                 churn = churn)

# Fit conditional inference tree
tree <- ctree(churn ~ months_subscribed + monthly_charges + support_calls + contract_type,
              data = df,
              controls = ctree_control(mincriterion = 0.95, minsplit = 20))
print(tree)

     Conditional inference tree with 7 terminal nodes

Response:  churn 
Inputs:  months_subscribed, monthly_charges, support_calls, contract_type 
Number of observations:  400 

1) months_subscribed <= 16; criterion = 1, statistic = 31.802
  2) monthly_charges <= 51.97214; criterion = 0.998, statistic = 11.857
    3)*  weights = 50 
  2) monthly_charges > 51.97214
    4)*  weights = 91 
1) months_subscribed > 16
  5) monthly_charges <= 97.80051; criterion = 1, statistic = 25.207
    6) contract_type == {Monthly}; criterion = 0.996, statistic = 14.079
      7) support_calls <= 1; criterion = 0.992, statistic = 9.546
        8)*  weights = 39 
      7) support_calls > 1
        9)*  weights = 56 
    6) contract_type == {Annual, Two-Year}
      10)*  weights = 109 
  5) monthly_charges > 97.80051
    11) contract_type == {Two-Year}; criterion = 0.999, statistic = 15.85
      12)*  weights = 10 
    11) contract_type == {Annual, Monthly}
      13)*  weights = 45 

The tree can be visualized:

plot(tree)
Figure 132.1: Conditional inference tree for classification

132.7 Predictions

Predictions from a conditional inference tree include both class predictions and class probabilities:

# Class predictions
head(predict(tree))

# Class probabilities
probs <- do.call(rbind, treeresponse(tree))
colnames(probs) <- levels(df$churn)
head(probs)
[1] Yes No  No  No  No  No 
Levels: No Yes
            No       Yes
[1,] 0.2857143 0.7142857
[2,] 0.8990826 0.1009174
[3,] 0.8974359 0.1025641
[4,] 0.8990826 0.1009174
[5,] 0.8990826 0.1009174
[6,] 0.8990826 0.1009174

The predicted probabilities can be used for ROC analysis (Chapter 52) to evaluate classifier performance and determine optimal classification thresholds.

132.8 Example: Medical Diagnosis

A hospital wants to predict disease presence based on patient characteristics:

# Simulated medical data
set.seed(303)
n <- 600

# Patient characteristics
age <- sample(25:75, n, replace = TRUE)
bmi <- rnorm(n, mean = 27, sd = 6)
blood_pressure <- rnorm(n, mean = 125, sd = 20)
glucose <- rnorm(n, mean = 100, sd = 30)
smoking <- factor(sample(c("Never", "Former", "Current"), n,
                         replace = TRUE, prob = c(0.4, 0.35, 0.25)))

# Disease probability with strong, clear effects
log_odds <- -3 +
  ifelse(age > 55, 2.0, ifelse(age > 45, 0.8, -0.8)) +
  ifelse(bmi > 30, 1.5, ifelse(bmi > 27, 0.5, -0.5)) +
  ifelse(blood_pressure > 140, 1.5, ifelse(blood_pressure > 125, 0.5, -0.3)) +
  ifelse(glucose > 125, 1.2, ifelse(glucose > 100, 0.3, -0.3)) +
  ifelse(smoking == "Current", 1.8, ifelse(smoking == "Former", 0.6, -0.3))

disease <- factor(ifelse(runif(n) < plogis(log_odds), "Yes", "No"))

medical_data <- data.frame(age = age, bmi = bmi, blood_pressure = blood_pressure,
                           glucose = glucose, smoking = smoking,
                           disease = disease)

# Split into train and test
train_idx <- sample(1:n, 0.7 * n)
train <- medical_data[train_idx, ]
test <- medical_data[-train_idx, ]

# Fit tree
disease_tree <- ctree(disease ~ age + bmi + blood_pressure + glucose + smoking,
                      data = train,
                      controls = ctree_control(mincriterion = 0.95, minsplit = 20))

# Predictions on test set
pred_class <- predict(disease_tree, newdata = test)
pred_probs <- do.call(rbind, treeresponse(disease_tree, newdata = test))
colnames(pred_probs) <- levels(train$disease)

# Confusion matrix
table(Actual = test$disease, Predicted = pred_class)

# Accuracy
mean(pred_class == test$disease)
      Predicted
Actual  No Yes
   No  103   7
   Yes  38  32
[1] 0.75
plot(disease_tree)
Figure 132.2: Conditional inference tree for disease prediction

132.9 ROC Analysis Integration

The predicted probabilities from a conditional inference tree can be used to construct ROC curves (Chapter 52) for evaluating and comparing classifier performance:

# Get probability of disease (positive class)
prob_disease <- pred_probs[, "Yes"]

# Manual ROC curve computation
thresholds <- sort(unique(c(0, prob_disease, 1)))
actual <- as.integer(test$disease == "Yes")

roc_points <- data.frame(threshold = numeric(), FPR = numeric(), TPR = numeric())

for (t in thresholds) {
  predicted <- as.integer(prob_disease >= t)
  TP <- sum(predicted == 1 & actual == 1)
  TN <- sum(predicted == 0 & actual == 0)
  FP <- sum(predicted == 1 & actual == 0)
  FN <- sum(predicted == 0 & actual == 1)
  TPR <- ifelse((TP + FN) > 0, TP / (TP + FN), 0)
  FPR <- ifelse((TN + FP) > 0, FP / (TN + FP), 0)
  roc_points <- rbind(roc_points, data.frame(threshold = t, FPR = FPR, TPR = TPR))
}

roc_points <- roc_points[order(roc_points$FPR), ]

# AUC computation (trapezoidal rule)
auc <- 0
for (i in 1:(nrow(roc_points) - 1)) {
  auc <- auc + (roc_points$FPR[i+1] - roc_points$FPR[i]) *
    (roc_points$TPR[i] + roc_points$TPR[i+1]) / 2
}

cat("AUC:", round(auc, 3), "\n")
AUC: 0.81 
plot(roc_points$FPR, roc_points$TPR, type = "l", lwd = 2, col = "blue",
     xlab = "False Positive Rate (1 - Specificity)",
     ylab = "True Positive Rate (Sensitivity)",
     main = "ROC Curve - Conditional Inference Tree")
abline(0, 1, lty = 2, col = "gray")
legend("bottomright", legend = paste("AUC =", round(abs(auc), 3)),
       col = "blue", lwd = 2)
Figure 132.3: ROC curve for conditional inference tree classifier

132.10 Regression Trees

Conditional inference trees can also be used for regression (continuous response):

# Regression example
set.seed(42)
n <- 200
x1 <- runif(n, 0, 10)
x2 <- runif(n, 0, 10)
y <- 2 + 3 * (x1 > 5) + 2 * (x2 > 5) + rnorm(n, sd = 0.5)

reg_data <- data.frame(x1 = x1, x2 = x2, y = y)
reg_tree <- ctree(y ~ x1 + x2, data = reg_data)

# Predictions
pred <- predict(reg_tree)
cat("RMSE:", sqrt(mean((y - pred)^2)), "\n")
cat("R-squared:", cor(y, pred)^2, "\n")
RMSE: 0.4548386 
R-squared: 0.9394194 
plot(reg_tree)
Figure 132.4: Conditional inference tree for regression

132.11 Variable Importance

While conditional inference trees do not have a built-in variable importance measure like random forests, the variables that appear higher in the tree (closer to the root) are generally more important. The number of times a variable is used for splitting across the tree can also indicate importance.
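
A simple model-agnostic alternative is permutation importance: shuffle one predictor at a time in a test set and record the drop in accuracy. A minimal sketch, assuming the objects disease_tree and test from the medical example (Section 132.8) are still in the workspace:

set.seed(1)

# Baseline accuracy of the fitted tree on the test set
baseline <- mean(predict(disease_tree, newdata = test) == test$disease)

# Accuracy drop after independently shuffling each predictor
predictors <- setdiff(names(test), "disease")
importance <- sapply(predictors, function(v) {
  shuffled <- test
  shuffled[[v]] <- sample(shuffled[[v]])
  baseline - mean(predict(disease_tree, newdata = shuffled) == test$disease)
})
sort(importance, decreasing = TRUE)  # larger drop = more important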

132.12 Pros & Cons

132.12.1 Pros

Conditional inference trees have the following advantages:

  • Unbiased variable selection due to statistical testing framework.
  • No need for post-hoc pruning; tree size is controlled by significance threshold.
  • Handles mixed predictor types (numeric and categorical) naturally.
  • Provides interpretable decision rules.
  • Less prone to overfitting than CART when an appropriate mincriterion is used.

132.12.2 Cons

Conditional inference trees have the following disadvantages:

  • Computationally more intensive than CART due to permutation tests.
  • May produce smaller trees than CART, potentially underfitting in some cases.
  • The permutation test framework assumes exchangeability under the null hypothesis.
  • Variable importance is less straightforward than in ensemble methods.
  • Cannot extrapolate beyond the range of training data.

132.13 Task

  1. Using the iris dataset, fit a conditional inference tree to classify species based on sepal and petal measurements. Visualize the tree and interpret the decision rules.

  2. Compare the performance of a conditional inference tree with logistic regression (Chapter 128) on a binary classification problem. Use ROC curves and AUC (Section 52.4) to evaluate both models.

  3. Experiment with different values of mincriterion (0.90, 0.95, 0.99) and observe how tree complexity changes. Discuss the trade-off between tree size and prediction accuracy.

  4. Fit a regression tree to predict mpg in the mtcars dataset. Compare the predictions with those from a linear regression model (Chapter 126).

Breiman, Leo, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regression Trees. Belmont, CA: Wadsworth International Group.
Hothorn, T., K. Hornik, and A. Zeileis. 2006. “Unbiased Recursive Partitioning: A Conditional Inference Framework.” Journal of Computational and Graphical Statistics 15: 651–74.