Table of contents

  • 47.1 Probability Density Function
  • 47.2 Purpose
  • 47.3 Distribution Function
  • 47.4 Moment Generating Function
  • 47.5 Expected Value
  • 47.6 Variance
  • 47.7 Median
  • 47.8 Mode
  • 47.9 Coefficient of Skewness
  • 47.10 Coefficient of Kurtosis
  • 47.11 Parameter Estimation
  • 47.12 R Module
    • 47.12.1 RFC
    • 47.12.2 Direct app link
    • 47.12.3 R Code
  • 47.13 Example
  • 47.14 Random Number Generator
  • 47.15 Property 1: Noncentrality and Power
  • 47.16 Property 2: Noncentrality Formulas for t-tests
  • 47.17 Property 3: Convergence to Normal
  • 47.18 Related Distributions 1: Central t as Special Case
  • 47.19 Related Distributions 2: Normal as Limiting Case
  • 47.20 Related Distributions 3: Chi-squared Connection
  • 47.21 Related Distributions 4: Link with the Noncentral F Distribution
  • 47.22 Related Distributions 5: Noncentral Chi-squared Connection

47  Noncentral t Distribution

The Noncentral t distribution is the key distribution behind statistical power analysis for t-tests. Whenever a researcher asks "How many observations do I need?" or "What is the probability that my study will detect a real effect?", the answer relies on the Noncentral t distribution.

Formally, a random variate \(X\), defined on the range \(-\infty < X < +\infty\), is said to have a Noncentral t distribution (i.e. \(X \sim \text{t}(n, \delta)\)) with degrees of freedom \(n > 0\) and noncentrality parameter \(\delta \in \mathbb{R}\).

Construction. If \(Z \sim \text{N}(\delta, 1)\) and \(V \sim \chi^2(n)\) are independent, then

\[ T = \frac{Z}{\sqrt{V/n}} \sim \text{t}(n, \delta) \]

When \(\delta = 0\), this reduces to the standard (central) Student t distribution (see Chapter 25). The noncentrality parameter \(\delta\) shifts the distribution away from zero, reflecting the presence of a true effect.

47.1 Probability Density Function

The PDF of the Noncentral t distribution involves a confluent hypergeometric function and does not have a simple closed form. It can be expressed as

\[ f(x) = \frac{n^{n/2} \, e^{-n\delta^2/(2(x^2+n))}}{\sqrt{\pi}\,\Gamma\!\left(\frac{n}{2}\right) (x^2+n)^{(n+1)/2}} \sum_{j=0}^{\infty} \frac{\Gamma\!\left(\frac{n+j+1}{2}\right)}{j!} \left(\frac{x\delta\sqrt{2}}{\sqrt{x^2+n}}\right)^j \]

In practice, R computes the density with dt(x, df, ncp).

The figure below shows the Noncentral t Probability Density Function for \(n = 10\) and several values of \(\delta\).

Code
par(mfrow = c(2, 2))
x <- seq(-6, 12, length = 1000)

plot(x, dt(x, df = 10, ncp = 0), type = "l", lwd = 2, col = "blue",
     xlab = "x", ylab = "f(x)", main = expression(paste(n == 10, ",  ", delta == 0)))

plot(x, dt(x, df = 10, ncp = 1), type = "l", lwd = 2, col = "blue",
     xlab = "x", ylab = "f(x)", main = expression(paste(n == 10, ",  ", delta == 1)))

plot(x, dt(x, df = 10, ncp = 2), type = "l", lwd = 2, col = "blue",
     xlab = "x", ylab = "f(x)", main = expression(paste(n == 10, ",  ", delta == 2)))

plot(x, dt(x, df = 10, ncp = 3), type = "l", lwd = 2, col = "blue",
     xlab = "x", ylab = "f(x)", main = expression(paste(n == 10, ",  ", delta == 3)))

par(mfrow = c(1, 1))
Figure 47.1: Noncentral t Probability Density Function (df = 10) for various noncentrality values

47.2 Purpose

The Noncentral t distribution is the theoretical backbone of power analysis for t-tests. Its primary applications include:

  • Sample size planning: determining how many observations are needed to detect a given effect size with a desired power level
  • Power analysis: computing the probability of rejecting the null hypothesis when a true effect of specified size exists
  • Clinical trial design: justifying sample sizes in protocols for regulatory submissions
  • Grant proposals: providing statistical justification for the planned study size
  • Understanding Type II error: quantifying the probability of failing to detect a real effect (\(\beta = 1 - \text{power}\))

Relation to the t-test. Under the null hypothesis (\(H_0: \mu = \mu_0\)), the t statistic follows a central t distribution (with \(\delta = 0\)). Under the alternative hypothesis (\(H_1: \mu = \mu_1 \neq \mu_0\)), the t statistic follows a Noncentral t distribution with a noncentrality parameter that depends on the true effect size and the sample size.
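This distinction can be checked by simulation. The sketch below uses illustrative values (a one-sample t-test with \(n = 20\) observations and true standardized effect \(d = 0.6\)) and compares the empirical two-sided rejection rate under \(H_1\) with the probability computed from the Noncentral t distribution.

```r
# Simulate one-sample t statistics under H1: X_i ~ N(d, 1), testing H0: mu = 0
set.seed(1)
n    <- 20      # sample size (illustrative)
d    <- 0.6     # true standardized effect size (illustrative)
reps <- 5000

tstat <- replicate(reps, {
  x <- rnorm(n, mean = d, sd = 1)
  sqrt(n) * mean(x) / sd(x)       # one-sample t statistic
})

# Under H1, T ~ t(n - 1, delta) with delta = sqrt(n) * d
delta  <- sqrt(n) * d
t_crit <- qt(0.975, df = n - 1)

res <- c(empirical   = mean(abs(tstat) > t_crit),
         theoretical = 1 - pt(t_crit, n - 1, ncp = delta) +
                           pt(-t_crit, n - 1, ncp = delta))
print(res)
```

The two rejection rates agree up to Monte Carlo error, which is exactly the claim: under \(H_1\), the t statistic is a Noncentral t variate.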

47.3 Distribution Function

There is no elementary closed form for the CDF. It is computed numerically by pt(x, df, ncp) in R.

The figure below shows the Noncentral t Distribution Function for \(n = 10\) and \(\delta = 2\).

Code
x <- seq(-6, 12, length = 1000)
plot(x, pt(x, df = 10, ncp = 2), type = "l", lwd = 2, col = "blue",
     xlab = "x", ylab = "F(x)", main = "Noncentral t Distribution Function",
     sub = expression(paste(n == 10, ",  ", delta == 2)))
Figure 47.2: Noncentral t Distribution Function (df = 10, delta = 2)

47.4 Moment Generating Function

The moment generating function of the Noncentral t distribution does not exist (as for the central t distribution, the polynomial tails prevent the defining integral from converging for any \(t \neq 0\)). Moments, when they exist, are given by the formulas in the sections below.

47.5 Expected Value

\[ \text{E}(X) = \delta \sqrt{\frac{n}{2}} \, \frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)} \quad \text{for } n > 1 \]

For large \(n\), \(\text{E}(X) \approx \delta\).

47.6 Variance

\[ \text{V}(X) = \frac{n(1 + \delta^2)}{n - 2} - \frac{n \delta^2}{2} \left[\frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n}{2}\right)}\right]^2 \quad \text{for } n > 2 \]
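Both formulas can be checked numerically by integrating against the density; a sketch for the illustrative values \(n = 10\), \(\delta = 2\):

```r
# Numerical check of the mean and variance formulas for t(n = 10, delta = 2)
n <- 10; delta <- 2

E_th <- delta * sqrt(n / 2) * gamma((n - 1) / 2) / gamma(n / 2)
V_th <- n * (1 + delta^2) / (n - 2) -
        n * delta^2 / 2 * (gamma((n - 1) / 2) / gamma(n / 2))^2

# The same moments obtained by integrating the density directly
E_num  <- integrate(function(x) x   * dt(x, df = n, ncp = delta), -Inf, Inf)$value
E2_num <- integrate(function(x) x^2 * dt(x, df = n, ncp = delta), -Inf, Inf)$value
V_num  <- E2_num - E_num^2

c(E_th = E_th, E_num = E_num, V_th = V_th, V_num = V_num)
```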

47.7 Median

There is no closed-form expression for the median. It is computed numerically via qt(0.5, df, ncp) in R:

# Median for t(n = 10, delta = 2)
qt(0.5, df = 10, ncp = 2)
[1] 2.053691

47.8 Mode

The mode of the Noncentral t distribution does not have a simple closed-form expression. It can be found numerically by maximizing the density function. For moderate to large \(n\) and \(\delta > 0\), the mode is approximately equal to \(\delta\).
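A minimal numeric search for the mode (illustrative values \(n = 10\), \(\delta = 2\)), using optimize() to maximize the density:

```r
# Find the mode of t(n = 10, delta = 2) by maximizing the density numerically
mode_fit <- optimize(function(x) dt(x, df = 10, ncp = 2),
                     interval = c(-5, 10), maximum = TRUE)
mode_fit$maximum   # near delta = 2 (slightly below it, due to the right skew)
```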

47.9 Coefficient of Skewness

The Noncentral t distribution is skewed. For \(\delta > 0\) the distribution is right-skewed, and for \(\delta < 0\) it is left-skewed. The skewness depends on both \(n\) and \(\delta\) and does not simplify to a compact formula. It can be computed numerically from the centered moments.

When \(\delta = 0\) (central case), the skewness is zero by symmetry. As \(n \to \infty\), the skewness approaches that of \(\text{N}(\delta, 1)\), which is zero.

47.10 Coefficient of Kurtosis

The kurtosis of the Noncentral t distribution depends on both \(n\) and \(\delta\) and requires \(n > 4\) for existence. It exceeds the Normal kurtosis of 3, indicating heavier tails. As \(n \to \infty\), the kurtosis approaches 3.
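Both coefficients can be computed numerically from the centered moments of the density; a sketch for the illustrative values \(n = 10\), \(\delta = 2\):

```r
# Numerical skewness and kurtosis of t(n = 10, delta = 2) via centered moments
n <- 10; delta <- 2

mu <- integrate(function(x) x * dt(x, df = n, ncp = delta), -Inf, Inf)$value
cm <- function(k)
  integrate(function(x) (x - mu)^k * dt(x, df = n, ncp = delta), -Inf, Inf)$value

skew <- cm(3) / cm(2)^(3/2)   # coefficient of skewness
kurt <- cm(4) / cm(2)^2       # coefficient of kurtosis
c(skewness = skew, kurtosis = kurt)
```

As the text states, the result is right-skewed (positive skewness for \(\delta > 0\)) with kurtosis above the Normal value of 3.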

47.11 Parameter Estimation

The Noncentral t distribution is not typically fitted to data via parameter estimation in the classical sense. Instead, the noncentrality parameter \(\delta\) is determined from the experimental design:

  • One-sample or paired t-test: \(\delta = \sqrt{n} \cdot d\), where \(d = (\mu_1 - \mu_0)/\sigma\) is the standardized effect size and \(n\) is the sample size
  • Independent two-sample t-test (equal group sizes): \(\delta = \sqrt{n/2} \cdot d\), where \(d = (\mu_1 - \mu_2)/\sigma\) is Cohen’s \(d\) and \(n\) is the number of observations per group

Given an observed t statistic, a confidence interval for \(\delta\) can be obtained by numerically inverting pt() with respect to the noncentrality parameter.
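As a sketch of this inversion (with illustrative values: observed \(t = 2.5\) on 19 degrees of freedom), each 95% confidence limit for \(\delta\) solves pt(t_obs, df, ncp = delta) = 0.975 or 0.025, which uniroot() handles directly:

```r
# 95% confidence interval for delta, given an observed t statistic
t_obs <- 2.5   # observed t statistic (illustrative)
df    <- 19    # degrees of freedom (illustrative)

# pt(t_obs, df, ncp) is decreasing in ncp, so each limit is a simple root
lower <- uniroot(function(d) pt(t_obs, df, ncp = d) - 0.975, c(-10, 10))$root
upper <- uniroot(function(d) pt(t_obs, df, ncp = d) - 0.025, c(-10, 10))$root
c(lower = lower, upper = upper)
```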

47.12 R Module

47.12.1 RFC

The Noncentral t Distribution module is available in RFC under the menu “Distributions / Noncentral t Distribution”.

47.12.2 Direct app link

  • https://shiny.wessa.net/nctdist/

47.12.3 R Code

The following code demonstrates Noncentral t probability calculations:

# Probability density function: f(x)
dt(x = 2, df = 10, ncp = 2)
[1] 0.3556436

# Distribution function: P(X <= x)
pt(q = 2, df = 10, ncp = 2)
[1] 0.4809732

# Quantile function: find x such that P(X <= x) = p
qt(p = 0.975, df = 10, ncp = 2)
[1] 4.957836

# Generate random Noncentral t numbers
set.seed(42)
rt(n = 10, df = 10, ncp = 2)
 [1] 2.7175045 0.9830883 3.7032398 2.1186340 2.0594924 1.6528015 3.3237075
 [8] 1.6360023 5.2422904 2.2011690

47.13 Example

A clinical researcher plans a paired t-test to detect a standardized effect size of \(d = 0.5\) (a medium effect by Cohen’s conventions). Using a two-sided test at \(\alpha = 0.05\), how many subjects are required to achieve 80% power?

The noncentrality parameter for a paired t-test is \(\delta = \sqrt{n} \cdot d\), and the degrees of freedom are \(n - 1\). We compute power for various sample sizes:

d     <- 0.5      # standardized effect size
alpha <- 0.05     # significance level (two-sided)

# Compute power for sample sizes from 10 to 80
n_vals <- seq(10, 80, by = 5)
power  <- numeric(length(n_vals))

for (i in seq_along(n_vals)) {
  n     <- n_vals[i]
  df    <- n - 1
  delta <- sqrt(n) * d
  t_crit <- qt(1 - alpha/2, df = df)
  # Power = P(|T| > t_crit) under the alternative
  power[i] <- 1 - pt(t_crit, df = df, ncp = delta) +
                  pt(-t_crit, df = df, ncp = delta)
}

result <- data.frame(n = n_vals, delta = sqrt(n_vals) * d, power = round(power, 4))
print(result)

# Find minimum sample size for 80% power
n_required <- n_vals[which(power >= 0.80)[1]]
cat("\nMinimum sample size for 80% power:", n_required, "\n")
    n    delta  power
1  10 1.581139 0.2932
2  15 1.936492 0.4379
3  20 2.236068 0.5645
4  25 2.500000 0.6697
5  30 2.738613 0.7540
6  35 2.958040 0.8195
7  40 3.162278 0.8694
8  45 3.354102 0.9066
9  50 3.535534 0.9339
10 55 3.708099 0.9537
11 60 3.872983 0.9678
12 65 4.031129 0.9778
13 70 4.183300 0.9848
14 75 4.330127 0.9896
15 80 4.472136 0.9930

Minimum sample size for 80% power: 35 
Code
d     <- 0.5
alpha <- 0.05
n_vals <- seq(5, 100, by = 1)
power  <- numeric(length(n_vals))
for (i in seq_along(n_vals)) {
  n     <- n_vals[i]
  df    <- n - 1
  delta <- sqrt(n) * d
  t_crit <- qt(1 - alpha/2, df = df)
  power[i] <- 1 - pt(t_crit, df = df, ncp = delta) +
                  pt(-t_crit, df = df, ncp = delta)
}
plot(n_vals, power, type = "l", lwd = 2, col = "blue",
     xlab = "Sample size (n)", ylab = "Power",
     main = "Power curve: paired t-test",
     sub = expression(paste(d == 0.5, ",  ", alpha == 0.05, " (two-sided)")))
abline(h = 0.80, lty = 2, col = "red")
legend("bottomright", legend = c("Power", "80% threshold"),
       col = c("blue", "red"), lty = c(1, 2), lwd = c(2, 1))
Figure 47.3: Power curve for paired t-test (d = 0.5, alpha = 0.05, two-sided)

You can reproduce this analysis interactively with the Noncentral t Distribution app linked in Section 47.12.2.

47.14 Random Number Generator

Random variates from the Noncentral t distribution can be generated directly from its construction. If \(Z \sim \text{N}(\delta, 1)\) and \(V \sim \chi^2(n)\) are independent, then \(T = Z / \sqrt{V/n}\) follows \(\text{t}(n, \delta)\).

set.seed(123)
N     <- 1000
n     <- 10
delta <- 2

# Construction method
z <- rnorm(N, mean = delta, sd = 1)
v <- rchisq(N, df = n)
x_construct <- z / sqrt(v / n)

# Built-in function
x_rt <- rt(N, df = n, ncp = delta)

cat("Construction: mean =", round(mean(x_construct), 4),
    "  var =", round(var(x_construct), 4), "\n")
cat("rt():         mean =", round(mean(x_rt), 4),
    "  var =", round(var(x_rt), 4), "\n")

# Theoretical mean
E_theory <- delta * sqrt(n/2) * gamma((n-1)/2) / gamma(n/2)
cat("Theoretical:  mean =", round(E_theory, 4), "\n")
Construction: mean = 2.1936   var = 1.5229 
rt():         mean = 2.1582   var = 1.636 
Theoretical:  mean = 2.1674 
Code
set.seed(123)
x <- rt(1000, df = 10, ncp = 2)
hist(x, breaks = 40, col = "steelblue", freq = FALSE,
     xlab = "x", main = "Noncentral t Random Numbers (N = 1000, df = 10, delta = 2)")
curve(dt(x, df = 10, ncp = 2), add = TRUE, col = "red", lwd = 2)
legend("topright", legend = "Theoretical density", col = "red", lwd = 2)
Figure 47.4: Histogram of simulated Noncentral t random numbers (N = 1000, df = 10, delta = 2)

47.15 Property 1: Noncentrality and Power

The noncentrality parameter \(\delta\) determines the statistical power of a t-test. Under the alternative hypothesis, the t statistic follows \(\text{t}(n-1, \delta)\) for a one-sample or paired test. Power is the probability that \(|T|\) exceeds the critical value:

\[ \text{Power} = 1 - \text{P}\!\left(-t_{\alpha/2, n-1} \leq T \leq t_{\alpha/2, n-1}\right) \quad \text{where } T \sim \text{t}(n-1, \delta) \]

As \(\delta\) increases (larger effect or larger sample), the Noncentral t density shifts further from zero, and a greater proportion of its area falls beyond the critical values, yielding higher power.

47.16 Property 2: Noncentrality Formulas for t-tests

The noncentrality parameter links the effect size, sample size, and the t distribution:

  • One-sample t-test: \(\delta = \sqrt{n} \cdot d\) where \(d = (\mu - \mu_0)/\sigma\)
  • Paired t-test: \(\delta = \sqrt{n} \cdot d\) where \(d = \mu_D / \sigma_D\) and \(n\) is the number of pairs
  • Independent two-sample t-test (equal groups of size \(n\)): \(\delta = \sqrt{n/2} \cdot d\) where \(d = (\mu_1 - \mu_2)/\sigma_{\text{pooled}}\)

These formulas make it clear that increasing the sample size \(n\) increases \(\delta\) and therefore increases power.
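These formulas can be cross-checked against R's built-in power.t.test(), which uses the Noncentral t distribution internally. A sketch for a one-sample design with illustrative values \(n = 30\), \(d = 0.5\):

```r
# Power from the Noncentral t distribution vs. R's power.t.test()
# (one-sample design, d = 0.5, n = 30, alpha = 0.05, two-sided)
n <- 30; d <- 0.5; alpha <- 0.05

delta  <- sqrt(n) * d
t_crit <- qt(1 - alpha / 2, df = n - 1)
power_nct <- 1 - pt(t_crit, n - 1, ncp = delta) + pt(-t_crit, n - 1, ncp = delta)

power_builtin <- power.t.test(n = n, delta = d, sd = 1, sig.level = alpha,
                              type = "one.sample")$power
c(noncentral_t = power_nct, power.t.test = power_builtin)
```

The two values agree to numerical precision, confirming that the noncentrality formula \(\delta = \sqrt{n} \cdot d\) is exactly what the built-in power calculation uses.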

47.17 Property 3: Convergence to Normal

As \(n \to \infty\), the Noncentral t distribution converges to the Normal distribution:

\[ \text{t}(n, \delta) \xrightarrow{d} \text{N}(\delta, 1) \quad \text{as } n \to \infty \]

This follows directly from the construction, since \(\sqrt{V/n} \xrightarrow{p} 1\) as \(n \to \infty\) by the law of large numbers.
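The convergence is easy to see numerically: the maximum CDF difference between \(\text{t}(n, \delta = 2)\) and its \(\text{N}(2, 1)\) limit shrinks as \(n\) grows (grid of evaluation points chosen for illustration):

```r
# Maximum CDF difference between t(n, delta = 2) and the N(2, 1) limit
x    <- seq(-1, 5, by = 0.5)
dfs  <- c(10, 100, 1000)
errs <- sapply(dfs, function(n)
  max(abs(pt(x, df = n, ncp = 2) - pnorm(x, mean = 2, sd = 1))))
data.frame(df = dfs, max_abs_diff = signif(errs, 3))
```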

47.18 Related Distributions 1: Central t as Special Case

The Student t distribution is the Noncentral t distribution with noncentrality parameter \(\delta = 0\) (see Chapter 25):

\[ \text{t}(n, 0) = \text{t}(n) \]

47.19 Related Distributions 2: Normal as Limiting Case

As \(n \to \infty\), the Noncentral t distribution converges to a Normal distribution with mean \(\delta\) and variance 1:

\[ \text{t}(n, \delta) \xrightarrow{d} \text{N}(\delta, 1) \quad \text{as } n \to \infty \]

47.20 Related Distributions 3: Chi-squared Connection

The construction of the Noncentral t distribution involves a Chi-squared variate in the denominator. Specifically, if \(V \sim \chi^2(n)\) (see Chapter 23), the square root \(\sqrt{V/n}\) provides the scaling.

47.21 Related Distributions 4: Link with the Noncentral F Distribution

The square of a Noncentral t variate with \(n\) degrees of freedom and noncentrality \(\delta\) follows a Noncentral F distribution (see Chapter 48):

\[ T^2 \sim \text{F}(1, n, \delta^2) \quad \text{where } T \sim \text{t}(n, \delta) \]

This relationship connects power analysis for t-tests to power analysis for F-tests with numerator degrees of freedom equal to 1.
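A quick numerical check of this identity, using illustrative values \(n = 10\), \(\delta = 2\), \(q = 6\) and the fact that \(\text{P}(T^2 \leq q) = \text{P}(-\sqrt{q} \leq T \leq \sqrt{q})\):

```r
# P(T^2 <= q) computed via the Noncentral t and via the Noncentral F CDF
n <- 10; delta <- 2; q <- 6

p_via_t <- pt(sqrt(q), df = n, ncp = delta) - pt(-sqrt(q), df = n, ncp = delta)
p_via_F <- pf(q, df1 = 1, df2 = n, ncp = delta^2)
c(via_t = p_via_t, via_F = p_via_F)   # the two probabilities agree
```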

47.22 Related Distributions 5: Noncentral Chi-squared Connection

The noncentrality parameter \(\delta^2\) of the squared Noncentral t variate corresponds to the noncentrality of the Noncentral Chi-squared distribution (see Chapter 24). Specifically, in the numerator of the construction, \(Z^2 \sim \chi^2(1, \delta^2)\) where \(Z \sim \text{N}(\delta, 1)\).
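This connection can be verified directly from the Normal CDF, since \(\text{P}(Z^2 \leq q) = \text{P}(-\sqrt{q} \leq Z \leq \sqrt{q})\); a sketch with illustrative values \(\delta = 2\), \(q = 5\):

```r
# Z ~ N(delta, 1)  implies  Z^2 ~ noncentral chi-squared(1, ncp = delta^2)
delta <- 2; q <- 5

p_chisq  <- pchisq(q, df = 1, ncp = delta^2)
p_normal <- pnorm(sqrt(q) - delta) - pnorm(-sqrt(q) - delta)
c(via_chisq = p_chisq, via_normal = p_normal)
```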


© 2026 Patrick Wessa. Provided as-is, without warranty.
