71  Pearson Correlation

71.1 Definition of Pearson Covariance

\[ \text{C}(xy) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x}) (y_i - \bar{y}) \]

where

\[ -\infty < \text{C}(xy) < +\infty \]

\[ \text{V}(x) = s_x^2 = \text{C}(xx) = \frac{1}{n} \sum_{i=1}^{n}(x_i - \bar{x})^2 \]

\[ \text{V}(y) = s_y^2 = \text{C}(yy) = \frac{1}{n} \sum_{i=1}^{n}(y_i - \bar{y})^2 \]

\[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \]

and

\[ \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \]

Note: this chapter writes covariance and variance in a population-style form (division by \(n\)). In R, functions such as var, cov, and cor.test use sample estimators with division by \((n-1)\) where relevant. For the correlation coefficient itself, this scaling factor cancels out.
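
This cancellation is easy to verify in the R console. The following sketch (with arbitrary, made-up numbers) compares the population-style estimator with R's built-in functions:

# population-style covariance (division by n) versus R's sample estimator (n-1)
x <- c(1, 3, 5, 7)
y <- c(2, 4, 8, 9)
n <- length(x)
sum((x - mean(x)) * (y - mean(y))) / n   # population-style: divides by n
cov(x, y)                                # R's cov: divides by n-1
# the scaling factor cancels out in the correlation coefficient:
r.manual <- (sum((x - mean(x)) * (y - mean(y))) / n) /
  sqrt((sum((x - mean(x))^2) / n) * (sum((y - mean(y))^2) / n))
all.equal(r.manual, cor(x, y))           # TRUE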

71.2 Definition of Pearson Correlation (Pearson 1895)

\[ r = r_{xy} = \frac{\text{C}(xy)}{\sqrt{\text{V}(x)\text{V}(y)}} = \frac{\text{C}(xy)}{s_x s_y} \]

where

\[ -1 \leq r_{xy} \leq 1 \]

\[ \text{C}(xy) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x}) (y_i - \bar{y}) \]

\[ -\infty < \text{C}(xy) < +\infty \]

\[ \text{V}(x) = s_x^2 = \text{C}(xx) = \frac{1}{n} \sum_{i=1}^{n}(x_i - \bar{x})^2 \]

\[ \text{V}(y) = s_y^2 = \text{C}(yy) = \frac{1}{n} \sum_{i=1}^{n}(y_i - \bar{y})^2 \]

\[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \]

and

\[ \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \]

71.3 Definition of the Coefficient of Determination

\[ \text{R}^2 = r_{xy}^2 \]

where \(0 \leq \text{R}^2 \leq 1\), \(r = r_{xy} = \frac{\text{C}(xy)}{\sqrt{\text{V}(x)\text{V}(y)}} = \frac{\text{C}(xy)}{s_x s_y}\) and \(-1 \leq r_{xy} \leq 1\).

The Coefficient of Determination can be interpreted as the proportion of the Variance of \(y\) that is explained by a linear relationship with \(x\) (or vice versa).
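
As a quick check (a minimal sketch with simulated data and an arbitrary seed), the squared Pearson Correlation equals the R-squared statistic reported by a Simple Linear Regression:

set.seed(1)                        # arbitrary seed, for reproducibility only
x <- rnorm(50)
y <- 2*x + rnorm(50)
r <- cor(x, y)
R2 <- summary(lm(y ~ x))$r.squared
all.equal(r^2, R2)                 # TRUE: R^2 = r^2 in Simple Linear Regression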

71.4 t-Test Statistic

\[ t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}} \sim t_{n-2} \]

where \(r = r_{xy} = \frac{\text{C}(xy)}{\sqrt{\text{V}(x)\text{V}(y)}} = \frac{\text{C}(xy)}{s_x s_y}\) and \(-1 \leq r_{xy} \leq 1\).

Note: under the null hypothesis, the t-Test Statistic is exact when \((x,y)\) follow a bivariate normal distribution. In practice, the test is fairly robust to moderate departures from normality when the sample size is sufficiently large.
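
In R, cor.test implements exactly this statistic. The following sketch (simulated data, arbitrary seed) reproduces the reported t value and p-value by hand:

set.seed(2)                                      # arbitrary seed, for reproducibility only
x <- rnorm(30)
y <- x + rnorm(30)
n <- length(x)
r <- cor(x, y)
t.manual <- r * sqrt(n - 2) / sqrt(1 - r^2)
p.manual <- 2 * pt(-abs(t.manual), df = n - 2)   # two-sided p-value
c(t.manual, p.manual)
cor.test(x, y)                                   # reports the same t, df and p-value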

71.5 R Module

71.5.1 Public website

The Pearson Correlation module can be found on the public website:

  • https://compute.wessa.net/corr.wasp

There is also a multivariate R module available which automatically computes correlations for all bivariate combinations:

  • https://compute.wessa.net/rwasp_pairs.wasp

71.5.2 RFC

The multivariate Pearson Correlation module is available in RFC under the menu item “Descriptive / Multivariate Descriptive Statistics”.

To compute the Pearson Correlation between two quantitative variables in the R console, use the script that is described in Chapter 70.

If you prefer to compute Correlation Matrices on your local machine, the following script can be used in the R console:

A <- rnorm(150)                  # simulated data; no seed is set, so results vary per run
B <- A*3 + rnorm(150, 0, 2)      # strongly, positively related to A
C <- -2*A + rnorm(150, 0, 5)     # negatively related to A
x <- cbind(A, B, C)
# type of correlation (possible values: 'pearson', 'spearman', 'kendall')
par1 <- 'pearson'
main <- 'Scatter Plots and p-values'
# panel below the diagonal: print the p-value of the correlation test
panel.tau <- function(x, y, digits=2, prefix='', cex.cor)
{
  usr <- par('usr'); on.exit(par(usr))
  par(usr = c(0, 1, 0, 1))
  rr <- cor.test(x, y, method=par1)
  r <- round(rr$p.value, 2)      # note: this is the p-value, not the correlation
  txt <- format(c(r, 0.123456789), digits=digits)[1]
  txt <- paste(prefix, txt, sep='')
  if(missing(cex.cor)) cex <- 0.5/strwidth(txt)   # scale the text to fit the panel
  text(0.5, 0.5, txt, cex = cex)
}
# diagonal panel: draw a histogram of each variable
panel.hist <- function(x, ...)
{
  usr <- par('usr'); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nB], 0, breaks[-1], y, col='grey', ...)
}
# scatterplot matrix: histograms on the diagonal, smoothed scatterplots above it,
# p-values below it
pairs(x, diag.panel=panel.hist, upper.panel=panel.smooth, lower.panel=panel.tau, main=main)

# print the full cor.test output for every pair of columns
print(paste('Correlations for all pairs of data series (method=', par1, ')', sep=''))
n <- ncol(x)
for (i in 1:(n-1)) {
  for (j in (i+1):n) {
    print(paste('Correlation(', colnames(x)[i], ',', colnames(x)[j], ')'))
    print(cor.test(x[,i], x[,j], method=par1))
  }
}
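
The script produces output of the following form. Because the series are generated with rnorm without a fixed seed, the exact numbers vary from run to run: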
[1] "Correlations for all pairs of data series (method=pearson)"
[1] "Correlation( A , B )"

    Pearson's product-moment correlation

data:  x[, i] and x[, j]
t = 19.02, df = 148, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.7886335 0.8834164
sample estimates:
      cor 
0.8424232 

[1] "Correlation( A , C )"

    Pearson's product-moment correlation

data:  x[, i] and x[, j]
t = -4.3257, df = 148, p-value = 2.783e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.4700443 -0.1846736
sample estimates:
       cor 
-0.3350199 

[1] "Correlation( B , C )"

    Pearson's product-moment correlation

data:  x[, i] and x[, j]
t = -4.5874, df = 148, p-value = 9.495e-06
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.4856307 -0.2041089
sample estimates:
       cor 
-0.3528291 

To display the Correlation Matrix, the R code adapts the pairs function as follows: the diagonal cells contain the Histograms, the cells below the diagonal show the p-value of the correlation test, and the cells above the diagonal display the Scatterplots with a smoothed trend line.

71.6 Purpose

Pearson Correlations are used to identify statistical, linear relationships between pairs of variables (with a continuous distribution). In this sense the correlation coefficient is simply a mathematical collinearity measure, i.e. a number which represents the degree to which the points of the Scatterplot lie on a straight line. If the correlation coefficient is equal to 1 or -1 then all the points of the Scatterplot lie on exactly one straight line. If the correlation coefficient is close to zero then there is no linear relationship: the points do not cluster around any straight line, even though a non-linear pattern may still be present.
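
Both extreme cases are easy to verify in the R console (a minimal sketch with simulated data):

x <- 1:100
cor(x, 5 - 3*x)      # exactly -1: all points lie on one (decreasing) straight line
cor(x, rnorm(100))   # close to 0: the points do not cluster around any line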

Note that the Pearson Correlation (just like any other type of correlation) only exists for paired data, i.e. data arranged in wide format where each row contains corresponding values in both columns under examination. For instance, it is impossible to compute the correlation between measurements of females and measurements of males, because such observations are not paired.

Even though the Pearson Correlation is mostly used for continuous variables, there is a notable exception in case both variables have binary values. The correlation between two binary variables (i.e. the so-called “Phi coefficient”) is of particular interest within the context of the Confusion Matrix as discussed in Chapter 59 and the Binomial Classification metrics of Chapter 58.

71.7 Phi coefficient (Matthews Correlation)

The Matthews or Phi Correlation coefficient is defined as

\[ \phi = \text{MCC} = \frac{\text{TP} \times \text{TN} - \text{FP} \times \text{FN} }{ \sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})} } \tag{71.1}\]

which uses the Confusion Matrix in Table 59.2 and can be interpreted like the Pearson Correlation.

If we reconsider the example from Table 58.1 (which contains two binary variables), then each “Yes” value can be replaced by 1 and each “No” by 0. The result of this transformation is shown in Table 71.1.

Table 71.1: Fraud Predictions of Payment Transactions

Transaction | Is Fraudulent? | Prediction
----------- | -------------- | ----------
          1 |              0 |          1
          2 |              1 |          1
          3 |              0 |          0
          4 |              0 |          0
          5 |              1 |          0
          6 |              0 |          0
          7 |              1 |          0

Based on the values in Table 71.1 it is possible to compute the ordinary Pearson Correlation as shown in the output below. The Pearson Correlation (\(\simeq 0.09129\)) is computationally equivalent to the Phi Correlation (which was originally developed by Karl Pearson) and is also equal to the Matthews Correlation. Note that the Student t-Test of the Pearson Correlation is not valid for binary data (only the correlation coefficient is).

x <- c(0, 1, 0, 0, 1, 0, 1)   # Is Fraudulent? (actual outcome)
y <- c(1, 1, 0, 0, 0, 0, 0)   # Prediction
cor(x, y)
[1] 0.09128709

To illustrate the equivalence of the ordinary Pearson Correlation and the Pearson Phi Coefficient, we use the Confusion Matrix from Table 59.1 and apply Equation 71.1 to compute the \(\phi\) value:

\[ \phi = \frac{\text{1} \times \text{3} - \text{1} \times \text{2} }{ \sqrt{(\text{1}+\text{1})(\text{1}+\text{2})(\text{3}+\text{1})(\text{3}+\text{2})} } = \frac{1}{ \sqrt{120} } \simeq 0.09129 \]
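
The same value can also be computed directly from the four cell counts (TP = 1, TN = 3, FP = 1, FN = 2). The helper function below is a minimal sketch; the name phi.coef is chosen for illustration and does not belong to any package used in this chapter:

# Phi / Matthews Correlation from the four Confusion Matrix cell counts
phi.coef <- function(TP, TN, FP, FN) {
  (TP*TN - FP*FN) / sqrt((TP+FP)*(TP+FN)*(TN+FP)*(TN+FN))
}
phi.coef(TP = 1, TN = 3, FP = 1, FN = 2)   # 0.09128709, identical to cor(x, y) above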

The Student t-Test Statistic does not hold for \(\phi\) because binary variables do not have a continuous distribution. It is therefore better to use the Pearson Chi-Squared Test, which is closely related to \(\phi\) and can be carried out in various ways (this will be explained in Hypothesis Testing).

71.8 Pros & Cons

71.8.1 Pros

The Pearson Correlation has the following advantages:

  • it is easy to compute with most statistical software packages (even with spreadsheets)
  • most readers are familiar with the intuitive concept of correlation
  • it allows us to detect linear relationships quickly

71.8.2 Cons

The Pearson Correlation has the following disadvantages:

  • it is sensitive to outliers
  • it does not allow us to identify non-linear relationships
  • hypothesis tests (see Hypothesis Testing) about the Pearson Correlation (which are based on the Student-t distribution) are exact only under bivariate normality; for non-normal variables the Student-t test can be used as an approximation, provided the sample is sufficiently large

71.9 Example with continuous variables

The following analysis shows the Pearson Correlation between US coffee retail prices and import prices of Arabica coffee from Colombia. The correlation is approximately 0.7, which indicates a strong linear relationship: the scatter points lie relatively close to a straight line. Since the correlation coefficient is positive, the slope of that line is also positive.

[Interactive Shiny app: scatterplot of US coffee retail prices versus Colombian Arabica import prices, available in the online version of this chapter.]

71.10 Task

Find an example of nonsense correlation (sometimes called spurious correlation) on the Internet. For instance, https://tylervigen.com/spurious-correlations shows many examples of highly correlated relationships which do not make much sense. Try to explain why such correlations can be so misleading.

Pearson, Karl. 1895. “Note on Regression and Inheritance in the Case of Two Parents.” Proceedings of the Royal Society of London 58: 240–42. https://doi.org/10.1098/rspl.1895.0041.