It is, however, also possible to control for more than just one variable. When there are \(K\) control variables \(z_i\) for \(i = 1, 2, …, K\), the Partial Pearson Correlation is defined as \(r_{e_x e_y}\), i.e. the Pearson Correlation between the residuals \(e_x\) and \(e_y\) that remain after regressing \(x\) and \(y\), respectively, on the control variables \(z_1, z_2, …, z_K\).
This implies that the Partial Pearson Correlation is, in fact, directly linked to the Multiple Regression equations in which both variables \(x\) and \(y\) are explained by the control variables \(z_i\).
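This regression link can be made concrete with a small numerical sketch (shown in Python with NumPy for illustration, while the module itself is written in R; the data here are simulated): regressing \(x\) and \(y\) on the controls and then correlating the two residual series yields \(r_{e_x e_y}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 2

# Hypothetical simulated data: K control variables z_1, ..., z_K.
Z = rng.normal(size=(n, K))
# x and y both depend on the controls, plus mutually independent noise.
x = Z @ np.array([1.0, -0.5]) + rng.normal(size=n)
y = Z @ np.array([0.8, 0.3]) + rng.normal(size=n)

def residuals(v, Z):
    """Residuals e_v from the least-squares regression of v on Z (with intercept)."""
    X = np.column_stack([np.ones(len(v)), Z])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

e_x, e_y = residuals(x, Z), residuals(y, Z)
r_partial = np.corrcoef(e_x, e_y)[0, 1]  # r_{e_x e_y}
```

Because \(x\) and \(y\) are related only through the controls, the partial correlation is close to zero here even though the raw Pearson Correlation between \(x\) and \(y\) is clearly positive.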
73.2 R Module
73.2.1 Public website
The Partial Pearson Correlation module is available on the public website:
```
Estimating optimal shrinkage intensity lambda (correlation matrix): 0.0089
            A          B           C
A  1.00000000 -0.2839565 -0.07699398
B -0.28395654  1.0000000  0.91419667
C -0.07699398  0.9141967  1.00000000
attr(,"lambda")
[1] 0.008948936
attr(,"lambda.estimated")
[1] TRUE
attr(,"class")
[1] "shrinkage"
attr(,"spv")
         A          B          C
0.22960286 0.03793588 0.04101836
           A          B          C
A  1.0000000 -0.8848609 -0.8739970
B -0.8848609  1.0000000  0.9879927
C -0.8739970  0.9879927  1.0000000
```
To compute the Partial Pearson Correlation Matrix, the R code uses the pcor.shrink function from the corpcor library.
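The idea behind pcor.shrink can be sketched as follows (a simplified Python/NumPy illustration, not corpcor's actual implementation — the real function also estimates the shrinkage intensity \(\lambda\) from the data, as reported by the lambda attribute in the output above): the sample correlation matrix is shrunk toward the identity, inverted, and the standardized negative off-diagonal entries of the inverse give the partial correlations.

```python
import numpy as np

def partial_cor_shrink(X, lam):
    """Partial correlation matrix from a correlation matrix shrunk toward the
    identity, R* = (1 - lam) * R + lam * I. Simplified sketch of the idea
    behind corpcor's pcor.shrink (which estimates lam itself)."""
    R = np.corrcoef(X, rowvar=False)
    R_star = (1 - lam) * R + lam * np.eye(R.shape[0])
    P = np.linalg.inv(R_star)          # precision matrix
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)         # standardize and flip sign
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated columns
pc = partial_cor_shrink(X, lam=0.01)
```

The shrinkage toward the identity keeps the matrix well-conditioned for inversion, which is why this approach remains usable even when the number of variables is large relative to the number of observations.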
73.3 Purpose
Partial Correlations are used to investigate whether the relationship between two variables (\(x\) and \(y\)) depends on other variables (\(z_i\) for \(i = 1, 2, …, K\)). This is especially useful when one wishes to examine whether the (ordinary) Pearson Correlation \(r_{xy}\) is spurious (i.e. driven by some common cause) and whether the association weakens after controlling for potential confounders.
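A minimal numerical example of such a spurious correlation (a Python/NumPy sketch with simulated data; for \(K = 1\) the partial correlation has the well-known closed form used below):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
z = rng.normal(size=n)        # common cause (confounder)
x = z + rng.normal(size=n)    # x depends only on z plus noise
y = z + rng.normal(size=n)    # y depends only on z plus noise

r_xy = np.corrcoef(x, y)[0, 1]
r_xz = np.corrcoef(x, z)[0, 1]
r_yz = np.corrcoef(y, z)[0, 1]

# Classic first-order partial correlation formula (K = 1):
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```

The raw correlation \(r_{xy}\) is clearly positive, yet the partial correlation \(r_{xy \cdot z}\) is close to zero: the apparent association between \(x\) and \(y\) is entirely due to the common cause \(z\).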
73.4 Pros & Cons
73.4.1 Pros
Partial Correlations have the following advantages:
They can be used to compute correlations while controlling for other variables, which helps to avoid spurious correlations.
They are closely linked to Multiple Linear Regression and can therefore be interpreted in terms of regression residuals.
73.4.2 Cons
Partial Correlations have the following disadvantages:
Most readers are not familiar with the statistical concept of Partial Correlations.
Partial Correlations are not featured in many software packages.
73.5 Example
Download the happystudent.csv dataset and use the R module shown below to:
upload the csv file
click on the Correlations tab
select all the variables from the uploaded file
change the “Type of Correlations” setting to “Partial”
The R module computes the multivariate Pearson and Partial Correlation matrices. The dataset contains information that was gathered through a survey in one of our statistics courses to investigate learning confidence (i.e. self-confidence of students to learn statistical concepts). For each student we obtained the following scores:
Happiness: measure of general happiness
Sport1: to what degree are students engaged in team sports?
Learning: degree of learning confidence
Depression: measure of depression (based on WHO)
Software: degree of software competence
The (unconditional) Pearson Correlation between Learning and Depression is (approximately) equal to -0.23, which seems to imply a weak, negative relationship (i.e. highly depressed students tend to have a lower learning confidence). Does this relationship remain if we control for the effects of all other variables?
The answer is no. The output of the R module shows that the Partial Pearson Correlation is (approximately) -0.067 (i.e. much closer to zero) implying that little linear association remains once the control variables are taken into account. This does not establish full statistical independence. To identify which variable primarily drives this reduction, one can compare the pairwise Pearson correlations between Learning and each of the remaining variables in the dataset.
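This comparison can be sketched as follows (a hypothetical Python/NumPy illustration: the column names mirror the survey variables, but the data are simulated here rather than read from happystudent.csv):

```python
import numpy as np

# Simulated stand-ins for the survey columns (hypothetical coefficients).
rng = np.random.default_rng(3)
n = 150
depression = rng.normal(size=n)
happiness = -0.6 * depression + rng.normal(size=n)
learning = -0.3 * depression + 0.4 * happiness + rng.normal(size=n)
sport1 = rng.normal(size=n)
software = rng.normal(size=n)

# Pairwise Pearson correlations between Learning and each remaining variable.
others = {"Happiness": happiness, "Sport1": sport1,
          "Depression": depression, "Software": software}
pairwise = {name: np.corrcoef(learning, col)[0, 1]
            for name, col in others.items()}
# The variable with the largest absolute correlation is the likely driver.
driver = max(pairwise, key=lambda k: abs(pairwise[k]))
```

Ranking the absolute pairwise correlations in this way points to the control variable that absorbs most of the raw Learning–Depression association.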
73.6 Task
Based on the analysis from the previous problem, which variable is most closely related with learning confidence? Do you think that all the variables can be assumed to have a continuous distribution?