Covariance and Correlation¶
Variance for a single variable¶
The expected value or mean of a random variable is the first moment, analogous to a center of mass for a rigid body. The variance of a single random variable is the second moment: it is the expectation of the squared deviation of a random variable from its mean. It is analogous to the moment of inertia about the center of mass.
$$\mathrm{Var}(X) = E\left[(X - \mu)^2\right],$$

where $\mu = E[X]$.
The units of $\mathrm{Var}(X)$ are $[\mathrm{Var}(X)] = [X]^2$. For that reason, it is often more intuitive to work with the standard deviation of $X$, usually denoted $\sigma_X$, which is the square root of the variance:

$$\sigma_X = \sqrt{\mathrm{Var}(X)}$$
In statistical mechanics, you may have seen notation like this: $\sigma_X = \sqrt{\langle (X - \langle X \rangle)^2 \rangle}$
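As a quick numerical illustration (a minimal sketch with an arbitrarily chosen distribution and sample size, not taken from this page's notebooks), the variance and standard deviation can be estimated with NumPy:

```python
import numpy as np

# Hypothetical example: draw samples with a known spread and estimate
# the variance and standard deviation empirically.
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=100_000)   # true sigma_X = 2, Var(X) = 4

var_x = np.var(x)   # mean squared deviation from the sample mean
std_x = np.std(x)   # square root of the variance
print(f"Var(X) = {var_x:.3f}, sigma_X = {std_x:.3f}")  # roughly 4 and 2
```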
Covariance¶
When dealing with multivariate data, the notion of variance must be lifted to the concept of covariance. Covariance captures how one variable deviates from its mean as another variable deviates from its mean. Say we have two variables $X$ and $Y$; then the covariance for the two variables is defined as

$$\mathrm{Cov}(X, Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right],$$

where $\mu_X = E[X]$ and $\mu_Y = E[Y]$.
If X is on average greater than its mean when Y is greater than its mean (and, similarly, if X is on average less than its mean when Y is less than its mean), then we say the two variables are positively correlated. In the opposite case, when X is on average less than its mean when Y is greater than its mean (and vice versa), then we say the two variables are negatively correlated. If Cov(X,Y)=0, then we say the two variables are uncorrelated.
A useful identity is

$$\mathrm{Cov}(X, Y) = E[XY] - E[X]\,E[Y].$$
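As a sanity check, here is a small sketch (the variables and noise level below are arbitrary choices) verifying that the definition and the identity give the same estimate on simulated data:

```python
import numpy as np

# Hypothetical example: two correlated variables, Y = X + noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x + 0.5 * rng.normal(size=100_000)

# Covariance from the definition: E[(X - mu_X)(Y - mu_Y)]
cov_def = np.mean((x - x.mean()) * (y - y.mean()))

# Covariance from the identity: E[XY] - E[X] E[Y]
cov_identity = np.mean(x * y) - x.mean() * y.mean()

print(cov_def, cov_identity)  # the two estimates agree
```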
Correlation coefficient¶
The covariance $\mathrm{Cov}(X,Y)$ has units $[X][Y]$, and thus depends on the units chosen for $X$ and $Y$. It is desirable to have a unitless measure of how “correlated” the two variables are. One way to do this is through the correlation coefficient $\rho_{X,Y}$, which simply divides out the standard deviations of $X$ and $Y$:

$$\rho_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \, \sigma_Y},$$

where $\sigma_X^2 = \mathrm{Cov}(X, X)$ and $\sigma_Y^2 = \mathrm{Cov}(Y, Y)$.
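For instance, in a hypothetical example where $Y = 2X + \text{noise}$ (made up for illustration), dividing the sample covariance by the sample standard deviations reproduces what `np.corrcoef` computes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)   # correlated with X

# Divide the covariance by sigma_X * sigma_Y (consistent ddof=1 conventions).
rho_manual = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
rho_numpy = np.corrcoef(x, y)[0, 1]
print(rho_manual, rho_numpy)  # both close to 2/sqrt(5), about 0.894
```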
Warning
It is common to mistakenly think that if two variables X and Y are “uncorrelated” that they are statistically independent, but this is not the case. It is true that if two variables X and Y are “correlated” (have non-zero covariance), then the two variables are statistically dependent, but the converse is not true in general. We will see this in our Simple Data Exploration.
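A classic counterexample, sketched below with an assumed standard normal $X$, is $Y = X^2$: $Y$ is completely determined by $X$, yet the covariance vanishes because $E[X^3] = E[X] = 0$.

```python
import numpy as np

# Hypothetical example: Y = X**2 with X symmetric about zero.
# Y is a deterministic function of X (so they are dependent),
# yet Cov(X, Y) = E[X**3] - E[X] E[X**2] = 0 for a symmetric X.
rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = x**2

print(np.cov(x, y)[0, 1])       # approximately 0: uncorrelated
print(np.corrcoef(x, y)[0, 1])  # approximately 0, despite full dependence
```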
Covariance matrix¶
When dealing with more than two variables, there is a straightforward generalization of covariance (and correlation) in terms of a covariance matrix [1]. Given random variables $X_1, \ldots, X_N$, the covariance matrix is an $N \times N$ matrix whose $(i,j)$ entry is the covariance

$$K_{X_i X_j} = \mathrm{Cov}(X_i, X_j) = E\left[(X_i - E[X_i])(X_j - E[X_j])\right].$$
If the random variables are collected into a column vector $\mathbf{X} = (X_1, X_2, \ldots, X_N)^T$, then the covariance matrix can be written as

$$K_{\mathbf{X}\mathbf{X}} = E\left[(\mathbf{X} - \mu_{\mathbf{X}})(\mathbf{X} - \mu_{\mathbf{X}})^T\right],$$

with $\mu_{\mathbf{X}} = E[\mathbf{X}]$ also represented as a column vector.
Note
The inverse of this matrix, $K_{\mathbf{X}\mathbf{X}}^{-1}$, if it exists, is also known as the concentration matrix or precision matrix.
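As a rough illustration (the three variables below are made up for the example), `np.cov` builds this matrix directly from samples, and inverting it gives the precision matrix:

```python
import numpy as np

# Hypothetical example: three variables stacked as rows
# (np.cov treats each row as one variable by default).
rng = np.random.default_rng(0)
x1 = rng.normal(size=10_000)
x2 = x1 + 0.3 * rng.normal(size=10_000)    # correlated with x1
x3 = rng.normal(size=10_000)               # independent of x1, x2

X = np.vstack([x1, x2, x3])
K = np.cov(X)                 # 3x3 covariance matrix, entry (i, j) = Cov(X_i, X_j)
precision = np.linalg.inv(K)  # precision (concentration) matrix, if K is invertible
print(K)
```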
Correlation Matrix¶
An entity closely related to the covariance matrix is the correlation matrix [1], whose $(i,j)$ entry is the correlation coefficient

$$\mathrm{corr}(\mathbf{X})_{ij} = \rho_{X_i, X_j} = \frac{\mathrm{Cov}(X_i, X_j)}{\sigma_{X_i} \sigma_{X_j}}.$$
Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1.
Equivalently, the correlation matrix can be written in vector-matrix form as

$$\mathrm{corr}(\mathbf{X}) = \left(\mathrm{diag}(K_{\mathbf{X}\mathbf{X}})\right)^{-1/2} \, K_{\mathbf{X}\mathbf{X}} \, \left(\mathrm{diag}(K_{\mathbf{X}\mathbf{X}})\right)^{-1/2},$$

where $\mathrm{diag}(K_{\mathbf{X}\mathbf{X}})$ is the matrix of the diagonal elements of $K_{\mathbf{X}\mathbf{X}}$ (i.e., a diagonal matrix of the variances of $X_i$ for $i = 1, \ldots, N$).
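A small sketch of this normalization (reusing made-up correlated samples) shows that dividing the covariance matrix by the outer product of standard deviations reproduces `np.corrcoef`:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=10_000)
x2 = x1 + 0.3 * rng.normal(size=10_000)
X = np.vstack([x1, x2])

K = np.cov(X)                     # covariance matrix
d = np.sqrt(np.diag(K))           # standard deviations sigma_i
corr = K / np.outer(d, d)         # diag(K)^(-1/2) K diag(K)^(-1/2)
print(corr)
print(np.corrcoef(X))             # same matrix, with a unit diagonal
```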
Visualizing covariance as an ellipse¶
Often an ellipse is used to visualize a covariance matrix, but why? This is only well-motivated if one expects the data to be normally distributed (aka Gaussian distributed). This is because the contours of a 2-d normal are ellipses, and in higher dimensions the contours are ellipsoids.
A scatter plot of two correlated, normally-distributed variables and the error ellipse from An Alternative Way to Plot the Covariance Ellipse by Carsten Schelp.
Consider a random variable $\mathbf{X}$ that is distributed as a multivariate normal (aka multivariate Gaussian) distribution, e.g. $\mathbf{X} \sim \mathcal{N}(\mu, \Sigma)$, where $\mu$ is the multivariate mean and $\Sigma$ is the covariance matrix. The probability density for the multivariate normal is given by

$$p(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^{k} \det \Sigma}} \exp\left(-\frac{1}{2}(\mathbf{x} - \mu)^T \Sigma^{-1} (\mathbf{x} - \mu)\right),$$

where $k$ is the dimension of $\mathbf{x}$. The contours correspond to values of $\mathbf{x}$ where $(\mathbf{x} - \mu)^T \Sigma^{-1} (\mathbf{x} - \mu) = \text{constant}$.
Understanding the geometry of this ellipse requires the linear algebra of the covariance matrix, and it’s a useful exercise to go through:
This notebook is duplicated from the repository linked to in this article: An Alternative Way to Plot the Covariance Ellipse by Carsten Schelp, which has a GPL-3.0 License.
This is also a nice page
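As a minimal sketch of the key idea (independent of the linked notebook, using a made-up covariance matrix), the eigen-decomposition of $\Sigma$ gives the ellipse’s semi-axis lengths and orientation:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix (numbers chosen only for illustration).
Sigma = np.array([[2.0, 1.2],
                  [1.2, 1.0]])

# Eigenvectors give the principal axes of the contour ellipse;
# eigenvalues give the variances along those axes.
eigvals, eigvecs = np.linalg.eigh(Sigma)

semi_axes = np.sqrt(eigvals)                    # semi-axis lengths of the 1-sigma ellipse
angle = np.degrees(np.arctan2(eigvecs[1, -1],   # orientation of the major axis
                              eigvecs[0, -1]))
print(semi_axes, angle)
```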
With empirical data¶
We can estimate the covariance of the parent distribution $p_{XY}$ with the sample covariance, using the sample means $\bar{x}$ and $\bar{y}$ in place of the expectations:

$$\widehat{\mathrm{Cov}}(X, Y) = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}),$$

where the $1/(n-1)$ normalization is the usual unbiased convention (and the default used by pandas).
As we will see in our Simple Data Exploration and Visualizing joint and marginal distributions, the sample covariance and correlation matrices can be conveniently computed for a pandas dataframe with `dataframe.cov()` and `dataframe.corr()`.
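For example, with a small made-up dataframe of correlated columns:

```python
import numpy as np
import pandas as pd

# Hypothetical dataframe of correlated columns (made-up data for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=1_000)
df = pd.DataFrame({
    "x": x,
    "y": 2.0 * x + rng.normal(size=1_000),
    "z": rng.normal(size=1_000),
})

print(df.cov())   # sample covariance matrix (uses the 1/(n-1) convention)
print(df.corr())  # sample correlation matrix, with a unit diagonal
```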
[1] Adapted from the Wikipedia article on Covariance matrix.