There are 2 closely related quantities in statistics - correlation (often referred to as $\rho$) and the coefficient of determination (often referred to as $R^2$). Today we'll explore the nature of the relationship between $\rho$ and $R^2$, go over some common use cases for each statistic, and address some misconceptions.


The correlation of 2 random variables $A$ and $B$ is the strength of the linear relationship between them. If $A$ and $B$ are positively correlated, then the probability of a large value of $B$ increases when we observe a large value of $A$, and vice versa. If we are observing samples of $A$ and $B$ over time, then we can say that a positive correlation between $A$ and $B$ means that $A$ and $B$ tend to rise and fall together.


Covariance and Correlation

In order to understand correlation, we need to discuss covariance. The variance of a random variable $A$ is $\mathrm{Var}(A) = E[(A - E[A])^2]$, where $E[A]$ is the expected value of $A$. The variance is a measure of the spread or dispersion of a random variable around its expected value. Note that variance is not a scale invariant feature - if we have some random variable measured in miles and we convert it to kilometers, the numbers get larger and the variance of the random variable will increase.

Covariance is the extension of variance to the 2-variable case - it is a measure of the joint variability of 2 random variables. The covariance of $A$ and $B$ is $\mathrm{Cov}(A, B) = E[(A - E[A])(B - E[B])]$. Note that like variance, covariance is not scale invariant. If we scale up $A$ or $B$ (increasing its variance), then $\mathrm{Cov}(A, B)$ scales up as well. Since the covariance is the expected product of the deviations of $A$ and $B$ from their means, $\mathrm{Cov}(A, B)$ is largest when $A$ and $B$ move together, and is negative when they move opposite from each other. These are the same properties that correlation exhibits, and this is not a coincidence. The correlation of $A$ and $B$ is the covariance of $A$ and $B$ normalized by their standard deviations. That is, $\rho_{A,B} = \frac{\mathrm{Cov}(A, B)}{\sqrt{\mathrm{Var}(A)\,\mathrm{Var}(B)}}$.
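As a quick sanity check of these definitions, here is a minimal NumPy sketch (the simulated variables and all names are my own) that computes the correlation as covariance normalized by standard deviations, and shows that a unit change (miles to kilometers) moves the covariance but not the correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two positively related variables: B = A + noise.
a = rng.normal(size=10_000)
b = a + rng.normal(scale=0.5, size=10_000)

# Correlation = covariance normalized by the standard deviations.
cov_ab = np.cov(a, b)[0, 1]
rho = cov_ab / (np.std(a, ddof=1) * np.std(b, ddof=1))

# Covariance is not scale invariant: converting "miles" to "kilometers"
# (multiplying by 1.609) scales the covariance up too...
cov_km = np.cov(1.609 * a, b)[0, 1]

# ...but the correlation is unchanged.
rho_km = np.corrcoef(1.609 * a, b)[0, 1]

print(round(rho, 3), round(rho_km, 3))
```

The hand-rolled `rho` matches `np.corrcoef` exactly, since both use the same normalization.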

Correlation Features and Bugs

There are a few important features of correlation that we should talk about:

  • The correlation between $A$ and $B$ is only a measure of the strength of the linear relationship between $A$ and $B$. Two random variables can be perfectly related, to the point where one is a deterministic function of the other, but still have zero correlation if that function is non-linear. In the following graph the $X$ and $Y$ variables are clearly dependent, but because their relationship is strongly non-linear, their correlation is close to zero.


  • There is a simple geometric interpretation of correlation. In the following analysis I will assume that $A$ and $B$ have expected value 0 in order to make the math easier, but the results still hold even if this is not the case. Let's say we take $n$ samples of $A$ and $n$ samples of $B$, and then form vectors $\mathbf{a}$ and $\mathbf{b}$ from the samples of $A$ and $B$. The empirical variance of $A$ is then $\frac{1}{n}\sum_{i=1}^n a_i^2 = \frac{\|\mathbf{a}\|^2}{n}$. Similarly, the empirical covariance of $A$ and $B$ is $\frac{1}{n}\sum_{i=1}^n a_i b_i = \frac{\mathbf{a} \cdot \mathbf{b}}{n}$. Therefore, since (by the definition of the dot product) $\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|} = \frac{\mathbf{a} \cdot \mathbf{b}/n}{\sqrt{\|\mathbf{a}\|^2/n}\sqrt{\|\mathbf{b}\|^2/n}}$, the cosine of the angle between $\mathbf{a}$ and $\mathbf{b}$ is equivalent to the correlation of $A$ and $B$.

Angle Between 2 vectors

It's worthwhile to note that this property is useful for reasoning about the bounds of correlation between a set of vectors. If vector $\mathbf{a}$ is correlated with vector $\mathbf{b}$, and $\mathbf{b}$ is correlated with another vector $\mathbf{c}$, there are geometric restrictions on the set of possible correlations between $\mathbf{a}$ and $\mathbf{c}$.

  • Correlation is invariant to (positive) scaling and shift. That is, for constants $a, c > 0$ and any $b, d$, $\rho(aA + b,\, cB + d) = \rho(A, B)$. This property is a double edged sword: correlation can detect a relationship between variables on very different scales, but it is insensitive to changes in the distributions of variables if those changes only affect scale and shift.
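The three properties above can all be checked numerically. Here is a small NumPy sketch (the simulated data is my own) demonstrating, in order: zero correlation under a perfect but non-linear dependence, the cosine-of-the-angle interpretation, and invariance to positive scaling and shift:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

# 1. Correlation only sees linear structure: y = x**2 is fully determined
#    by x, yet (for x symmetric around 0) their correlation is ~0.
rho_nonlinear = np.corrcoef(x, x**2)[0, 1]

# 2. Geometric view: after centering, correlation is the cosine of the
#    angle between the sample vectors.
y = 2 * x + rng.normal(size=x.size)
xc, yc = x - x.mean(), y - y.mean()
cosine = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))
rho_xy = np.corrcoef(x, y)[0, 1]

# 3. Invariance to positive scaling and shift.
rho_scaled = np.corrcoef(3 * x + 7, 0.5 * y - 2)[0, 1]

print(round(rho_nonlinear, 2), round(cosine - rho_xy, 10), round(rho_scaled - rho_xy, 10))
```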

Using Correlation as a Performance Metric

Let's say you are performing a regression task (regression in general, not just linear regression). You have some response variable $Y$, some predictor variables $X$, and you're designing a function $f$ such that $f(X)$ approximates $Y$. You want to check how closely $f(X)$ approximates $Y$. Can you use correlation? There are definitely some benefits to this - correlation is on the easy to reason about scale of $-1$ to $1$, and it generally becomes closer to $1$ as $f(X)$ looks more like $Y$. There are also some glaring negatives - the scale of $f(X)$ can be wildly different from that of $Y$ and the correlation can still be large. Let's look at some more useful metrics for evaluating regression performance.
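To make the failure mode concrete, here is a hypothetical example (the data and numbers are made up for illustration) where the predictions are perfectly correlated with the truth but wildly off in scale:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=10.0, scale=2.0, size=1_000)

# "Predictions" perfectly correlated with y, but on a wildly wrong scale.
preds = 100.0 * y + 5.0

rho = np.corrcoef(y, preds)[0, 1]   # 1.0 - correlation says the fit is perfect
mse = np.mean((y - preds) ** 2)     # enormous - the fit is actually terrible

print(round(rho, 6), round(mse, 1))
```

Correlation alone would rate this model perfect; any error-based metric exposes it immediately.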

Mean Square Error

The Mean Square Error (MSE) of our regression is $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^n (y_i - f(x_i))^2$. Does this look familiar? It should - if we have $f(x_i) = \bar{y}$ for all $i$, then this becomes $\frac{1}{n}\sum_{i=1}^n (y_i - \bar{y})^2$, the empirical variance of $Y$. So any function that does better than just predicting the mean should have lower MSE than the variance of $Y$.

Let's say that we're trying to estimate the weight of an object, and $Y$ is measured in kg. Then the MSE of our regression will be measured in kg², which isn't all that easy to reason about. For this reason we often also look at the Root Mean Square Error (RMSE), defined as the square root of the MSE. The RMSE of our regression is a rough estimate, in the original units, of how wrong our regression is on average.
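A quick sketch of the weight example (simulated data, names my own), confirming that the mean-prediction baseline has MSE equal to the variance of $Y$, and that RMSE brings the metric back to kg:

```python
import numpy as np

rng = np.random.default_rng(3)
weights_kg = rng.normal(loc=70.0, scale=10.0, size=5_000)  # true weights, in kg

# Baseline model: always predict the sample mean.
mean_preds = np.full_like(weights_kg, weights_kg.mean())
mse_mean = np.mean((weights_kg - mean_preds) ** 2)  # equals Var(Y), in kg^2
rmse_mean = np.sqrt(mse_mean)                       # back in kg, ~ the std dev

print(round(mse_mean, 1), round(rmse_mean, 1))
```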

The Coefficient of Determination

Like everything in statistics, there are a number of problems with MSE and RMSE. For example, their scales depend on the units of $Y$. This means that we can't easily compare the performance of models across related tasks. For this reason, it seems that we would benefit from defining a unit-invariant metric that scales the MSE by the variance of $Y$. This metric, $R^2 = 1 - \frac{\mathrm{MSE}}{\mathrm{Var}(Y)}$, is the coefficient of determination.

So let's get a sense of the range of $R^2$. It's pretty clear that a model that always predicts the mean of $Y$ will have an MSE equal to $\mathrm{Var}(Y)$ and an $R^2$ of 0. A model that is worse than the mean-prediction model (such as a model that always predicts a number other than the mean) will have a negative $R^2$. A model that predicts $Y$ perfectly will have an MSE of 0 and an $R^2$ of 1.
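These three reference points are easy to verify with a small helper (a sketch of my own, using the $R^2 = 1 - \mathrm{MSE}/\mathrm{Var}(Y)$ definition above):

```python
import numpy as np

def r_squared(y, preds):
    """R^2 = 1 - MSE / Var(Y)."""
    return 1.0 - np.mean((y - preds) ** 2) / np.var(y)

rng = np.random.default_rng(4)
y = rng.normal(loc=5.0, scale=1.0, size=10_000)

r2_mean = r_squared(y, np.full_like(y, y.mean()))  # mean model -> 0
r2_bad = r_squared(y, np.full_like(y, -100.0))     # worse than the mean -> negative
r2_perfect = r_squared(y, y)                       # perfect model -> 1

print(round(r2_mean, 6), round(r2_bad, 1), round(r2_perfect, 6))
```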


So what is the relationship between $\rho$ and $R^2$? It's pretty clear that computing the coefficient of determination is not always as simple as squaring the correlation, since $R^2$ can be less than 0 while $\rho^2$ cannot. However, there are certain conditions under which the squared correlation is equivalent to the coefficient of determination.

For example, let's consider the case where we fit a linear regression to some dataset and compute $\rho$ and $R^2$ between $Y$ and the predicted values $\hat{Y}$ (i.e. the training/in-sample $\rho$ and $R^2$). In this case there are a few nice properties. First, if we use an intercept term we can guarantee that $\bar{\hat{y}} = \bar{y}$ (i.e. the means of the predicted values and the true values are equal). Next, we can decompose the sum of the squared deviations of $Y$ ($SS_{tot} = \sum_i (y_i - \bar{y})^2$, basically $n \cdot \mathrm{Var}(Y)$) into a component that is "explained" by the regression and a component that is "not explained" by the regression. The "explained" component is the sum of the squared deviances of the regression values from the mean ($SS_{reg} = \sum_i (\hat{y}_i - \bar{y})^2$), and the not explained component is the sum of the squared residual values ($SS_{res} = \sum_i (y_i - \hat{y}_i)^2$). Therefore $SS_{tot} = SS_{reg} + SS_{res}$ and $R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$. You can check out the proof of this here.

So why is this fact useful? Well, $R^2 = 1 - \frac{SS_{res}}{SS_{tot}} = \frac{SS_{tot} - SS_{res}}{SS_{tot}} = \frac{SS_{reg}}{SS_{tot}}$. Therefore, under these conditions $R^2$ is equal to the ratio of the variance explained by the regression to the total variance (which is a fact you may have heard out of context).

Now we can prove that the square of the correlation coefficient is equivalent to the ratio of explained variance to the total variance. Let's start with the definition of correlation, writing $e_i = y_i - \hat{y}_i$ for the residuals and using $\bar{\hat{y}} = \bar{y}$:

$$\rho(Y, \hat{Y}) = \frac{\sum_i (y_i - \bar{y})(\hat{y}_i - \bar{y})}{\sqrt{\sum_i (y_i - \bar{y})^2}\sqrt{\sum_i (\hat{y}_i - \bar{y})^2}} = \frac{\sum_i \left[(\hat{y}_i - \bar{y}) + e_i\right](\hat{y}_i - \bar{y})}{\sqrt{SS_{tot}}\sqrt{SS_{reg}}} = \frac{SS_{reg}}{\sqrt{SS_{tot}}\sqrt{SS_{reg}}} = \sqrt{\frac{SS_{reg}}{SS_{tot}}}$$

which is exactly the square root of $\frac{SS_{reg}}{SS_{tot}}$. Note that on the third step we use the fact that the in-sample residuals of a linear regression sum to zero and are orthogonal to the fitted values, so the cross term $\sum_i e_i(\hat{y}_i - \bar{y})$ vanishes.
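The identity is easy to confirm numerically. Below is a minimal sketch (simulated data and names are my own) fitting an ordinary least squares line with an intercept and checking both the sum-of-squares decomposition and $\rho^2 = R^2$ in-sample:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000
x = rng.normal(size=n)
y = 3.0 * x + 1.0 + rng.normal(size=n)

# Ordinary least squares with an intercept term.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ss_tot = np.sum((y - y.mean()) ** 2)
ss_reg = np.sum((y_hat - y.mean()) ** 2)
ss_res = np.sum((y - y_hat) ** 2)

r2 = 1.0 - ss_res / ss_tot
rho = np.corrcoef(y, y_hat)[0, 1]

# In-sample, with an intercept: SS_tot = SS_reg + SS_res and rho^2 == R^2.
print(round(ss_tot - (ss_reg + ss_res), 6), round(rho**2 - r2, 10))
```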

Let's take a step back. We've shown that when we are comparing the predictions of a linear regression model to the truth values over the training data, the square of the correlation is equivalent to the coefficient of determination. What about over the test data? Well, in the case where the features are completely uncorrelated with the response values, the linear regression will end up predicting the mean of the training data. If this is different from the mean of the test data, then the $R^2$ over the test data will be negative.
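A simulated example of that failure (the data-generating choices here are my own): the feature carries no signal, so the fit collapses to roughly the training mean, and a test set with a shifted mean drives $R^2$ well below zero:

```python
import numpy as np

rng = np.random.default_rng(6)

# Feature completely unrelated to the response.
x_train = rng.normal(size=500)
y_train = rng.normal(loc=0.0, scale=1.0, size=500)

# OLS fit: slope ~ 0, intercept ~ training mean.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Test set whose mean differs from the training mean.
x_test = rng.normal(size=500)
y_test = rng.normal(loc=5.0, scale=1.0, size=500)

preds = slope * x_test + intercept  # ~ the training mean everywhere
r2_test = 1.0 - np.mean((y_test - preds) ** 2) / np.var(y_test)

print(round(r2_test, 1))
```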

In fact, the square of the correlation coefficient is generally equal to the coefficient of determination whenever there is no scaling or shifting of $f(X)$ that can improve the fit of $f(X)$ to the data. For this reason the differential between the square of the correlation coefficient and the coefficient of determination is a representation of how poorly scaled or improperly shifted the predictions are with respect to $Y$.
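One way to see this diagnostic in action (a sketch under my own simulation choices): predictions with the right shape but a bad scale and shift have $\rho^2$ near 1 and a very poor $R^2$, and re-fitting a scale and shift (here via OLS of $Y$ on the predictions, my own choice of calibration) closes the gap:

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(size=2_000)

# Predictions with the right "shape" but a bad scale and shift.
preds = 2.0 * y + 3.0 + rng.normal(scale=0.1, size=y.size)

rho_sq = np.corrcoef(y, preds)[0, 1] ** 2         # near 1
r2 = 1.0 - np.mean((y - preds) ** 2) / np.var(y)  # far below rho_sq

# Calibrate with the best linear rescaling/shift of the predictions.
slope, intercept = np.polyfit(preds, y, 1)
calibrated = slope * preds + intercept
r2_cal = 1.0 - np.mean((y - calibrated) ** 2) / np.var(y)

print(round(rho_sq, 3), round(r2, 2), round(r2_cal, 3))
```

After calibration the in-sample $R^2$ matches $\rho^2$, confirming that the original gap was entirely due to scale and shift.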


$\rho$, MSE/RMSE and $R^2$ are all useful metrics in a variety of situations. Generally, $\rho$ is useful for picking up on the linear relationship between variables, MSE/RMSE are useful for quantifying regression performance, and $R^2$ is a convenient rescaling of MSE that is unit invariant. And remember, when somebody quotes an $R^2$ number for you, make sure to ask whether it's $R^2$ or the square of $\rho$.