Video 2 of 6 of the Understanding Statistics guide

Common Errors 27:29

Antony Davies highlights a few common errors people make when interpreting statistical measures, including spurious relationships, reverse causality, third variable effects, and aggregation bias.

Transcript

Antony Davies: So one of the things that people like to talk about, people who maybe don’t have much familiarity with statistics, but enough to kind of feel their way around, they’ll talk about correlation, right? And correlation, we understand to be the degree to which two things move together. So when economists say things like, for example, “Increased trade is good for the economy.” Increased [00:00:30] trade is associated with increased household income. It’s associated with less unemployment. It’s associated with less poverty. One of the things people say is, “Well, what’s the correlation between those two things?” And correlation is kind of a … it’s a good and a bad thing. It’s good in the sense that it’s this nice clean number. It goes from 0 to 1, right? Or negative 1 to positive 1, depending on what kind of correlation you’re using, but it’s this nice little compact thing, where we understand [00:01:00] that 0 means these two things aren’t correlated, and 1 means they’re very highly correlated, right? So people have this kind of impression that the closer your correlation gets to 1, the more correct your statement is, whatever the statement is you just made.
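For readers following along at a keyboard, a correlation like the one Davies describes can be computed in a few lines of Python. This is a minimal sketch, not part of the lecture; the trade and income figures are made-up placeholders, and the result always lands between -1 and +1:

```python
import numpy as np

# Made-up illustrative numbers (not data from the lecture): a trade measure and
# household income for six hypothetical years.
trade = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
income = np.array([10.0, 12.0, 11.0, 15.0, 16.0, 18.0])

# Pearson correlation: covariance divided by the product of the standard deviations.
# The result is always between -1 and +1.
r = np.corrcoef(trade, income)[0, 1]
print(f"correlation = {r:+.2f}")
```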

So there are some things we have to be careful about when it comes to correlation. And one of the things is what we call a spurious relationship. A spurious relationship is a relationship that, statistically speaking, [00:01:30] looks like a nice strong relationship. Here are two things and they seem to move together, and they’ve got a nice high correlation. That’s all well and good, but what you’re seeing is simply due to random chance. There isn’t any real relationship here. It’s just randomness that you’re seeing. And we call that spuriousness or a spurious relationship.

A simple case in point. You flip a coin. It comes up heads, and you look on the news and you see that the stock market went up. [00:02:00] Next day, you flip a coin. It comes up heads. You look at the news, you see the stock market went up. Next day, you flip a coin. It comes up tails and you see the stock market went down. And you say, “Good God! I’ve got the magic coin that predicts the stock market, right? Every time it’s heads the stock market goes up. Every time it’s tails, the stock market goes down.”

Now the fact is we all know that that coin has no relationship whatsoever to the stock market. What’s going on is that by random chance, you happened to match up the coin with the stock [00:02:30] market. That’s a spurious relationship.
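To see how easily chance produces a “magic coin,” here is a small simulation sketch (my own illustration, assuming a fair coin and 50/50 up or down market days): roughly one coin in eight will match three market moves in a row purely by luck.

```python
import random

random.seed(0)

trials = 100_000
matches = 0
for _ in range(trials):
    coin = [random.choice("HT") for _ in range(3)]    # three coin flips
    market = [random.choice("UD") for _ in range(3)]  # three market days, assumed 50/50 up/down
    # A "magic coin": heads lines up with an up day and tails with a down day, all three days.
    if all((c == "H") == (m == "U") for c, m in zip(coin, market)):
        matches += 1

print(matches / trials)  # close to 1/8 = 0.125: chance alone produces the pattern fairly often
```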

Now the unfortunate thing about spurious relationships is that by random chance, some of them are going to persist for a long time. So we have an example here. This is actual data. Number of sunspots in the current year … This is 1960 to 1980.

So you see the number of sunspots in the current year and you see the number of Republicans in the Senate one year later. And what [00:03:00] you see, and this is going over a 20-year period, is that as the number of sunspots declines, one year later the number of Republicans in the Senate declines. The sunspots go up, and one year later the number of Republicans in the Senate goes up, right? And somebody might look at this and say, “Well, yeah. This has gotta be a spurious relationship. There is no relationship between sunspots and the number of Republicans. It’s just random chance.” That’s true. Except that this thing persisted for 20 years. [00:03:30] 20 years? And it leaves you wondering, “Well, look. Even though I know, rationally, there is no relationship between sunspots and Republicans, this apparent relationship is showing up in the data. Maybe I could use it to predict election outcomes.” So you’re kind of making the statement, “I know these two things aren’t related, but I also know that they’re showing up being correlated in the data, so maybe I can make use of this.”

[00:04:00] Here’s the problem. It took 20 years for this data to accumulate, so this nice picture that you’re seeing, you would not have seen this picture in 1960. You wouldn’t have seen it in 1970. It’s not until 1980 that there’s enough data there to construct this picture, and you look at it and you say, “Oh, my God! Look at this thing. Let’s start using sunspots to predict Republicans.” Now the problem is that no [00:04:30] spurious relationship is guaranteed to persist. If you move time forward and look at the next 15, 20 years, what you find is that the relationship disappeared. So again we see the number of sunspots going up and down, and one year later the number of Republicans in the Senate going up and down, but they’re no longer moving in lockstep with the sunspots. So this relationship … This is the problem with a spurious relationship. Because it’s due to random [00:05:00] chance, even though it’s there and it exists, there’s nothing to guarantee it’s going to be there tomorrow. And so if you start to base decisions on this thing, you’re basing decisions on a relationship that could disappear at any moment. And this is what would have happened if we had started making our election outcome predictions based on sunspots: we would have found that we had very bad predictions, because they were based on this spurious relationship that just disappeared.
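The same trap can be reproduced with simulated data. In the sketch below (random stand-in series, not the actual sunspot or Senate numbers), cherry-picking the random series that best tracks a target over 20 observations produces an impressive in-sample correlation that collapses out of sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the real series: 40 "years" of a target (think Senate seats)
# and 1,000 completely unrelated random series (think candidate predictors).
target = rng.normal(size=40)
candidates = rng.normal(size=(1000, 40))

# Cherry-pick the candidate that best tracks the target over the first 20 years.
in_sample = np.array([np.corrcoef(c[:20], target[:20])[0, 1] for c in candidates])
best = candidates[np.argmax(in_sample)]

print("in-sample correlation:    ", round(float(np.corrcoef(best[:20], target[:20])[0, 1]), 2))
print("out-of-sample correlation:", round(float(np.corrcoef(best[20:], target[20:])[0, 1]), 2))
# The first number looks impressive; the second typically collapses toward zero.
```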

[00:05:30] So one problem with correlation is we have to be careful about spurious relationships. Things that appear to be correlated and, in fact, are only correlated by random chance. Another thing we have to be careful about are what we call third variable effects. This is two things that are correlated, and they’re correlated because of some real underlying phenomenon. It’s not random chance. However, [00:06:00] the underlying phenomenon is not that the two things are mutually causal.

I’ll give you an example. If you look around at population data in the United States, you will find that communities that have more churches also experience more crime. The two are correlated. More churches go with more crime. And, I can tell you that the relationship is not spurious. It’s not random chance. [00:06:30] There’s actually a relationship here. But here’s where we go wrong. We go wrong if we think that, just because churches and crime are correlated, churches must cause crime, or maybe crime causes churches, right? What’s going on is what we call a third variable effect. And a third variable effect means that there is some other phenomenon that is correlated with these two things. And so when this other phenomenon does what it does, these two things move, [00:07:00] and appear to be moving together, and appear to be related when, in fact, they aren’t. The third variable here is population size. The more people you get, the more crime you’ll get because you have more people. But the more people you get, the more churches you’ll get because you have more people.
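A simulation makes the third variable effect concrete. In this sketch (all numbers invented: the population sizes, church counts, and crime counts are made up), population drives both churches and crime, so the raw churches–crime correlation is strong, but it roughly vanishes once the population effect is removed from both series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented communities: population drives both the number of churches and the
# amount of crime, but churches and crime have no direct effect on each other.
population = rng.uniform(1_000, 1_000_000, size=500)
churches = 0.002 * population + rng.normal(0, 200, size=500)
crime = 0.030 * population + rng.normal(0, 3_000, size=500)

print("raw correlation(churches, crime):",
      round(float(np.corrcoef(churches, crime)[0, 1]), 2))  # strongly positive

def residuals(y, x):
    """What is left of y after removing a straight-line effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlate the leftovers once the population effect is stripped from both series.
r_partial = np.corrcoef(residuals(churches, population),
                        residuals(crime, population))[0, 1]
print("after controlling for population:", round(float(r_partial), 2))  # near zero
```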

There’s a beautiful example of this. This happened, oh maybe, I wanna say 15, 20 years ago. A major soft drink manufacturer was rolling out product to try to expand market share. And [00:07:30] they were rolling out product in India, and at the time it was a relatively new thing in India to have vending machines. So you’re selling … this manufacturer was selling the soft drink in the vending machines. An interesting thing happened. This company introduced vending machines in a city in India, and a couple of weeks later there’s an outbreak of hepatitis. And they introduced the vending machines in another city in India, and a couple of weeks later there’s an outbreak of hepatitis. [00:08:00] And then a third city in India, and a couple of weeks later there’s an outbreak of hepatitis. And this kept going on, and it got to the point that health officials were becoming quite concerned that this company’s product was tainted in some fashion that was causing hepatitis. And this is a good example of a correlation. There’s a very tight correlation: the company puts in a vending machine, and two weeks later there’s hepatitis, right?

What was going on, interestingly, was a third variable effect. That is, these two things were indeed correlated. [00:08:30] The company’s product and the hepatitis. They were correlated, but they weren’t causal. One wasn’t causing the other. What was going on was this third variable effect, that the children largely couldn’t afford to buy a can of this product, so they would pool their coins, buy one can and share it amongst themselves. It was the sharing of the product that was causing the hepatitis, right? [00:09:00] It’s a third variable effect.

So the warning here is, with correlation, two things. Just because you see a tight correlation doesn’t mean that there’s actually a relationship. It could simply be random chance. We call that spuriousness. Furthermore, just because you see a correlation and there is a relationship there doesn’t mean that the relationship is causal. It could be a third variable effect: these two things are correlated but, actually, neither is causing the other. It’s a third variable that’s causing both of them.

[00:09:30] Another thing we have to be careful of, when it comes to correlation, is reverse causality. A good example of this is, you know, every morning you set your alarm and every morning the sun rises. There is a causal relationship here. It’s not spurious. And it’s not a third variable effect. The causal relationship is between these two things. But, just because you set the alarm and [00:10:00] the sun rises doesn’t mean that your setting your alarm causes the sun to rise. In fact, the causality moves in the other direction. Because you anticipate the sun rising at a certain time, you set your alarm appropriately. So this is one more thing we have to be careful of when we talk about correlation. Just because we see a relationship, and it’s not spurious, it’s real, and just because it actually is causal … we’ve got one thing causing the other … doesn’t mean that the causality runs in the direction that we think it does.

So one interesting [00:10:30] set of correlations to look at is the relationship between economic freedom and socio-economic outcomes. And I’m showing you here the relationship between economic freedom and the global peace index. So every dot is a country and they’re measured horizontally by economic freedom as measured by the Fraser Institute. So to the right means that the country’s experienced more economic freedom. That is, the government is less intrusive into people’s economic [00:11:00] decisions. Taxes are lower, regulation is less, this sort of thing. To the left is less economic freedom. So the government is more intrusive in people’s economic decisions.

Up and down is the global peace index, so up means the country is less peaceful. And it’s not just a matter of being less peaceful with regard to neighboring states, but also the country’s less peaceful to its own citizens. So if they, you know, use violence to put down protests, this sort of thing, the country would score [00:11:30] high on this peace index. And by high, it’s an inverse scale, so high means less peaceful, low means more peaceful.

And what you see here is an apparent correlation. There are clearly exceptions, but remember this is a stochastic relationship. Exceptions are to be expected. What’s interesting is the trend. On average, it appears that as countries are more economically free, they also score better on the [00:12:00] global peace index.

Interestingly, you find this same kind of phenomenon, correlations of economic freedom, with all sorts of other interesting things. Countries that are more economically free tend to have, on average, lower poverty rates than countries that are less economically free. And this is not just true for the rich countries. It’s also true for the poor countries. You know, because you might say, “Well, yes. Rich countries tend to be economically free because we have [00:12:30] the leisure to be concerned with economic freedom and to tell the government to stay out of our lives. We wanna do what we want to do. Oh, and by the way, because we’re rich, we’re gonna have less child labor. We’re gonna have less poverty.” Okay. Fine. But if you look at the poor countries, poor countries that are economically free, although they have very high child labor rates, and they have very high poverty rates, those poverty rates and child labor rates are lower for the poor economically free countries than they are for [00:13:00] the poor economically unfree countries.

So, no matter how you slice it, you see this recurring theme that countries that are more economically free, they scored better for child labor, they scored better for poverty. Interestingly, they scored better for environmental measures like pollution, deforestation. You see, in this data, that they scored better for peace. They also scored better for income, which is kind of to be expected, right? Economically free countries, you’d think of as the more developed [00:13:30] countries, which also have high incomes. But if you look at the poor countries, poor countries that are economically free, their incomes are low, but they’re higher than they are for poor countries that are economically unfree.

Interestingly, you see the same thing with inequality. Countries that are more economically free have less income inequality than do countries that are less economically free.

So there’s interesting correlations here. And you can … [00:14:00] All the arguments still apply. How do we know these relationships aren’t spurious? How do we know that there’s not a third variable effect?

These are all very good questions, and there are economists who look into this data and address them.

What is interesting to me is that no matter how you slice the data, whether you’re looking at differences among countries, or differences among states in the United States, or differences among cities, or differences across time, [00:14:30] the same pattern keeps emerging again and again and again: you get better socioeconomic outcomes in the countries, cities, and states that are more economically free.

Now, one possible argument here is that, well, economic freedom causes better outcomes, ‘cause we’re seeing this correlation. And of course you can’t say that, because we don’t know: [00:15:00] is it reverse causality? Is it that countries that are richer, countries that have cleaner environments, countries that have less inequality, demand more economic freedom, right? Does the causality go in the other direction? Is there a third variable effect, something that we haven’t thought of, causing both these things, the good outcomes and the economic freedom? And I don’t know the answer to that.

So what I cannot say is that this data [00:15:30] indicates that economic freedom causes good things. What I can say, though, is that every way you look at it you see the correlation going in that direction: more economic freedom correlated with good outcomes. So what you can say is that economic freedom does not cause badness. That is, correlation does not imply causation, but the absence of correlation [00:16:00] does imply the absence of causation. Because I don’t see economic freedom correlated with bad outcomes, I can conclude that economic freedom does not cause the bad outcomes.

Now, there’s a technical footnote here that goes along the lines of, well, it is possible that there could be some third variable that is negatively correlated with economic freedom and positively correlated with this outcome, [00:16:30] and if the magnitude of that effect is large enough to outweigh the magnitude of the effect of economic freedom, then, in fact, the correlation could go in the other direction; we’re just not seeing it here. And I’m not going to go into that argument, largely because it’s highly technical, but I will tell you this. It is an argument, but there’s a tremendously high bar for that argument to get over to become meaningful.

Generally speaking, [00:17:00] generally speaking, you’re safe to make the statement that correlation does not imply causation, but the absence of correlation does imply the absence of causation.

Student: The Gini coefficient, would you say it’s an accurate measure for income inequality?

Antony Davies: The Gini coefficient question is a good one. It leads into the next topic. There are a variety of [00:17:30] economic problems, in my opinion, with the whole idea of inequality. It ignores half of the economy. We only look at … When we look at transactions, and we think about inequality, we look at the people who are accumulating dollars. We don’t look at the people who are accumulating goods and services in exchange for those dollars, right? But those are economic issues. There are some statistical issues with the idea of inequality. Put aside what particular measure you use; [00:18:00] just the concept of inequality raises a problem, at least statistically. And that’s called aggregation bias. Aggregation bias occurs when you take a whole bunch of data and you average pieces of it together, and you then look at those pieces and draw some conclusion about the individual people on the basis of the averages. And sometimes, not always, [00:18:30] but sometimes, the conclusions you draw can be faulty.

I’ll give you a good example. Let’s suppose we’re going to calculate income inequality for a group of people, and we ask everyone to come into the room and we say, “What is your income?” And we’ve got … You’ve just started your career, so your income is very low. You’re a little bit further on in your career. Your income is higher. I’m further. Mine’s [00:19:00] higher. These two gentlemen are coming close to retirement. Their incomes are quite high. And if we calculate inequality for this table, we get some decent inequality from low incomes to very high incomes.

So we go away and we reconvene 10 years later, and 10 years later, you two are sitting in this position. You’re mid-career. Your incomes are moderate. I’m sitting over there. I’m close to retirement. My income’s quite high. These two gentlemen have retired. [00:19:30] They’re gone. And replacing you two are two young people who’ve just entered the job market with low incomes. And if, again, we calculate inequality, again we get this decent inequality. We got poor people here. We got rich people here.

Well, here’s the interesting thing. If this is how we progress around the table, people coming into the job market moving up, middle career, retirement, go live in Florida. Over the course of our careers, every one of us earns exactly the same income. [00:20:00] So over the course of our careers, we have perfect equality, even though every time we look we see inequality.
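The table story can be written down in a few lines. The sketch below uses an assumed pay schedule and a simple Gini helper of my own (neither is from the lecture): every yearly snapshot shows inequality, yet everyone’s lifetime income is identical.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: mean absolute difference between incomes / (2 x mean income)."""
    x = np.asarray(incomes, dtype=float)
    n = len(x)
    diff_sum = np.abs(x[:, None] - x[None, :]).sum()
    return diff_sum / (2 * n * n * x.mean())

# Assumed career pay schedule for the six seats at the table (thousands of dollars).
stage_income = [30, 50, 70, 90, 110, 130]

# Any single year's snapshot has one person at each stage, so it shows inequality...
print("snapshot Gini:", round(gini(stage_income), 2))      # well above 0

# ...but over a whole career, every person passes through every stage and earns the same total.
lifetime_totals = [sum(stage_income)] * 6
print("lifetime Gini:", round(gini(lifetime_totals), 2))   # exactly 0
```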

Now, I’m not making the argument that there is no inequality in the world. The argument I’m making is when we go to measure inequality we take snapshots of the world, like looking at this table and saying, “Okay. What’s the difference in our incomes?” And we can, in doing that, miss large components [00:20:30] of equality.

Good case in point. We talk a lot in this country, when we talk about inequality, we’ll say things like, “In 2000, the poorest 20% of Americans earned 3.8% of all the income, and in 2007, the poorest 20% of Americans earned 3.4% of all the income.” So you look at those two things, you say, “Well, look. The poor Americans. Their [00:21:00] lot has not improved. In fact, it’s worsened a bit over these years. They used to get 3.8% of all the income, now they earn 3.4% of all the income.” And so we’re concerned about that and we talk about this. We say, “The stagnation of poverty. There are people here that are trapped and they’re always there, and it’s always 20% of the population earning 3% of the income, whatever it is.” That’s, at least in part, at least in part, an aggregation bias. We’ve taken a bunch of [00:21:30] individuals and we’ve put them together into a single measure. And we look at that measure and we assume that what is true of the measure is true of the individuals. That’s not necessarily the case.

I’ll give you another example. In 2000, the youngest 20% of Americans were 7.1 years old. In 2010, the youngest 20% of Americans were 6.9 years old. [00:22:00] Now, if you apply the same logic to these people’s ages that we did to their incomes, you would conclude that these young Americans, not only did they not get older, they actually got younger over the course of 10 years, right? Their average age was 7.1. Now their average age is 6.9. Of course, what’s going on here is, and it’s interesting to think about, because nobody got younger. We all got older, and yet the youngest 20% of Americans have a younger age. How is this happening? Of course, [00:22:30] what’s happening is people are aging over the course of this decade, and they’re no longer part of the youngest 20%. And new people are being born, and they’re born into this youngest 20%.

So when we talk about the youngest 20%, that’s an aggregation. And we compare the youngest 20% in 2000 to the youngest 20% in 2010. They’re different sets of people. Some are the same, right? Some are the same. But a lot of them are different. New people have come in, old people have gone out. Just like in the example of [00:23:00] the table. We come back here in 10 years, these guys are gone, I’m moved over there, you’re over here, and we got two new people. It’s a different set of people.
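Here is a hedged numerical sketch of that age example (the population and birth numbers are invented): everyone alive in 2000 is ten years older in 2010, yet the average age of the youngest 20% can still fall, because newborns have replaced most of that group.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented population for the year 2000: ages spread evenly from 0 to 79.
ages_2000 = rng.uniform(0, 80, size=100_000)

# Ten years later, everyone from 2000 is ten years older...
survivors_2010 = ages_2000 + 10
# ...and children born during the decade (an assumed number) enter at ages 0 to 9.
newborns = rng.uniform(0, 10, size=25_000)
ages_2010 = np.concatenate([survivors_2010, newborns])

def mean_age_of_youngest_20pct(ages):
    cutoff = np.quantile(ages, 0.20)
    return ages[ages <= cutoff].mean()

print("2000:", round(mean_age_of_youngest_20pct(ages_2000), 1))
print("2010:", round(mean_age_of_youngest_20pct(ages_2010), 1))
# Nobody got younger, yet the 2010 figure comes out lower, because the youngest 20%
# in 2010 is largely a different set of people.
```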

Similarly, when we talk about the poorest Americans in 2000, the poorest Americans in 2007, some of those people are still there. Some of the people who constituted the poorest Americans are still amongst the poorest Americans in 2007, but also a lot of them are different. Some of these people who were amongst the poorest in 2000, now have higher incomes. They’re no longer amongst the poorest. [00:23:30] We’ve had immigrants. We’ve had young people enter the workforce, and they’re now amongst the poorest Americans in 2007. They weren’t there before. So at least in part, it’s a different set of people.

Moral of the story is be careful, be careful when you look at aggregated data, averages of groups of people. What’s true of the average, what’s true of the aggregation, is not necessarily true of the individuals that comprise the aggregation.

[00:24:00] A beautiful example of this is this picture. So we hear the thing about wage stagnation amongst the middle class. And what you’re seeing here, the blue line is median worker compensation. So, just to be clear about this: compensation means people’s incomes plus employer-paid benefits. So everything that you get as a result of your job.

Median worker means we’ve lined up all the American workers from [00:24:30] poorest to richest and we’re taking the guy in the middle, and the 2014 dollars means that it’s adjusted for inflation. So what you’re seeing, and there’s nothing special about the years. These are all the years that were readily available from the Census Bureau at the time. What you’re seeing are the years 1992 through 2013, and the blue line is pretty flat. This is the story. The blue line is what leads us to this conclusion that median worker compensation hasn’t changed over the past, you know, [00:25:00] 20 years.

Now, if you look at the red line, this is a little bit different. This is compensation over the median career, and you have to get your head around what’s going on here. Picture the red line as follows: In 1992, we asked … not we, the Census Bureau asked a set of workers, “What’s your income?” and then we picked the median one from this set of workers. [00:25:30] And in 1992, there are people who were just joining the labor market, so think 20-year-olds, 22-year-olds, something like that. The median income of these 20-, 22-year-olds is what you see on the left side of that red line.

Then each year, the Census Bureau goes back and asks those same people, “What’s your income?” And what you see is, over the course of their careers, their income is rising and rising and rising. It kind of plateaus around 2007, [00:26:00] right? But it’s certainly not the story of stagnation. That red line represents an actual person’s experience going through the course of his career. He starts out low, he earns more and, eventually, he ends up at some higher level of income. That’s a very different story than the blue line.

The problem with the blue line is that it suffers from aggregation bias. With the blue line, what you’re seeing is in 1992, [00:26:30] the median income for all the workers. In 2013, the median income for all the workers. And what we’re missing is the fact that the group of workers in 2013 is different from the group of workers in 1992. So, although each worker’s income, maybe not each worker, but at least the median worker’s income was rising over time, the median for all the workers remains constant. So, in a perverse way, the statement “median worker incomes have stagnated,” in English, is correct, [00:27:00] but it does not characterize what’s actually going on. What’s actually going on is that the workers are earning more money over time.
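A toy model shows how both lines can be right at once. In the sketch below (an assumed pay schedule, not the Census data), every worker’s pay rises with experience, yet the cross-sectional median is identical in 1992 and 2013 because new low-paid entrants keep joining the pool:

```python
import numpy as np

# Assumed pay schedule: every worker starts at 30 (thousands of dollars) and
# gains 1.5 per year of experience, regardless of calendar year.
def pay(experience):
    return 30 + 1.5 * experience

# The blue-line view: in any snapshot year, the workforce holds people with
# 0 to 39 years of experience, so the cross-sectional median never moves.
for snapshot_year in (1992, 2013):
    workforce = [pay(exp) for exp in range(40)]
    print(snapshot_year, "median pay across all workers:", np.median(workforce))

# The red-line view: follow the cohort that entered in 1992, and its own pay rises.
print("1992 entrant's pay in 1992:", pay(0))
print("1992 entrant's pay in 2013:", pay(2013 - 1992))
```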