Nate Silver said there was a 29% chance Trump would win. Most people interpreted that as “Trump probably won’t win” and got shocked when he did. What was the percent attached to your “coronavirus probably won’t be a disaster” prediction? Was it also 29%? 20%? 10%? Are you sure you want to go lower than 10%? Wuhan was already under total lockdown, they didn’t even have space to bury all the bodies, and you’re saying that there was less than 10% odds that it would be a problem anywhere else? I hear people say there’s a 12 – 15% chance that future civilizations will resurrect your frozen brain, surely the risk of coronavirus was higher than that?
And if the risk was 10%, shouldn’t that have been the headline? “TEN PERCENT CHANCE THAT THERE IS ABOUT TO BE A PANDEMIC THAT DEVASTATES THE GLOBAL ECONOMY, KILLS HUNDREDS OF THOUSANDS OF PEOPLE, AND PREVENTS YOU FROM LEAVING YOUR HOUSE FOR MONTHS”? Isn’t that a better headline than “Coronavirus panic sells as alarmist information spreads on social media”? But that’s the headline you could have written if your odds were ten percent!
My cynical view is that the typical reader of the NYT or the WaPo does not notice the inconsistency between how those papers treated the virus in February and how they treat it now. The papers consistently viewed President Trump as wrong, and that is the consistency that their readers care about.
But forget the complaints about media. This is a much bigger issue with human nature. Scott’s basic point is that people tend to treat low-probability events as if they could not possibly occur. Scott points out that the anti-Trump media were far from the only virus denialists back in February. The stock market also behaved like a virus denialist.
Somehow, we seem to be hardwired to think in binary terms–either we believe something will happen or we believe it won’t happen. Notice that we have understood formal binary logic since Aristotle, but, according to many accounts, formal probability theory had to wait for Pascal in the 1600s.
You may notice that when I illustrate probability on this blog, I try to avoid using decimals. Instead, I say “out of 10,000 people. . .” That is because I noticed when teaching high school students that they grasped probability much more quickly if the examples used whole numbers.
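For what it’s worth, here is a minimal sketch (in Python) of that whole-number framing; the 0.3% figure is just a made-up example risk, not anything from the post:

```python
# A tiny illustration of the whole-number framing described above.
# The 0.3% figure is a made-up example risk.

def out_of_10000(probability: float) -> str:
    """Restate a decimal probability as 'N out of 10,000 people'."""
    return f"{round(probability * 10_000)} out of 10,000 people"

print(out_of_10000(0.003))   # -> "30 out of 10,000 people"
print(out_of_10000(0.29))    # -> "2900 out of 10,000 people"
```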
Most people are concrete thinkers. For a concrete thinker, an object is either there or it isn’t there. Probabilistic reasoning is abstract, and that makes it harder.
Casual observation suggests:
People who completed courses in Stats don’t retain or apply the concepts unless they use them regularly at work.
And many people who do use Stats at work, nonetheless don’t readily apply the concepts in the wild. (See Bryan Caplan on limits to ‘transfer of learning.’)
We focus too much on probability.
The consequence of the event matters more than its probability: Better to mistake a rock for a bear than to mistake a bear for a rock.
I think our minds are built to be paranoid. We are the descendants of humans who were paranoid. The ones who weren’t as cautious died.
But schools tell children paranoia is irrational and has no upside.
“Scott’s basic point is that people tend to treat low-probability events as if they could not possibly occur”
But the opposite is also entirely true…people treat low-probability events as if they will occur.
Remember in the 90s when people refused to visit places like Compton because they vastly overestimated the likelihood of being the victim of a drive-by?
Time to do a large-scale helicopter drop of Kahneman books?
And more relevant to the present situation:
Individuals vastly overestimate:
1) their likelihood of contracting the virus
2) their likelihood of contracting and dying from the virus
To me, the common denominator in all of this is that we humans, by default, are not good with probabilities; probabilistic reasoning is totally at odds with the way we are wired. And, because of this, the media, Twitter, etc. play a huge role in influencing what we believe is probable vs. not probable.
This kind of analysis vastly underrates the risks:
1. I really have no idea what my likelihood of contracting or dying from the virus is. And neither do you.
2. Dying isn’t the only cost the virus imposes. My chance of dying might be acceptably low, but the chance of my elderly father with serious heart problems dying is alarmingly high. So if I ever want to be in the same room with my dad again, I have to be pretty careful about whether I’m exposed or sick, quite independent of whether it will lead to my death.
3. The network effects here are huge. Most people know and care about people who are high risk in some way. Even if you don’t care about them, having, say, a colleague get sick and die at work is a bad thing that imposes a cost.
If you read my post carefully, you will note that I used the word “individuals.”
I understand the network effects, but please don’t engage in the fallacy of decomposition – what is good for the group is not the same as what is good for the individual.
There are group risks and individual risks. They aren’t the same.
The fallacy of decomposition is completely irrelevant to what I just said.
You asserted that people overestimate their individual probabilities of dying and thus people must be bad at probabilistic reasoning.
I assert this is not a reasonable conclusion because we don’t (or shouldn’t yet) have confidence in the numbers, and because my individual welfare is affected in many ways beyond whether I get sick and die.
That is, people could be really good at probability and still not come to an obvious answer to the problem.
Hint: My 6 yo is not home from school because she faces any significant risk from the virus.
This has been known for quite some time. She’s home for other reasons.
On the contrary, your six year old faces a lot of risks and potential costs due to the virus beyond simply catching it and dying.
Pretty much everyone does, even as a completely individual matter.
I’m only aware of one justification for the lockdowns: to slow the spread of the virus so that the health care system doesn’t become overwhelmed.
If you’re aware of others, then please let me know.
My 6 yo is at a low risk of ever having any severe health problems due to the virus. Again, she’s at home solely due to the reason above.
I think your points 1 and 2 are correct, but not your conclusion. People are vastly overestimating the probabilities, not because they are bad at probabilities but because we have little to estimate them on. We have little frequency or other experiential information on the question of a pandemic in the West; the last few were a big deal in China etc. but nothing here. Then people actually start getting sick and dying here, and that is new.
Add to that the social primate instinct to worry about what others worry about, plus a media whose role is showing us scary things, and you have a new Summer of the Shark, without all the information about how many people were exposed to sharks and how many died, etc.
My predictions that the virus will be minimal in direct effects were not based on virus probabilities, but just on my experiences with the media. I think everyone would do well to ignore or discount everything the media reports on for more than a day, especially if it is scary, while paying attention to things that stop getting reported on. I am not prepared to blame people’s understanding of probabilities for that, however.
“Somehow, we seem to be hardwired to think in binary terms–either we believe something will happen or we believe it won’t happen.”
It’s not that. Plenty of people of ordinary intelligence are really good at reasoning probabilistically when they care about the results and get some experience and practice. One sees it all the time in games of mixed chance and skill, or when people keep track of sports stats. Of course, life is a gamble, full of all kinds of needs to take chances and act under uncertainty, and hedge one’s bets to protect against unacceptably bad outcomes.
The issue is that our decisions are often binary, but inferring underlying understanding based on that is not right. “Should I carry an umbrella with me to work today?” Well you can either always carry one, never carry one, be random about it, or have some “binary threshold” above which you flip your choice. The last option is what makes most sense, and it’s what people do.
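As a rough illustration of that last option, here is a minimal sketch of an expected-cost threshold rule; the rain probabilities and the costs of getting wet versus carrying are arbitrary made-up numbers, not anything from the comment above:

```python
# A minimal sketch of a "binary threshold" decision rule.
# The costs below are arbitrary illustrative numbers.

def should_carry_umbrella(p_rain: float,
                          cost_of_getting_wet: float = 10.0,
                          cost_of_carrying: float = 1.0) -> bool:
    """Carry the umbrella when the expected cost of getting wet
    exceeds the certain but small cost of lugging it around."""
    return p_rain * cost_of_getting_wet > cost_of_carrying

# With these numbers the threshold sits at about 10%:
for p in (0.05, 0.10, 0.30):
    print(f"p(rain) = {p:.0%}: carry = {should_carry_umbrella(p)}")
```

The point is only that a continuous probability gets collapsed into a yes/no action at some cutoff, which is what the comment means by a binary threshold.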
Not that people are consistent about it, but “lottery ticket mentality” couldn’t exist if people really interpreted very low probabilities as “won’t happen”, instead of in expected value terms – even if they are being delusional about that expected value compared to costs of the attempt (though the personal benefit in the joy of hoping and dreaming about winning is too often overlooked).
I think the issue here is one of “translation”. Some people are bilingual in a deeply fluent way. There are people who know the table of correspondence for basic vocabulary, then people who can communicate the gist of sentences in a conversation, and then there is a top level of people who can translate poetry (though beauty and fidelity rarely go together), or even ancient poetry, keeping the rhyme and meter. They do this by being skilled literary artists in both languages, knowing how each can be used by its masters to deploy subtle techniques that convey meanings which, to the extent they are genuinely shared by different literary cultures at all, are carried by different combinations of imagery and allusions.
Look how different the dozens of attempts to translate just the proemic verse are across the many English translations of Homer. Personally, I prefer Du Cane: “Muse! Of that hero versatile indite to me the song. Doomed, when he sacred Troy had sacked, to wander far and long.” “Indite” would better be “dictate” for modern readers, but I would have used those two syllables for “please sing”.
My point is, there are people who are fluent in both languages for communicating notions of probabilistic reasoning, that of the “normal person experience and practice” and that of “numerical expressions and abstractions.” But bilingual people don’t know this – they think they are speaking one language, with maybe a slightly bigger vocabulary. They don’t realize other people don’t speak numbers. When they talk in numbers – the implications of which they have been trained to grasp intuitively – and other people don’t get it, they figure those other people don’t get probabilistic reasoning at all.
Handle-
Where can I find the data you mentioned a week or so ago about measuring the overall death rate from all causes? You sold me on the metric, but I can’t readily find the information. Thanks!
There are several “mortality surveillance systems”; here’s a place to get started, specific to the percent of deaths from flu. The question is whether pneumonia deaths are clearly exceeding expected overall deaths, or displacing other causes.
https://gis.cdc.gov/grasp/fluview/mortality.html
A few caveats with using that kind of data:
1. The national data is probably too big an aggregate at the moment to extract any signal, given the spread is not the same everywhere. The normal level of deaths in the US is 9-10K per day, so there have to be a lot of excess deaths for them not to get lost in the noise. Better to use NYC data if one is suspicious about NYC reporting, since they got hit hard and early.
2. It takes a long time for all the deaths to trickle in: not just a lag from this year to last year, but there’s also usually a big jump looking back over the same week period from the 2018-19 data to the 2017-18 data.
3. NY State means “NY State minus NYC”, because NYC is its own “state” for data-tracking purposes.
But if you look at NYC, boy, do you see an excess deaths signature. I’ll list out last year vs this year, by week, for NYC and (NY State+NYC).
“Week 10” was March 1-7
10: 1082 vs 1080 (3347 vs 3120)
11: 1064 vs 1101 (3101 vs 3040)
12: 1009 vs 1353 (3111 vs 3388)
13: 1093 vs 2474 (3156 vs 4860)
14: 1025 vs 4408 (3068 vs 7590)
We’re in week 16 now. Week 15 ended April 11, but the numbers for it aren’t all in yet.
But for the three-week period of weeks 12-14, it seems totally reasonable to me to attribute 6,500 premature deaths in NY State to cv19 – over 300 excess deaths per day for that period.
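To make the arithmetic explicit, here is a short sketch that recomputes that figure from the weekly (NY State + NYC) totals listed above:

```python
# Recomputing the excess-deaths estimate from the weekly
# (NY State + NYC) all-cause totals quoted in this comment.

last_year = {12: 3111, 13: 3156, 14: 3068}   # 2019 deaths, weeks 12-14
this_year = {12: 3388, 13: 4860, 14: 7590}   # 2020 deaths, weeks 12-14

excess = sum(this_year[w] - last_year[w] for w in last_year)
print(f"Excess deaths, weeks 12-14: {excess:,}")    # ~6,500
print(f"Per day over 21 days: {excess / 21:.0f}")   # ~310
```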
Let’s compare that to the NY State Department of Health tracker which covidtracking follows:
March 16: 7 deaths
April 4: 3,565 deaths
A difference of only about 3,560 official deaths over that same window – compared with roughly 6,500 excess deaths, a significant undercount of around 3K.
Which fits with later reclassification of around 4K deaths.
One question is whether the lockdown is worth it in cost-benefit.
Let’s say we lose about $1M per premature death (maybe $100K for each of 10 QALYs). New York State is on track to lose around 25K people, so $25B.
Let’s also say that the lockdown will save ten times that much. That’s kind of crazy, because it would mean 275K cv19 deaths without the lockdown (250K averted on top of the 25K already on track) in a state with a 19M population – a high-range fatality rate of 1.4% with literally everyone in the state getting infected. Still, that puts a ceiling on savings at around $250B.
But New York State GDP is $1,750B, and the big banks say GDP has collapsed more than 30%, so more than $500B lost in a year.
Now, a lot of people are not going to die, but they are going to get very sick and suffer long-term harm, and maybe if you add up all the long-term harm avoided, you get up to the range of GDP destroyed.
But even then, I’m guessing it’s still a stretch to break even.
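Here is the same back-of-the-envelope comparison written out; every number is one of the rough assumptions stated above, not an estimate of mine:

```python
# Back-of-the-envelope lockdown cost-benefit, using the rough
# assumptions from this comment (all figures are illustrative).

VALUE_PER_DEATH = 1e6        # ~$1M per premature death (~$100K x 10 QALYs)
DEATHS_ON_TRACK = 25_000     # NY State deaths "on track"
NY_POPULATION = 19e6
NY_GDP = 1.75e12             # ~$1,750B annual NY State GDP
GDP_DECLINE = 0.30           # big-bank estimate of a >30% collapse

cost_already = DEATHS_ON_TRACK * VALUE_PER_DEATH        # ~$25B
ceiling_savings = 10 * cost_already                     # ~$250B ceiling
deaths_averted = ceiling_savings / VALUE_PER_DEATH      # 250K averted
implied_ifr = (deaths_averted + DEATHS_ON_TRACK) / NY_POPULATION
gdp_lost = NY_GDP * GDP_DECLINE                         # >$500B in a year

print(f"Ceiling on mortality benefit: ${ceiling_savings / 1e9:.0f}B "
      f"(implies a {implied_ifr:.1%} fatality rate with everyone infected)")
print(f"GDP lost at a 30% annual decline: ${gdp_lost / 1e9:.0f}B")
```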
Not only the data source, but bonus analysis!
Thank you!
I think this is a very useful way to think about it, because it drives us to a useful proposition: establish the “binary threshold” for action, and communicate it in a reasonable way.
In most cases, something either will happen to a particular person, or it won’t happen to that person. So far, so good for binary thinking. The tricky part comes in evaluating — on a personal level — the factors that might cause a thing to happen or prevent it from happening. Aggregate statistics are of almost no use in that regard. A person who doesn’t fly or travel more than a few miles a week by auto (i.e., a retiree) is in an entirely different boat than the frequent flyer, daily commuter, or avid RVer. The anecdote about Nate Silver merely underscores the uselessness of most probabilistic thinking. There is, moreover, the big problem that most “probabilistic” statements are not really probabilistic: they don’t represent the frequency of occurrence among repeated trials of identical events.
Yes. When we did a mall-intercept study at the SEC to see how people understood mutual fund expenses, we had two examples, one with expenses of 1.38% per year, and one with 0.60% per year. People were flummoxed by the 0.60%, especially if they got that one first. They did not know whether it was sixty percent, six percent, or six-tenths of a percent. When we showed our results to the social psychologists, they said, oh yeah, people really struggle with numbers where the significant digits are on the right-hand side of the decimal point.
Bankers invented interest rates to move the decimal.
Investment bankers invented basis points to move it some more.
The same applies to climate change, as Marty Weitzman emphasized. A small chance of catastrophe does not mean “don’t worry.”