Check out the tenth podcast from Bret Weinstein and Heather Heying. Skip the first 4+ minutes, which are only music.
I noted the following:
1. A different flaw in the Santa Clara study.
2. Scans can show severe lung damage in people who report no symptoms.
3. Their subjective probability that the virus was engineered in a lab has increased (they do not quantify by how much, but I think it is a lot).
4. They cite instances in which odd corners of the Internet are outperforming mainstream science and mainstream journalism. This comes through most in the last few minutes of the podcast.
The epistemology of this virus is fascinating. Some experts believe that about 3 million Americans have been infected, and other experts believe it is more like 30 million. Some experts focus on what it does to lungs, while others believe that it attacks the body in other ways. There is controversy, particularly since yesterday, concerning whether having the virus confers immunity. There is disagreement over how accurate tests must be in order to be useful (although perhaps I am the only one arguing that the current level of accuracy is insufficient).
I approach epistemology as a logic puzzle. If I believe A, B, and C, does that mean I have to believe D? Or if I become convinced that D is false, what do I have to do with my beliefs in A, B, and C?
Sometimes, as in (3) above, I use subjective probability as a shorthand. But I think of myself as having a complex, interconnected set of beliefs, so that I am reluctant to express any one belief as a subjective probability. This notion of complex, interconnected beliefs sounds to me as though it relates to Quine, but I don’t feel sufficiently well acquainted with Quine’s ideas to implicate him.
In my epistemology, I have contempt for computer models. I will spell this out more below.
I first used a computer in 1970 when our high school used a time-sharing connection to a mainframe. My freshman year of college, our International Relations class played a multi-day simulation game in which we acted as the human decision-makers while a computer program dynamically generated the results of our decisions. Either the next year or the year after, Jeffrey Frankel and I were the assistants working on the program. It took a lot of maintenance, and we worked a couple of long nights fixing it.
The summer after my junior year in college, I worked for a professor helping to prepare a macroeconometric model for use in a class. I had a hard time getting the Phillips Curve to fit recent data–not surprisingly, since the Phillips Curve was in the process of breaking down.
When I graduated college, my first job was as a research assistant at CBO. Each research assistant in our section at CBO worked with a different model–there was the Chase Econometrics model, the DRI model, and a Wharton model. Each of them had peculiarities, and Alan Blinder, the economist in charge, grew to dislike them. Instead, he favored the Fed’s model, which I was tasked with setting up. This was made quite difficult by the need to trace through the code, use the correct IBM JCL, and physically walk back and forth from CBO to the House Administration Committee staff office, which was where the only computer with the power to run the model was located. In the process of figuring out the computer code, I impressed some Fed staffers and was able to get a job there.
During graduate school, one of our classes was taught by Ray Fair, using his macroeconometric model. The students thought it was a joke. His approach had no credibility.
When I left grad school, I got another job at the Fed. After bouncing around different sections for a few years, I ended up spending one summer in the International Division, trying to figure out an oddity in the way that an exchange rate shock worked through the Fed’s macroeconometric model. I wrote down my own little back-of-the-envelope model that had just a few equations, which showed a different result. After much work with the Fed’s 200+ equation model, I figured out why it was getting a different result. Their model was using the wrong price index to calculate stock market wealth, which was an important determinant of consumer spending. Once that was corrected, the results were closer to my back-of-the-envelope calculation.
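To make the flavor of that error concrete, here is a stylized sketch, with invented equations and numbers that are not the Fed’s, of how deflating nominal stock-market wealth with the wrong price index changes the wealth term that feeds into consumer spending:

```python
# A purely hypothetical illustration of the kind of error described above.
# None of these equations or numbers come from the Fed's model.

nominal_stock_wealth = 1_000.0   # arbitrary units
consumer_price_index = 1.00      # the appropriate deflator, by assumption
wrong_price_index = 1.25         # a mismatched deflator

mpc_out_of_wealth = 0.05         # hypothetical marginal propensity to consume out of wealth

wealth_right = nominal_stock_wealth / consumer_price_index
wealth_wrong = nominal_stock_wealth / wrong_price_index

print("Consumption from wealth, correct deflator:", mpc_out_of_wealth * wealth_right)
print("Consumption from wealth, wrong deflator:  ", mpc_out_of_wealth * wealth_wrong)
```

In this made-up case the mismatched deflator understates the wealth term by 20 percent, which is enough to push a big model’s simulation away from what a back-of-the-envelope calculation says.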
In all of this work with models, no one ever trusted a model to do forecasting. The proprietary models–Chase, Wharton, and DRI–all were manually adjusted by the economists running them to come up with reasonable forecasts. What customers were paying for were the fudge factors put in by Mike Evans or Otto Eckstein or whomever. At the Fed, the forecast that the policy makers relied on was a purely judgmental forecast, with a computer used to make sure that accounting relationships were satisfied.
The bottom line for me is that there is a paradox of computer models. If you understand why a computer model gets the results that it does, then you do not need a computer model. And if you do not understand why it gets the results that it does, then you cannot trust the results. If you are using a computer to try to figure out causal structure, you are using it wrong.
So I bristle when someone says that based on a computer simulation, a certain policy for dealing with the virus can save X lives. I presume that there are some key causal assumptions that produce the results, and I want to know what those assumptions are and how they relate to what we know and don’t know about the virus.
Consider the WSJ story on France.
Mr. Macron, the son of two physicians, mobilized France’s hospitals to prepare for a wave of Covid-19 patients the government feared would overwhelm hospital capacity. He requisitioned masks and other protective gear from stores and businesses across the country to protect nurses and doctors working on the front lines. And his government equipped the nation’s high-speed trains to zip patients from hard-hit regions to hospitals with open beds.
The hospitals survived the onslaught, but they didn’t bear the full brunt of the virus.
Instead, the virus slipped into France’s national network of nursing homes.
The most widely-used models don’t differentiate the population by age. Blinded by these models, policy makers focus excessively on maintaining hospital capacity and inadequately on protecting the elderly.
I think this is a good (and overlooked) point about models.
Everyone knows the phrase “garbage in, garbage out”. However, equally important in any modelling exercise is understanding what the model can and cannot predict!
When I’m training analysts to do models and simulations, I emphasize two uses:
– Understanding if certain outcomes are possible or plausible
– Refining our thinking of potential outcomes
I always tell them that they should never say “the model says X will or will not happen” because of the limitations inherent in modelling and any model’s built-in biases and assumptions. If you could model the world perfectly, as you said above, you wouldn’t need the model!
Instead, they use the model as a guide to inform them whether their thinking makes sense. If X customers and Y growth and Z word-of-mouth lead to more revenue than the whole company makes in a year, there is likely something wrong with their thinking, and they need to go back and examine the causality and relationships before tweaking the model! It’s a pedagogical tool, not a source of truth.
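Here is the kind of toy check I have in mind; everything in it, the function, the parameters, the numbers, is made up purely for illustration:

```python
# A toy version of the plausibility check described above. Every name and
# number here is hypothetical; the point is the sanity check, not the model.

def projected_revenue(customers, monthly_growth, referral_rate,
                      revenue_per_customer, months=12):
    """Project revenue from a naive compounding-growth model."""
    total = 0.0
    for _ in range(months):
        total += customers * revenue_per_customer
        # Existing customers grow and also bring in word-of-mouth referrals.
        customers *= 1 + monthly_growth + referral_rate
    return total

projection = projected_revenue(customers=1_000, monthly_growth=0.20,
                               referral_rate=0.10, revenue_per_customer=50.0)
company_annual_revenue = 500_000.0  # hypothetical ceiling for the whole company

print(f"Projected 12-month revenue: ${projection:,.0f}")
if projection > company_annual_revenue:
    print("Implausible: revisit the assumed growth and referral rates "
          "before tweaking anything else in the model.")
```

The printout is not a forecast; it is a prompt to go back and re-examine the assumed rates when the number comes out absurd.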
Unfortunately, too many people still hear the word “computer” and imagine something that’s infallible.
“Scans can show severe lung damage in people who report no symptoms”
Was that referring to this?
As someone who is building models at Google, what you said deeply resonated with me:
“If you understand why a computer model gets the results that it does, then you do not need a computer model. And if you do not understand why it gets the results that it does, then you cannot trust the results. If you are using a computer to try to figure out causal structure, you are using it wrong.”
This is exactly correct in my book. That doesn’t mean models are useless or that we shouldn’t build them, but it means that their primary use cases are intuition pumps and ways to run thought experiments. I find Michael Nielsen and Bret Victor’s work in this space relevant.
Note that I think this observation applies broadly to cases where any realistic assessment of the problem contains substantial unknown unknowns that can affect the results, including nearly every problem with humans directly in the loop. There are problems in the physical sciences or similar where the story is different.
Among the four possibilities, the least important is usually the unknown unknowns. (Here’s a challenge – name three important “unknown unknowns”. This virus is NOT one of them.)
The biggest problem with most models is that so many known knowns are assumptions about data and assumptions about relations between data, and the reality falls outside the assumed range of values, or the relation stops holding. These “knowns” are actually wrong, or hold only intermittently.
I’m not convinced contempt is the right feeling to have – but I don’t trust most models for big complex things, like the economy or climate or future prices of anything traded. Smaller and more constrained models, like how a car engine works or fails, are more trustworthy.
On the “developed in a lab” question, that might well be an unknown (to us) known (to others, who are basically evil).
In many ways a model is a philosophy and vice versa. Perhaps the great Thomas Reid said it best: “I despise philosophy and renounce its guidance, let my soul dwell in common sense.”
Ma’am, are you playing possum on me?
https://youtu.be/7IiA3p6XW0A
Consider the implications of, “Their subjective probability that the virus was engineered in a lab has increased (they do not quantify by how much, but I think it is a lot).” They’re smart people, but I suspect there are many, many folks at least as smart around the globe. It’s unlikely that two folks doing a video from their basement are smarter than everyone else in the world. So either (a) the very, very smart people around the globe disagree with them, or (b) the very, very smart people around the globe do agree with them. If (b), whoa. Is the world powerless against China? Maybe. Was the West very aware of what was going on at the Wuhan laboratory? Was there a ‘mole’ at the site? If (b) and all Western governments are pretending otherwise, something doesn’t add up.
I think there’s about a 0% chance it was engineered, although I give a small, non-zero probability to the possibility that they were researching the virus in the Wuhan lab. I give the remaining probability for the origin of the outbreak to consuming wildlife.
They talk in the podcast about how combining parts of different viruses in labs to create novel strains is apparently a routine part of virology research, don’t ask me why. Apparently, the clue is that Covid-19 contains stuff that looks like it came from a bat virus and other stuff that looks like it came from a virus that affects pangolins, and for that to happen in nature, both viruses would have to have infected the same cell in the same creature at the same time. That seems a bit unlikely, because pangolins and bats don’t cross paths all that often.
I have no opinion on the matter, just relaying what was discussed in the podcast.
Thanks! I suppose that would be in line with my thought that people were studying it. I suppose I really meant that I didn’t think it was being engineered as a weapon and released on purpose. Still not sure how plausible it is to have been engineered for any purpose though.
This podcast interviews some scientists who suggest it’s very unlikely to have been engineered.
But I guess I’m not an expert in this. If we apply base-rate reasoning to viruses that affect people in general, it seems like the prior on its being engineered should be low.
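To illustrate the base-rate point in the abstract, with arbitrary placeholder numbers rather than estimates about this virus:

```python
# A generic illustration of base-rate reasoning with Bayes' rule.
# The numbers are arbitrary placeholders; the point is only that a low
# prior stays low unless the evidence is much more likely under one
# hypothesis than under the other.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) from a prior P(H) and the likelihoods P(E | H), P(E | not H)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Hypothetical: a 1% prior, with evidence twice as likely under the
# hypothesis, still leaves the posterior below 2%.
print(posterior(prior=0.01, likelihood_if_true=0.6, likelihood_if_false=0.3))
```

With these placeholder values, even evidence twice as likely under the hypothesis moves a 1% prior only to about 2%.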
I agree that ought to be the default. But given that it emerged just miles from the only city, in a CONUS-sized nation, that has biocontainment labs? Doing research on the same bat and its viruses? And this species naturally lives only several hundred miles away? The odds of this occurring naturally go down, and the odds of this occurring through the intervention of a human hand rise exponentially. Then you add Xi’s multiple deceits, withholding knowledge of human-to-human transmission for maybe a month?
Do you have a good link (from a reputable outlet) to share on the lab? I’d be interested to read more.
Maybe it’s just me, but I feel like a comment section is degraded when someone uses a term or abbreviation that most people don’t know. It’s just bad manners.
How many people saw CONUS and read “continental United States”? How hard would it be to write the latter rather than the former?
Arnold:
If you want to be very concerned, look up ‘exercise intolerance’ in the context of SARS-1.
Arnold:
If your epistemology needs reinforcing, read
https://arxiv.org/pdf/2004.08842.pdf
2. Yes. That is why monitoring yourself with a cheap pulse oximeter is so important: https://www.city-journal.org/fda-blocks-apple-watch-blood-oxygen-feature
And note here, too, that the FDA is again not helpful.
Your scepticism of computer models reminds me of this discussion about models between Freeman Dyson and Enrico Fermi: https://www.youtube.com/watch?v=hV41QEKiMlM
According to Dyson, Fermi’s scepticism of a bad model kept Dyson and his graduate students from wasting years of effort chasing a dead end.
This post prompted me to publish a short essay on discovery, feedback and signal-to-noise ratios. There may also be banana peels.
https://medium.com/@lorenzomwarby/pandemic-epistemology-81f08d667cb6