In the United States, the average number of automobile accidents per year is 5.25 million (source). The average number of fatalities per year is 30,000 to 35,000 (source).
How many accidents are we willing to tolerate involving self-driving cars before we stop trying to restrict their usage? Pretty much zero, right?
Let’s call this “asymmetric intolerance.” We accept a phenomenon that is highly flawed (human-driven cars) while we refuse to tolerate a phenomenon if it has any flaws at all (self-driven cars).
If asymmetric intolerance had been a policy principle 125 years ago, might we not still be transporting ourselves in horse and buggy?
Some further thoughts:
1. Maybe this is in line with the issue of resistance to change that is a theme of Tyler Cowen’s latest book.
2. Is an obsession with terrorism an example of asymmetric intolerance?
3. I have a relative in California who buys “organic toaster pastries” (non-GMO, of course). In other words, Pop-Tarts that have been blessed as all natural. Isn’t that an example of asymmetric intolerance?
4. Where else do we observe dramatic examples of asymmetric intolerance? Or is this the only example that comes to mind?
Inability to tolerate a recession and/or a cleanup of the banking system because it may result in some loss of wealth.
1) We don’t know that, but I still think self-driving cars are further away (~15 years) because they have not solved a lot of non-optimal driving conditions. These are way tougher to program. I also think adoption will come from taxi companies, where the liability and cost are on one side.
2) How about Donald Trump’s EO on banning travelers and immigrants from certain nations?
3) Yes…but isn’t it mostly marketing and advertising, and consumers choosing to pay higher prices? Why do you care about this price discrimination?
4) Asymmetric intolerance – the middle-class definition of good schools, and teacher unions.
I don’t think it’s asymmetric at all. People look at the 30-50K number and realize that the bulk of these are self-inflicted, many due to drugs or drinking or other behavior they won’t engage in. The number of people who are killed in an automobile because of someone else’s irresponsibility (in a second car) is much smaller. But any death caused by self-driving cars will be random and outside their control.
Keep in mind that most people know full well that optional self-driving cars lead inevitably to mandated self-driving cars and, likely, to more expensive cars that people can’t easily purchase. Tyler is utterly incapable of understanding that most Americans love their cars. So why end that simply because a bunch of elites don’t like to drive and a bunch of businesses want to fire a bunch of employees? So the tolerance for error will be non-existent. But not asymmetric.
Resistance to terrorism is similar. People have a built-in tolerance for random events, and terrorism caused by outsiders dramatically increases their sense of danger and at least slightly increases the probability, no matter how small, that they’ll be killed. Add that to the fact that immigration is being imposed on Americans by their government, and that increases the sense of resistance. Why should we have to tolerate even a slight increase for something we don’t want anyway, the thinking goes.
And whatever you call it, it *was* around back then, too. People loathed the barcodes when they came out. There was a huge controversy about it, and for years, grocery stores were required to put *both* prices and codes on the item. Given that the barcode led to Walmart’s dominance, I’m not sure everyone was wrong to despise them. RFID tags still aren’t here, in large part because of public resistance.
Not sure why you think toaster pastries can’t be organic. But no, that’s virtue signaling.
It’s basically the fundamental attribution error. We could change traffic rules and infrastructure to reduce deaths, but people mentally set those things aside as separate issues. Indeed, no engineer blames human error except when they are trying to cover their a__. Engineers quantify human error as part of the system, then design around it. Politicians claim human error is somebody else’s fault. They seem to particularly love attacking engineers for quantifying human error in their attempts to mitigate its effects.
I think innumerate fear of terrorism is a little different. There is no “benefit of the bargain” as there would be with driving, where we assume some risk of calamity in exchange for ready transportation. Also, the costs of driving precautions tend to be internalized, as opposed to security restrictions which tend to be externalized (but not always, see, e.g. going through security at the airport). That all adds up to make us more cautious than perhaps we ought to be (although the counterfactual is impossible to know).
Aren’t you just describing status quo bias? We tolerate the risk of cars because we’ve demonstrated that it’s tolerable?
While terrorism is sometimes handled asymmetrically (the TSA is a monstrosity not worth the protection it provides), the main problem with immigrants is they are a net negative for the country, and terrorism is the one socially allowable way to discuss that. As you say, there is no benefit on the other side of the balance sheet that makes putting up with terrorist attacks worth it.
Imagine for a minute that NE Asians had all the same characteristics they have today, but a few random ones occasionally committed acts of terrorism. It’s not that far off; we occasionally get Asian mass shooters, like Virginia Tech. Nobody talks about Asians the same way as Muslims though, because they know these are lone wolves who don’t have much to do with the day-to-day behaviors of average Asians.
By contrast, Muslim terrorism is simply an acute manifestation of all sorts of problems with Muslims. The social hostility and behavioral patterns common throughout the entire population are simply made manifest in a single moment, but it’s because they are a reflection of underlying day-to-day problems that they get attention.
I see it all the time in the corporate world. Some process or system has flaws, you propose something that is better – but not perfect – and there is an extreme reluctance to move to the next best thing because it isn’t perfect.
It’s a classic example of availability bias – which is what you are describing. The availability of scare stories for terrorist attacks, autonomous vehicle crashes, airplane crashes, GMO food scares, etc. is high and thus prevalent in our brains despite the incredibly low probability of the occurrence. This prevalence leads to an absurd overestimation of the risks that doesn’t even come close to comporting with the facts. Politicians and news media feed on these low-probability events because they are more interesting than something that happens every day, and then we end up with legislation targeted at fixing a very small issue. Also, since I’m in business, given the above, you shouldn’t be surprised by how powerful anecdotal evidence is in business meetings despite underlying data. That is the power of the “what if this were your child” argument. Humans love stories that help them justify their fears and prior beliefs, and anecdotes feed right into that.
Alcohol vs. marijuana, though I think arguments that alcohol “causes” violence and bad behavior are not correct—it lowers inhibition, doesn’t change character.
Distracted driving vs drunk driving.
It may also be a subconscious pricing-in of the unintended consequences of a change. Better the devil you know.
One day soon, at 11:59 a.m., they will be forbidden, then at 12:01 p.m. they will be mandated.
Robin Hanson’s classic comes to mind:
http://www.overcomingbias.com/2008/09/politics-isnt-a.html
Of course, a trap here is that when one realizes that X isn’t about its ostensible justification Y, it’s tempting to conclude that X is deceptive and evil and should be reformed or abolished, especially if one is inclined to dislike X in the first place. Whereas it may also be that X is really about Z, and Z is highly desirable, but could never be achieved in reality due to reasons of public choice, bias, or whatever other obstacles.
I’m not arguing here whether this applies (or not) to any of your specific examples. But it is something to consider before rushing to condemn a policy or institution when one realizes that its ostensible justification is phony.
In comparing the safety of self-driving versus human-driven cars, the relevant statistic is accidents/deaths per mile driven, not per year.
The number of car accident fatalities in the US in recent years is about a dozen per billion miles traveled; Google’s experimental autonomous cars drove a bit more than a hundred thousand miles last year. So if just a single fatal accident had happened with one of those cars, it would absolutely be correct to consider that a disproportionately large number.
So perhaps this isn’t the best example of asymmetric intolerance or availability heuristic.
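To put rough numbers on that comparison, here is a quick back-of-the-envelope sketch. It uses only the round figures quoted above (a dozen fatalities per billion miles, a bit over a hundred thousand autonomous miles), which are approximations rather than official statistics:

```python
# Back-of-the-envelope comparison, using only the round numbers quoted above.
human_rate_per_mile = 12 / 1_000_000_000   # "about a dozen" fatalities per billion miles
autonomous_miles = 100_000                 # "a bit more than a hundred thousand" miles driven

# Fatalities we would expect from human drivers over that same mileage:
expected = human_rate_per_mile * autonomous_miles
print(expected)  # ~0.0012, i.e. roughly a 1-in-800 chance of even one fatality

# So a single fatal crash in that small sample would imply a rate on the order
# of one per 100,000 miles, hundreds of times the human baseline.
```

In other words, with so few autonomous miles on the record, even one fatality would be strong evidence of a worse rate, which is the commenter’s point.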
Except that once over the hump, it is hard to imagine autonomous driving not offering a giant safety improvement along with other benefits.
Perhaps, but how large is that hump, and how far away? And do you want to share the road with these things while the bugs in the 1.0_beta release are being worked out?
If the argument is “one fatality per billion miles is too much for autonomous vehicles, even though the rate for human drivers is more than ten times as high”, that’s asymmetric intolerance.
If the argument is “we won’t allow these cars on the road unless they are at least as safe as human drivers” (which then creates a chicken-and-egg problem because the technology needs to go through a widespread adoption phase in order to make it out of infancy) that’s arguably a different issue. Perhaps still a fallacy, at least from a long-term what’s-best-for-humanity point of view, but no longer a matter of judging with different standards.
If you had a kid reaching driving age every year for the foreseeable future, in what year would you accept the best autonomous driving software rather than them driving themselves?
I think I’d do it this year.
The concern you cite is a proper one given the sample sizes at play. Let’s say there are 253 million cars in America, so the 5.25 million accidents you cite are about 2.1%. How many self-driving cars are being tested on the roads right now? 100? In that case, more than 2 accidents would indeed represent some possible issue with self-driving cars, or at least be a justifiable cause for looking more closely at the matter.
This happens all the time in industry. When a product is in full production, failure rates may be 1-2%. But in the prototype phase you cannot allow even one single failure, or you reject the design.
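To make the sample-size arithmetic above concrete, here is a rough sketch. The 253 million cars, 5.25 million accidents, and 100-car test fleet are just the illustrative figures from the comment, and a Poisson model is only a crude stand-in for real exposure measured in miles:

```python
# How surprising would k accidents be in a 100-car test fleet if self-driving
# cars had the same per-car accident rate as human-driven cars?
from math import exp, factorial

cars_in_us = 253_000_000        # figure quoted above
accidents_per_year = 5_250_000  # figure quoted above
rate_per_car = accidents_per_year / cars_in_us   # ~0.021, i.e. ~2.1% per car per year

fleet = 100                     # hypothetical number of test cars
expected = rate_per_car * fleet                  # ~2.1 accidents expected per year

def p_at_least(k, mean):
    """Poisson probability of seeing k or more events with the given mean."""
    return 1 - sum(exp(-mean) * mean**i / factorial(i) for i in range(k))

print(round(expected, 2))                 # 2.08
print(round(p_at_least(3, expected), 2))  # ~0.34 - three accidents alone isn't strong evidence
print(round(p_at_least(6, expected), 2))  # ~0.02 - six or more would look like a real problem
```

This is consistent with the comment: a couple of accidents tell you little on their own, but a handful in such a small fleet would justify a closer look.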
> If asymmetric intolerance had been a policy principle 125 years ago, might we not still be transporting ourselves in horse and buggy?
Not sure either is an example of asymmetric intolerance, but had we been more intolerant of death by automobile 100 years ago, urban speed limits would be far lower and walking in the street would be acceptable https://www.researchgate.net/publication/236825193_Street_Rivals_Jaywalking_and_the_Invention_of_the_Motor_Age_Street
I know someone said alcohol vs marijuana, but I think a better example is: smoking tobacco vs smoking marijuana.
You might think I mean that tobacco (legal) is better tolerated than marijuana (mostly still illegal) but I mean it the other way around.
At a social acceptance level, smoking marijuana is probably more socially acceptable in many areas than smoking cigarettes–which is quite nearly the *least* socially acceptable thing one can do.
Incidentally, I agree with education realist that the human-driven/computer-driven car example is not really asymmetric.
If the only concern at all that people had was to get from point A to B safely, it might be, but that isn’t the case.
People like their cars and they like driving their cars. How large is the chauffeur market really? Is the monetary cost of a chauffeur the only reason people choose to learn how to drive and drive themselves?
If so, then pushing the chauffeur cost to zero (which is basically what a self-driving car does) is what we have all been waiting for.
I am seriously skeptical that losing control over one’s ability to travel freely, whether suddenly or gradually, is what all but the smallest handful of people have been waiting for.
Even if everyone has moments when they wish they could just take a nap or whatever instead of having to drive, taken on the whole, they like driving, and see computer-driven cars as an attack on their independence more than a dream come true.
True. The functional superiority of self-driving cars isn’t enough to lead them to become the norm. We already have the technology to make restaurant waitstaff completely redundant, yet people still like going to restaurants and having other people bring them their food.
The computers aren’t driving the car. You’re driving the car. It’s not as if a computer is talking to your mother when you call her on the phone. It’s not as if a computer is cooking dinner when you use a microwave.
I love visiting my son and his wife and daughter, and using it as a jumping-off point for a lot of pleasure driving, visiting interesting places that are far from here. But I hate the drive to get to him. Three days of blah. If a self-driving car would take us there in 24 hours of straight driving, my life would be substantially improved.
I think Education Realist has it.
If cars are self-driving, will they then refuse my desire for a drive along the beach? Refuse my desire to park out of sight of someone’s house so I can surprise them at a birthday party? Refuse my desire to avoid the city limits of Seattle because their government is run by jerks?
The “Urbanist Planner Gang” who want very high density and lots of local government revenue hate cars, above all I think, because cars give people real freedom that bicycles and walking and mass transit do not and will not.
Trying to remove that freedom will, I at least hope, prove very difficult and costly indeed, and likely have backfire consequences planners will hate.
(If I can safely sleep in my car, why not live REALLY far from work, and finish sleeping while the car takes me there? If a person can safely just ride in a car, how will we round up the infirm and force them to reside in little apartments and nursing homes near bus lines?….)
Firstly, don’t drive on the beach, even where it’s legal. Let other people relax in peace, watching the waves, tossing a football around, whatever. Or you want to drive over football fields too?
The car doesn’t know about the birthday party. The car doesn’t know why you’re not driving into Seattle. The car doesn’t have a sponge to wash your father’s back. And freedom is not the word that springs to my mind when I think about being stuck in traffic.
Consider also alcohol use.
There’s a reasonable argument that society would be much safer and people’s lives longer and perhaps happier if alcohol was banned.
But a large pool of individuals forced the repeal of Prohibition, I presume because they judged alcohol’s value to them personally to be much more important than any diffuse social value arising from abstinence. We see the same with marijuana, and to some degree cocaine, heroin, etc.
Gun control is a much messier argument, but certainly a point of view that “regardless of what elites say, *I* am safer when *I* have a gun” is an important political force (with some factual justification.)
And so, will the acceptance of self-driving cars mean that we *have* to let them run red lights, run some stop signs, and speed? Or else adjust the laws to make those things explicitly legal? One complaint against current self-driving cars is that they are *too* safe, courteous, and law-abiding, to the point of creating serious issues for everybody else.
Likewise, if self driving cars are perfect, and thus generate no ticket revenue and no repair revenue, will police budgets, repair budgets, tax bases, and so forth survive?
The only reason we have red lights and stop signs in the first place is to cope with a problem created by the old, stupid, antiquated way of driving a car.
I think this shows up in a number of different ways when it comes to healthcare and drug approvals, but Scott Alexander makes the case more eloquently than I: https://slatestarcodex.com/2015/04/25/nefarious-nefazodone-and-flashy-rare-side-effects/
Basically, a doctor or the FDA making a decision (especially something different from historical practice, whether it be prescribing an unusual drug or approving a new one) is exposed to a bunch of downside risk that may be more than balanced out by the diffuse benefits for society, but the trade-off doesn’t make sense for the decision maker to accept.
I think with the original self-driving car example, there’s sort of the inverse of a coordination problem, where the existence of a couple of companies behind the wheels of all those cars creates a culprit you can pursue for all the accidents, while the diffuse responsibility in the status quo prevents similar outrage for human-operated vehicles.
Our justice system treats first-degree murder differently from accidental manslaughter. This is quite literally “asymmetric tolerance”, but it is deliberately so. Caplan daftly calls this “innumeracy”, as if the architects of our legal system fail to recognize that a single death is mathematically counted as one single death no matter the circumstance.
The KKK only killed a few thousand people across its entire existence. Is that death tally the ideal metric to measure its impact? Of course not. Terror is used to successfully scare and intimidate people. It works. Recent Islamic terror in Europe has changed the political structure.
Self-driving cars are basically a win-win for everyone. The issue isn’t tied into deep political conflict. The issue of terrorism is tied into the western immigration issue which is probably the most political divisive issue of our day.
With self-driving cars, the benefits are too obvious and tangible not to win out in the end. People tolerate self-driving roller coaster deaths; they will tolerate self-driving car deaths. Sure, there will always be irrational people. But the benefits and trade-offs are apparent to even normal people.
That is the problem when you don’t know enough history. It WAS the policy back then. Horseless carriages required flagmen to precede and follow them. People learn. They forget too.
As for other examples, inflation still hangs around, though dimming; debt is a quasi-example, only mattering when spending is on something disliked.
He knows.
During the height of the drought last summer, some Angelenos aggressively used social media to “drought shame” their neighbors for over-watering their lawns. But residential use is a small part of the California water use profile. Agricultural use is at least 4x that of residential use, and most of California’s agricultural products are exported – in essence, exporting the state’s water. That in and of itself is not a problem. The issue is that agricultural water is cheaper in California, where it is relatively scarce, than it is in Oregon, where it is relatively plentiful. Because ag water is so cheap, it encourages wasteful practices like flood irrigation rather than drip irrigation. So even if I point out that your quart of almond milk took as many gallons of water to produce as your neighbor’s lawn care, the outrage doesn’t transfer.
Some of us have been arguing for a significant increase in California agricultural water rates, but the ag lobbies are very powerful in Sacramento, and it is much easier to gin up anger over the lush landscaping of Celebrity X.
And for what it’s worth, as a ranch owner, I am somewhat arguing against my own interests here. Short term, anyway.
Let me suggest that the source of much asymmetry is Loss Aversion. We are unwilling to risk all that we have for a merely equivalent reward. We need the promise of double or more. ?? Exceptions for entrepreneurs ??
Regards,
Bill Drissel
Frisco, TX