Some thoughts on the school shooting.
1. I think that the three axes serve to predict reactions fairly well. Libertarians, concerned with freedom vs. coercion, are skeptical of adding to government power in an attempt to prevent school shootings. Conservatives, concerned with civilization vs. barbarism, stress the derangement of the shooter. Progressives, looking for an oppressor, have identified the gun lobby. UPDATE: Russell Nieli, whom I would categorize as conservative, blames absentee fathers.
2. If it is possible to build a car that is smart enough to drive itself, should it not be possible to build a gun that is smart enough to know when not to shoot? (a camera knows where the gun is pointing… software analyzes the image to determine whether it is a morally acceptable target… seems like the only kind of gun I would want in my house; a rough sketch of the idea appears at the end of the post)
3. I think that the reaction to Newtown may tell us something about the psychological forces that incline people toward what Michael Huemer calls political authority. Huemer asks why we tolerate coercion from agents of government that we would not tolerate from private individuals. My hypothesis, based on Newtown, is that people are much more upset by danger that appears sporadically, anonymously, and unpredictably than by danger that is constant, identifiable, and predictable. So I think that one reason people accept government coercion is that it is relatively constant, the government’s leaders are identifiable, and their actions are fairly predictable.
(I have taken one pass through Huemer’s new book. I found it very stimulating, and I thought it was worth the high Kindle price. In the future, I will give a much longer analysis, not entirely favorable, of Huemer’s line of thought.)
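To make point 2 concrete, here is a minimal sketch of the decision loop such a gun would need, written in Python. Everything in it is an assumption for illustration: the stub classifier, the threshold, and the names are invented, not any real device's design.

```python
# Hypothetical sketch of the smart-gun idea in point 2: a camera frame
# goes to image-analysis software, which decides whether to unlock the
# trigger. The classifier is a stub; every name and threshold is invented.
from dataclasses import dataclass

@dataclass
class Assessment:
    is_person: bool
    threat_score: float  # 0.0 (clearly harmless) .. 1.0 (clear imminent threat)

def classify_frame(frame: bytes) -> Assessment:
    """Stand-in for the vision model; this stub never sees a threat."""
    return Assessment(is_person=True, threat_score=0.0)

THREAT_THRESHOLD = 0.9  # arbitrary; picking this number IS the moral judgment

def trigger_unlocked(frame: bytes) -> bool:
    """Unlock the trigger only if the software deems the target acceptable."""
    a = classify_frame(frame)
    return a.is_person and a.threat_score >= THREAT_THRESHOLD

if __name__ == "__main__":
    print(trigger_unlocked(b"\x00" * 64))  # False: the stub never authorizes
```

Note where the difficulty sits: classify_frame and THREAT_THRESHOLD carry all the moral weight, and someone has to decide who gets to set them.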
1. I would say that libertarians, like liberals, are looking for an oppressor. Well, we aren’t looking; we found it, and it is government. Now if we could just show the liberals who the oppressor is, we could grow the movement by leaps and bounds.
2. The last thing a free society wants is the government determining, through gun regulations, what is and what is not an acceptable target. You can rest assured that no agent of that government would be on the list, and that is kind of the whole point of having guns in the first place: to be able to shoot agents of your own government, should the government be taken over by a tyrannical leader. You may have just stumbled upon a way for governments to take guns out of the hands of the people (as far as they are concerned) with far less protest.
3. Generally the government will bring active violence upon you only if you do certain things. Where you see people getting mad at government is when it adds a bit of randomization to the violence, for example, when the police beat, shoot, or tase innocent people.
1 revisit: Liberals think they have found their oppressor too. The government has told them that business is the oppressor, and for some reason they believe it. Related to business, the 1% are also oppressors to them. Again, the most important step toward a free society is to show liberals how the government is the oppressor.
When your life is threatened by an attacker, do you really want your weapon to be a complex artifact, depending for its function on correct programming and well-charged batteries? If there are people in your house whom you don’t trust to make the shoot/no-shoot decision, a better solution would be to store the gun in a safe, bolted to the floor, with a combination known only to you.
A computerized gun would make sense only if you trust it to make better decisions than anyone in the house, yourself included. I’m skeptical.
Point 2 (guns with automatic target evaluation) leads to significant issues around choosing appropriate targets. Many of the technical concerns can be overcome, but the legal and political questions are bigger. Self-driving cars raise a few of the same issues, but to a much lesser degree, since they should be avoiding obstacles, not selecting which one to hit.
Who defines an acceptable target?
Could be government, manufacturers, or individual owners. Along your three axes, each group would have a problem with one or more of those choices.
Oppressed / oppressors – some would say the gun owners are the oppressors, so the government has to make the decisions to protect everyone else.
Freedom / slavery – clearly the individual has to be free to choose acceptable targets on their own.
Civilization / Barbarism – depends on who is a civilizing force.
Who is responsible / liable if the system makes a mistake?
A gun mistakenly allows a kid to be shot, or locks up so an old lady is beaten by a mugger. Either way, there is significant liability for the manufacturer if there was a technical problem, and political fallout if the government defined the target badly (no shooting anyone in uniform, and the bad guys wore fake uniforms).
Should all non-smart guns be outlawed?
Obviously a big freedom concern. But unless they are outlawed, the system does absolutely nothing about intentional murder.
How large would the voluntary market be?
The guns would be bigger and heavier, and would add at least a small delay before firing to evaluate the situation. And they would initially be significantly more expensive. A parent in a home-defense situation would be willing to pay some premium to avoid shooting family and friends, and hunters would be interested in ensuring that they didn’t confuse another hunter for a deer. But government agencies, people with limited income to spend on self-defense, concealed-carry holders who carry all the time, and anyone at all concerned about government oppression would probably prefer the simpler and more reliable system.
What should happen in situations where the automated system fails?
Comes back to who defines the targets; failures could be due to smoke or fog, darkness, etc. If the gun will fire in unknown circumstances, it would be easy for bad guys to fool the system and use the gun anyway. If it won’t, it would be easy for bad guys to fool the system and prevent the good guys’ guns from working (smoke bomb, attack at night, etc.).
I think we are much further from a system like this than we are from self-driving cars, and commercial concerns would push it back even further. But an app for your Google goggles might be able to provide a lot of the benefits without gun integration, and much sooner. It comes on when you hear gunfire, or is manually activated when you hear a potential home invasion; it locates people by sound or vision and identifies them, by name and a preassigned risk if known, and by degree of likelihood to be dangerous for everyone else. It pinpoints gunfire direction and source. It could even integrate with warrant and watchlist databases, although that opens more questions.
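As a sketch of what that app’s alert loop might look like (every field name, risk value, and the gunshot-detection trigger are invented for illustration, not a real product):

```python
# Hypothetical sketch of the heads-up-display app described above: wake on
# gunfire, locate and identify nearby people, and annotate each with a risk
# estimate. Every name, field, and value here is invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sighting:
    bearing_deg: float       # direction relative to the wearer
    identity: Optional[str]  # name if recognized, else None
    risk: float              # 0..1, preassigned if known, else estimated

def annotate(sightings: list) -> list:
    """Turn raw sightings into the labels the display would overlay."""
    labels = []
    for s in sightings:
        who = s.identity or "unknown person"
        labels.append("%s at %.0f deg, risk %.1f" % (who, s.bearing_deg, s.risk))
    return labels

def on_gunshot_detected(shot_bearing_deg: float, sightings: list) -> None:
    """Entry point: runs when the microphone array hears gunfire."""
    print("Gunfire from bearing %.0f deg" % shot_bearing_deg)
    for label in annotate(sightings):
        print(label)

if __name__ == "__main__":
    on_gunshot_detected(135.0, [
        Sighting(90.0, "family member (known)", 0.0),
        Sighting(130.0, None, 0.8),  # unknown person near the shot bearing
    ])
```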
As a libertarian, I must admit this is where the optimistic “utopian” kicks in: I look at the shooter and ask, how can society engage this isolated individual in meaningful social/economic ways, so that he (or she) will not want to shoot others?
Two quick questions: (1) Did he choose to do it? (2) Was it evil?
Thomas L, yes to both, and to the punishment it warrants; however, many shooters such as this take their own lives. Up close and personal, I judge violent action much as anyone, and some people will go off the deep end no matter what. It just seems that the number of “lost causes” does not have to be as high as it is now. But making a world in which fewer people end up isolated is not simple, nor is it an answer for any violent act.
Define “morally acceptable target”? You are deciding beforehand that certain traits will indicate someone who, although an imminent threat of death or serious bodily injury to you or others, you are willing to let kill you? Based on appearance? And you are turning that decision over to software that will decide, in seconds (a lifetime in a self-defense situation), whether to permit you to defend yourself? As you know, software never fails, is never spoofed, is never compromised, never returns a false positive or negative, so why not put your life in the hands of some programmer? And, perhaps not you, but someone will want to impose that limitation on others: deny them the right to decide whether to stop the threat or die, and require them instead to depend on software and some ambiguous “morally acceptable target” determination. On the other hand, the blue screen of death would really bring death, or rape, etc.
Scenario: Your 12-yr-old daughter is being attacked by a baby-faced 14-yr-old. She reaches your “morally acceptable target” determining firearm, but darn, it’s set for targets over 16. Your daughter dies, but with the knowledge that she is the morally acceptable victim who was denied the right to defend herself because her killer had child-like features.
I didn’t relate the scenario to be inflammatory, only to demonstrate that you can’t decide beforehand which targets are morally acceptable to fire at without also deciding, without context, which losses are morally acceptable.
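A tiny illustration of the point (the age estimate, the threshold, and the rule itself are invented to match the scenario above, not any proposed design): the moment the rule is written down, the losses it permits are written down with it.

```python
# Hypothetical illustration of the scenario above: a pre-set age threshold
# is a morally-acceptable-loss decision made beforehand, without context.
MIN_AUTHORIZED_AGE = 16  # chosen at the factory or by regulation, in advance

def trigger_unlocked(estimated_age: int, is_attacking: bool) -> bool:
    # The gun never sees context; it only sees the traits it can estimate.
    return is_attacking and estimated_age >= MIN_AUTHORIZED_AGE

# The baby-faced 14-year-old attacker from the scenario:
print(trigger_unlocked(estimated_age=14, is_attacking=True))  # False: gun stays locked
```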
The self-driving cars currently have roughly four desktop computers’ worth of computing stuffed in the trunk. So no, target-aware, self-limiting guns are infeasible for the foreseeable future.
“If it is possible to build a car that is smart enough to drive itself, should it not be possible to build a gun that is smart enough to know when not to shoot?”
No. The car is simply making decisions about how to get where you are going, with some heuristics on what physical objects to avoid on the road. You want the gun to make moral decisions, and extremely difficult, split-second ones at that. If we ever have software that can do that, I suspect it will be so close to artificial intelligence that we will not have to worry about guns: we could just have these AIs installed in robots that stop the shooters before they can fire, if these sentient robots don’t just kill all us puny humans first. 😉