Oct. 31, 2000
In a famous recent article, Bill Joy argues that genetic engineering, nanotechnology, and robotics create two types of frightening scenarios:
One way to grasp the formidability of the challenge posed by Joy is to use the following metaphor: what if God announced to us that starting in the year 2010, total annihilation of mankind would result from ANY instance of one of the following:
I find it useful to elaborate on these three metaphors. I will try to draw out what each problem might teach us about how to respond to Joy's warning.
This reminds me of the arguments over gun control. One side argues that we should focus our policing activity on guns. The other side argues that we should focus on people with unsavory backgrounds. In a world where the vast majority of guns will remain on the street under any regime, it seems to me that we have to do some of both.
(If you think that the solution is draconian gun control that takes away virtually all guns, then you may be right about guns, but you are missing the point. In this context, that would be like enacting a total ban on science.)
The gun control metaphor yields the following lessons:
(If you think that Jackie never would have an affair, then you may be right about Jackie, but you are missing the point. In this context, we never know when someone may have uncontrollable urges.)
(If you think that Jackie ought to be free to have sex with whoever she pleases, then you may be right about sex, but you are missing the point. In this context, if she brings another man to orgasm, life as we know it will cease to exist.)
It turns out that existing technology goes a long way toward solving this problem. We already have installed ELLEN, which is an acronym for Ellen, our neighbor next door. Ellen is home most of the time, and she can see everything that goes on in our house. She knows when Jackie is home, she sees who knocks on the door, and so forth. Ellen is endowed with tremendous curiosity and a propensity to talk. As a result, Jackie could not get away with anything at home.
This metaphor yields the following lessons:
We should be careful about how absolutist we become concerning privacy. We may be able to preserve the right to privacy with respect to strangers, corporations, and government entities. However, this may only be possible for those of us who agree to allow surveillance by friends. Moreover, friendly surveillance could include monitoring what we do on computers. Anyone who refuses to accept friendly surveillance might have to be subject to hostile surveillance.
Driving from the conference to the airport, I thought of an example that illustrates Joy's point: email spam. The vast majority of people believe that unsolicited email is wrong. Yet spam persists. If we cannot stop spam, then there is little reason to believe that we can stop other sorts of antisocial behavior just because the overwhelming majority of people wish to stop it.
(If you think that stopping spam is purely a matter of better email filtering, then you may be right, but you miss the point. In this context, assume that if a spammer is allowed to send out mass email, filters will be imperfect, and any failure will be catastrophic.)
I believe that to solve this problem, we have to take advantage of the following fact:
Networks of good people can become arbitrarily large, but networks of bad people will tend to break down. Therefore, people with large networks of character references can be trusted as good, and limitations can be placed on people without such networks.
When I give you a character reference, I can never be completely sure that you are good. However, I must have some information that leads me to believe you are good, and I must not have information that leads me to suspect that you are bad.
Suppose that everyone is either good or bad. A good person:
Now, suppose that we set up a class of email that has a "stamp." Whenever good people send email, they use "stamps." Whenever I accept an email with a "stamp," there is a probability of zero that it is spam. If this system takes hold, most ISP's will refuse to forward email without a stamp.
In order to put a stamp on email that I send, I must obtain and regularly renew an email license. To do so, I need a number of character references--call this number n. The number n might be 10 or 20. The person giving me a character reference is confirming that
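The license check described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the threshold n, the record format, and the idea of a "withdrawn" reference are my inventions for the example; the essay does not specify any data format.

```python
# Sketch of the email-license check: a sender qualifies for a "stamp"
# license when at least N_REQUIRED distinct people currently vouch for
# them. N_REQUIRED and the reference record shape are assumptions.

N_REQUIRED = 10  # the essay suggests n might be 10 or 20

def has_valid_license(references):
    """Return True when at least N_REQUIRED distinct referees
    are actively vouching for the applicant."""
    vouchers = {r["referee"] for r in references if not r["withdrawn"]}
    return len(vouchers) >= N_REQUIRED

refs = [{"referee": f"person{i}", "withdrawn": False} for i in range(10)]
print(has_valid_license(refs))       # True: ten active references
print(has_valid_license(refs[:9]))   # False: one short of the threshold
```

Renewal would simply rerun this check against the current set of references.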
There are two potential problems with this approach:
I call this the "sixdegrees" principle, based on something I observed at the web site sixdegrees.com. On that site, you designate people in your "first degree," who are immediate friends and associates. Your "second degree" is their immediate friends and associates. Your "third degree" is the friends and associates one step further removed, etc.
Sixdegrees.com lets you view a graph of your connections. Most members of sixdegrees are part of "the big cloud," meaning that they richly connect with one another. However, there are other members who are isolated.
If you are a good person, then your character references will tend to put you into the "big cloud." If you are a bad person, your conspiracy of character references will yield a graph that is more isolated and connected to the "big cloud" only tenuously, if at all. This will be a red flag that will make the conspiracy easy to detect.
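The "big cloud" test above is a connected-components computation on the reference graph. The sketch below is one way it might work, with an invented edge list for illustration: members whose references leave them outside the largest component get the red flag.

```python
# Sketch of the "sixdegrees" red-flag test: treat character references as
# edges in an undirected graph, find the largest connected component (the
# "big cloud"), and flag members outside it. The edge list is invented.
from collections import defaultdict

def components(edges):
    """Return the connected components of an undirected edge list."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(graph[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def outside_big_cloud(edges):
    """Members not in the largest component -- the red-flag set."""
    comps = components(edges)
    big = max(comps, key=len)
    return set().union(*comps) - big

# Alice..Dave richly reference one another; Eve and Mallory vouch
# only for each other, so their conspiracy stands out as isolated.
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
         ("dave", "alice"), ("eve", "mallory")]
print(outside_big_cloud(edges))  # {'eve', 'mallory'}
```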
The problem of Sheep's Clothing is a bit more of a challenge. My hypothesis is that as someone starts to slip into badness, there will be a change in the pattern of character references when they renew their license. People will start to say "I've lost touch with this guy," and the person will have to ask for new character references. This will trigger a red flag.
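One way to operationalize that hypothesis is to compare the referee list at each renewal with the previous one and flag a low overlap. This is a sketch; the 0.5 threshold and the names are my assumptions, not from the essay.

```python
# Sketch of the "Sheep's Clothing" renewal check: if too few of the
# prior referees are still willing to vouch for the applicant, raise
# a red flag. The min_overlap threshold of 0.5 is an assumption.

def renewal_red_flag(previous, current, min_overlap=0.5):
    """Flag a renewal when fewer than min_overlap of the prior
    referees appear again in the current reference list."""
    if not previous:
        return False  # first license: nothing to compare against
    retained = len(set(previous) & set(current)) / len(set(previous))
    return retained < min_overlap

old = ["ann", "ben", "cal", "deb"]
new = ["ann", "eli", "fay", "gus"]   # three of four prior referees gone
print(renewal_red_flag(old, new))    # True: only 25% retained
print(renewal_red_flag(old, old))    # False: full retention
```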
The lessons from this metaphor are:
My favorite examples of governing bodies that work well are the Internet Engineering Task Force (IETF) and the World Chess Federation (WCF). I am not close to either of them, and perhaps the old adage applies: if you want to enjoy eating sausage, don't watch how it's made. However, from a distance, the IETF appears to provide a "just-in-time" approach to setting standards to solve problems. From a distance, the WCF appears to maintain a rating system for players that has meaningful measures of skill at chess.
Now, imagine that everyone in the world is given an "ethics rating" that is analogous to a chess rating. Maybe 2500 would be the highest, and 0 would be the lowest. Your rating would affect how you could use various technologies. "Ethical grandmasters" would be allowed to do advanced research in biotechnology and robotics.
If there is some research that ought to be renounced, then "ethical grandmasters" naturally would have the wisdom to renounce it. Meanwhile, "ethical novices" would be given less research freedom than the "ethical grandmasters."
The research rules would be set by "ethical engineering task forces." These EETF's would be analogous to the IETF, which sets standards for the Internet.
One of the EETF's would have to maintain the system for everyone's "ethics rating." There are a variety of systems that can be imagined. One approach would work like this:
At each point in time, you ask 10 people to give you an ethics rating, and your rating is the average of those 10 ratings. (If you have only five character references, then the other five are set at zero and then averaged in.) When I give you an ethics rating, I can give you a rating that is no higher than my own rating.
Under this system, in order to have a good rating, you have to be regarded as ethical by people who themselves are regarded as ethical. Thus, this scheme takes advantage of the "six-degrees" principle.
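The rating rule just described can be sketched directly: each referee's score is capped at the referee's own rating, and absent referees are averaged in as zeros. The specific numbers below are illustrative only.

```python
# Sketch of the ethics-rating rule: up to ten referees each assign a
# score, no referee may give a score above their own rating, and
# missing referees count as zero. Names and numbers are illustrative.

MAX_REFEREES = 10

def ethics_rating(referee_scores):
    """referee_scores: list of (referee_own_rating, score_given) pairs."""
    capped = [min(own, given) for own, given in referee_scores[:MAX_REFEREES]]
    capped += [0] * (MAX_REFEREES - len(capped))  # absent referees count as 0
    return sum(capped) / MAX_REFEREES

# Five referees; one (rated 1800) tries to award 2000 and is capped.
scores = [(2500, 2000), (2200, 2000), (1800, 2000), (2400, 2000), (2000, 2000)]
print(ethics_rating(scores))  # (2000+2000+1800+2000+2000+0+0+0+0+0)/10 = 980.0
```

Note how having only five referees pulls the average down sharply, which is exactly the incentive the scheme relies on: a good rating requires a full slate of well-rated referees.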
These ethical judgments are entirely subjective. I am not concerned with objectivity, fairness, or consistency. That would be a problem only if my choice of character references were imposed upon me--the way it is in a faculty tenure decision, for instance. Instead, I assume that I can ask anyone I choose to give me a character reference.
We need some initial character ratings to "seed" the system. I can imagine that you would get somewhat different outcomes for the collective conscience depending on how you seeded the system. But I would hope that there are many reasonable seeding systems that would yield good results.
For example, one approach would be to give a rating of 2500 to a few dozen people. If it were up to me, some of the people to whom I would assign this rating include:
The "ethics rating" system would expand slowly at first. Many people would find it difficult to obtain ten ratings from the first few dozen grandmasters. Some people would have to settle for relatively low ratings at first, and hope that they could improve over time.
We might deliberately keep the expansion slow until we see how things work. For example, you could slow the expansion by assigning a newly rated person a "provisional" status. If my rating is only provisional, then I am not allowed to rate anyone else. Eventually, my status would shift from "provisional" to "established," and I could rate other people.
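The provisional rule above amounts to a simple status gate. In this sketch, what triggers promotion is left abstract (the essay does not say); the `promote` step stands in for whatever condition the EETF might set.

```python
# Sketch of the provisional/established gate: a newly rated person is
# "provisional" and may not rate others until promoted. The promotion
# trigger is unspecified in the essay and is abstracted here.

class Member:
    def __init__(self, name):
        self.name = name
        self.status = "provisional"

    def promote(self):
        """Stand-in for whatever condition ends the provisional period."""
        self.status = "established"

    def can_rate_others(self):
        return self.status == "established"

m = Member("newcomer")
print(m.can_rate_others())  # False: provisional members may not rate
m.promote()
print(m.can_rate_others())  # True
```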
Eventually, however, the process would start to gather momentum, as the number of people with "established" ethical ratings reaches critical mass. At that point, it will become easier to obtain a rating. Of course, one would hope that it would be difficult for a bad person to obtain a good rating.
If some of the recommendations in this essay surprise you, then you are not alone. When I first began to consider the problem, I did not anticipate that the solution would take this shape. That may be a sign that I will change my mind fairly soon.