Tyler Cowen writes,
It remains the case that the most significant voluntary censorship issues occur every day in mainstream non-internet society, including what gets on TV, which books are promoted by major publishers, who can rent out the best physical venues, and what gets taught at Harvard or for that matter in high school. In all of these areas, universal intellectual service was never a relevant ideal to begin with.
The original Internet architecture was “smart ends, dumb network.” The smart ends are the computers where people compose and read messages. The “dumb network” is the collection of lines and routers that transmits the bits.
Suppose you create a message, such as an email, a blog post, or a video. When your computer sends the message, it gets broken into packets. Each packet is very small. It has a little bit of content and an address telling where it is going. The Internet’s routers read the address on the packet and forward it along. In Ed Krol’s metaphor, the Internet routers and communication lines act like the Pony Express, relaying the packet to its final destination, without opening it up to see what is inside. The dumb network transmits these packets without knowing anything about what is in them. It does not know whether the packet is an entire very short email or a tiny part of a video.
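To make the idea concrete, here is a toy sketch in Python. The packet format (a destination address, a sequence number, a packet count, a payload) is invented for illustration; real Internet packets carry richer headers, and real payloads are bytes rather than characters.

```python
# A toy sketch of packetization, not a real protocol. The fields and the
# tiny payload size are invented so a short message needs many packets.
PAYLOAD_SIZE = 8

def packetize(message: str, destination: str) -> list[dict]:
    """Split a message into small packets, each carrying its own address."""
    chunks = [message[i:i + PAYLOAD_SIZE]
              for i in range(0, len(message), PAYLOAD_SIZE)]
    return [{"destination": destination, "seq": seq,
             "total": len(chunks), "payload": chunk}
            for seq, chunk in enumerate(chunks)]

packets = packetize("Meet me at noon by the clock tower.", destination="192.0.2.7")
print(packets[0])
# {'destination': '192.0.2.7', 'seq': 0, 'total': 5, 'payload': 'Meet me '}
```

Each packet can now travel on its own; the routers need only read the destination field.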
When a message reaches your computer, it arrives as one or more packets, usually more than one. Your computer opens the packets and figures out how to put them back together to form the message. It then presents you with the email, the blog post, the video, or what have you.
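Continuing the toy sketch, the receiving end collects the packets, puts them in order by sequence number, and joins the payloads back into the message:

```python
# Continues the sketch above (reuses the invented packet format and the
# `packets` produced by packetize).
def reassemble(packets: list[dict]) -> str:
    """Rebuild the original message from its packets."""
    assert len(packets) == packets[0]["total"], "still waiting on some packets"
    ordered = sorted(packets, key=lambda p: p["seq"])  # arrival order may differ
    return "".join(p["payload"] for p in ordered)

print(reassemble(packets))  # Meet me at noon by the clock tower.
```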
A connection between one end and the other stays open only long enough to send and receive each packet. For any given message, I may receive many packets from you, but those packets can travel different paths through the network, so in effect each packet uses a different end-to-end connection. Think of end-to-end connections as intermittent rather than persistent.
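Again continuing the sketch, we can mimic this: each packet independently picks a route (the route names are made up), the arrival order gets scrambled in transit, and the sequence numbers still let the receiver put the message back together:

```python
import random

# Continues the sketch above (reuses `packets` and `reassemble`).
routes = ["via-chicago", "via-denver", "via-atlanta"]
in_transit = [dict(p, route=random.choice(routes)) for p in packets]
random.shuffle(in_transit)  # packets need not arrive in the order sent

for p in in_transit:
    print(p["seq"], p["route"])
print(reassemble(in_transit))  # the message survives the scrambling
```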
Some consequences of this “smart ends, dumb network” architecture:
1. The network cannot identify spam, because it does not even know that a packet is part of an email message. If it did, spam could be deterred by charging senders a few cents for each email unless the recipient waived the charge.
2. The network does not know when the packets it carries will be re-assembled into offensive content. If it did, censorship would be easier to implement.
3. The network does not know the identity of the sender of the packets or the priority attached to them. In that sense, it is inherently “neutral.” The network does not know the difference between a life-or-death message and a cat video. The sketch after this list makes the point concrete.
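Here is a caricature of that neutrality, a sketch rather than real router code, with an invented routing table: the forwarding step consults only the destination address in the header, so nothing in it could tell a life-or-death message from a cat video.

```python
# A caricature of a "dumb" router. The routing table and link names
# are invented for the example.
ROUTING_TABLE = {"192.0.2.7": "link-A", "198.51.100.9": "link-B"}

def forward(packet: dict) -> str:
    """Choose an outgoing link from the destination address alone.

    Nothing here reads packet["payload"], so the router cannot tell
    spam from a love letter or a cat video from an emergency.
    """
    return ROUTING_TABLE[packet["destination"]]

print(forward({"destination": "192.0.2.7", "payload": "cat video bytes"}))    # link-A
print(forward({"destination": "192.0.2.7", "payload": "urgent: call home!"})) # link-A
```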
I get the sense that this original architectural model may no longer describe the current Internet.
–When content is cached on the network or stored in the “cloud,” it feels as if the network is no longer ignorant about content.
–Many features, such as predictive typing in a Google search, are designed to mimic a persistent connection between one end and the other.
–When I use Gmail, a lot of the software processing is done by Google’s computers. That blurs the distinction between the network and the endpoints. Google is performing some of each function. Other major platforms, such as Facebook, also appear to blur this distinction.
The new Internet has advantages in terms of speed and convenience for users. But there are some potential choke points that did not exist with the original architecture.