on Diamond’s book, Openness to Creative Destruction.
One point of interest is that Diamond offers a contrarian take on the common story that the Internet came from government and DARPA. He argues that DARPA’s vision was largely to connect mainframe computers at research institutions. The full personal computer revolution and network-of-networks owe more to Bob Taylor, who quit DARPA in frustration to go to Xerox PARC. I am not necessarily ready to give up the conventional story, but I recommend listening. This segment is somewhere in the final third of the podcast.
To explore this point further, I went back and re-read Where Wizards Stay Up Late, a history of the Internet published in 1996 by Katie Hafner and Matthew Lyon. The book centers on the development of the first router, built by Bolt, Beranek and Newman using a Honeywell computer. It was the size of a refrigerator and weighed 900 pounds. The book becomes uneven after that. Some of the sections are quite interesting, but others cover events and controversies that are long forgotten, and justifiably so.
In the end, I am not persuaded by Diamond’s take on the Internet. ARPA (the predecessor of DARPA) really was at the heart of developing the long-distance computer network. Once it was up and running, it was transferred to a different defense department agency, which stopped innovating. But then the National Science Foundation started developing a research network, and that evolved into the Internet as we know it, with TCP/IP as created by Vint Cerf and Bob Kahn. A lot of research and development took place outside of government, but overall I think that government sponsorship deserves the bulk of the credit.
Diamond also has an interesting take on the way that the requirement for clinical trials in medicine constrains and distorts the sort of research that is undertaken.
Tymnet and Telenet were two packet-switching companies around long before the Internet, and they provided network services. These two private companies used a circuit-setup technique: a path was established first, and data was then sent along it. The ARPANET did run-time routing.
The entire TCP/IP protocol is defined in a ten-page document. Not much research involved. It came about because another increment in Moore’s Law allowed run-time routing.
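To make that contrast concrete, here is a minimal Python sketch (hypothetical names, not any real system’s code) of the two forwarding styles: a virtual-circuit switch that forwards by a pre-established connection ID, versus a datagram router that decides the next hop per packet at run time.

```python
# Hypothetical sketch of the two forwarding styles described above.

# Virtual-circuit style (Tymnet/Telenet/X.25): a central mainframe
# installs the circuit first; the switch then forwards by connection
# ID alone, with no per-packet decision to make.
class CircuitSwitch:
    def __init__(self):
        self.circuits = {}  # connection ID -> output port

    def set_up_circuit(self, conn_id, out_port):
        self.circuits[conn_id] = out_port  # done once, before any data flows

    def forward(self, conn_id, payload):
        return self.circuits[conn_id]  # trivial table lookup

# Datagram style (ARPANET/IP): every packet carries a destination
# address, and each router holds its own map and decides at run time.
class DatagramRouter:
    def __init__(self, routes):
        self.routes = routes  # destination -> output port

    def forward(self, packet):
        return self.routes[packet["dst"]]  # decided per packet

# Usage: the circuit switch needs setup before data; the router does not.
switch = CircuitSwitch()
switch.set_up_circuit(conn_id=7, out_port=2)
assert switch.forward(7, b"hello") == 2

router = DatagramRouter({"ucla": 1, "stanford": 2})
assert router.forward({"dst": "stanford", "data": b"hello"}) == 2
```

The circuit switch does almost nothing per packet, which is why it fit the tiny assembly-coded machines of the day; the datagram router has to hold a full map in memory, which is the Moore’s Law increment the comment is pointing at.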
The comments for the EconTalk podcast with Arthur Diamond point out that Diamond was wrong about Robert Taylor’s involvement with the development of the Internet at Xerox PARC, and Diamond acknowledges his mistake.
Regardless, I don’t think anyone who talks about “The Internet” is talking strictly about the TCP/IP protocol; they are talking about the World Wide Web (HTTP/HTML), which runs on top of TCP/IP.
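That layering is easy to see directly. Here is a minimal sketch (example.com is just a stand-in host) in which HTTP is nothing but text written over an ordinary TCP connection, which itself rides on IP:

```python
import socket

# HTTP is just text sent over a TCP connection; TCP/IP does the
# transport underneath. (example.com is a stand-in host.)
with socket.create_connection(("example.com", 80)) as s:  # TCP/IP layer
    s.sendall(b"GET / HTTP/1.1\r\n"
              b"Host: example.com\r\n"
              b"Connection: close\r\n\r\n")               # HTTP layer
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```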
I really think the internet is an open source success story, not a government one.
If TCP/IP hadn’t been open source after it was written, we’d have a network of many incompatible protocols. If Bell Labs hadn’t open sourced Unix (sort of) and included TCP/IP in it, we’d never have had all the apps like email. And if TCP/IP hadn’t been free, Microsoft would have developed their own thing, instead of just grabbing WinSock.dll and all the Unix apps that had been ported to it.
Much later, of course, you get Linux, Apache, the first Firefox browsers, JavaScript, etc. Take away all the open source code and you don’t have much of an internet left.
Technical standards of all kinds have been influential regardless of whether they are backed by government players or private consortiums. Once any standard gains traction in the marketplace, the organizational structure of its founding matters little.
Academic institutions and other government-backed organizations had a very large impact on TCP/IP, Unix, and RISC. Microsoft had a Unix license (Xenix) and their own non-routable network protocol (NetBIOS) used in Microsoft/IBM LAN Manager. Eventually Microsoft implemented both a reverse-engineered version of IPX (Novell NetWare) and TCP/IP and included them in the OS. If Microsoft had attempted to promote a non-standard routable protocol as an alternative to TCP/IP, it would have failed.
I don’t think that anyone who closely followed the explosion of The Web as Netscape took it mainstream in 1995 thought that Open Source was a fundamental factor. Open Source certainly took off afterwards, but I think you may be putting the cart before the horse.
By 1995, when the web was being implemented, the browsers, the web servers, and the OS were all open source projects. So I don’t think you can say the web was anything other than an open source success story.
As for the internet protocols, I remember lots of discussion in the 80s about how CompuServe (later MSN and AOL) and other closed-garden services were the future, and that Unix/TCP was this doomed university thing, with no single standard and with IP legal issues due to Bell Labs trying to recapture Unix.
Linux and WinSock (both open source projects) revived Unix as a contender and laid the groundwork for what came later. Do you really think the time and money DARPA put into the early net is more important than Linux to the eventual shape of the net?
In 1995, Netscape’s browser was not Open Source. Microsoft’s Internet Explorer quickly displaced it. Netscape later Open Sourced Navigator/Communicator as Mozilla, but an Open Source browser did not gain mainstream traction until Mozilla Firefox sometime in the early-to-mid 2000s. The First Browser War was not an Open Source phenomenon.
In the early days/years after 1995, Sun Microsystems was booming, selling hardware used for web servers. Netscape, Microsoft, and the major Enterprise software players were selling proprietary web server software.
The Open Source Apache HTTPd server was important (but not dominant) right from 1995, but the LAMP stack took time to gain traction, starting with inexpensive web hosting providers and slowly moving into the mainstream. Look at the launch dates of the main LAMP-stack successes (Wikipedia, WordPress, Magento, etc.) and you will get a gist of the adoption timeline.
I’m not saying Open Source was not important; it was. But so was the government/academic contribution to TCP/IP, Unix, etc.; so were the proprietary HTML/HTTP software products; and so were the major Enterprise Unix vendors. There was a timeline, all the pieces fed off one another in a virtuous cycle, and Open Source is now the dominant model, but it was not the prime mover, in my opinion.
The rise of the internet is a success story of open standards. The role of the US government was crucial not only in sponsoring the initial development, but also in using its purchasing power to mandate the use of these standards so as to avoid becoming dependent on proprietary ones.
When Microsoft and Novell were slugging it out for small-server domination, the default network protocol in those environments was IPX (and token-ring networks, oh joy). With regard to open source software, I’d go as far as to say it was widespread internet availability, enabling collaboration by loosely coupled organizations and individuals, that made it a success story (probably add the commoditization of PC hardware as the other essential factor).
I read the comments; it was Moore’s Law.
Just before TCP/IP, the microprocessor was in its infancy: 4 and 8 bits, 1 or 2 MHz. The minicomputers were using piece-built, microcoded processor boards, much faster but much more expensive.
X.25 was big, as were Telenet and Tymnet, because they had simple routers written in assembly, and the network map lived in the mainframe. The mainframe set up circuits in the routers, creating a map from a connection ID to a physical output port. The routers could be simple switches, done in assembly, and the throughput delay was minimal. Memory at the time was very small unless you got a great big heavy magnetic memory thingy.
Then processors became 16-bit, memory expanded to megabytes, and the idea of having each router keep a copy of the map became possible. TCP was moved down, but there was always an ad hoc transport layer verifying message completion. The idea of a hierarchical address system was already established and used in the mainframe network maps. Then, as mentioned, the initiators took a couple of trips to UCLA and Stanford and hashed out the details of the self-routing protocol, all ten pages of it. Then the universities adapted it internally for file transfer and email, all invented in the universities.
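A toy illustration of the “map in every router” idea (hypothetical names, nothing like the actual ARPANET code): with hierarchical addresses, each router’s copy of the map reduces to a longest-prefix lookup.

```python
# Hypothetical per-router map keyed by hierarchical address prefixes.
# Each router holds its own copy and picks the most specific match.
ROUTES = {
    "10":     "port-to-backbone",  # whole campus
    "10.1":   "port-to-cs-dept",   # more specific subnet
    "10.1.7": "port-to-lab",       # most specific
}

def next_hop(dst: str) -> str:
    """Longest-prefix match over dotted hierarchical addresses."""
    parts = dst.split(".")
    # Try the most specific prefix first, then fall back to shorter ones.
    for n in range(len(parts), 0, -1):
        prefix = ".".join(parts[:n])
        if prefix in ROUTES:
            return ROUTES[prefix]
    raise KeyError(f"no route to {dst}")

assert next_hop("10.1.7.42") == "port-to-lab"
assert next_hop("10.2.3.4") == "port-to-backbone"
```

The hierarchy is what keeps each router’s table small: a router only needs entries down to the level of detail it actually handles, which is exactly why it suited campus-wide node mapping.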
Something else was driving the issue; I call it Morse’s Law: data rates have doubled about every five years since Sam tippy-tapped. The phone company opened up after the monopoly suit, and T-1 lines became available at higher speed, 1.5 Mbps rather than the dial-up speeds of 16 kbps. The companies were grabbing T-1 lines to connect to the mainframes, and so did the universities. That was a big driver for TCP/IP.
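Taking the comment’s own figures at face value, a quick back-of-the-envelope check of that doubling rate:

```python
import math

# Back-of-the-envelope check using the comment's own figures:
# 16 kbps dial-up vs. a 1.5 Mbps T-1 line.
dial_up = 16_000      # bits per second
t1 = 1_500_000        # bits per second

doublings = math.log2(t1 / dial_up)
years = doublings * 5  # "double about every five years"
print(f"{doublings:.1f} doublings, roughly {years:.0f} years of growth")
# -> 6.6 doublings, roughly 33 years of growth
```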
I should point out that IBM and the other mainframers, now gone, knew that multiplexing was coming into the mainframe software, no longer in a separate machine at the data center. IBM had, or was developing, its own addressing protocol for enterprise networks, because customers wanted any-to-any connections within the enterprise, and hardware was becoming cheaper and data faster.
The universities needed that same thing, university-wide node identification and mapping, and the IP hierarchy was the obvious choice. But the university kids likely just implemented a simple IP network on the spot, during discussions, and started testing and verifying the ten-page protocol.