James Pethokoukis quotes (but does not link to) what he calls a Citi report, “The Future of Technology and Employment,” as saying,
The upcoming digital age may cause more upheaval than previous technological revolutions as it is happening faster than before and is fundamentally changing the way we live and work.
The classical economic theory is that because we have unlimited wants, better technology will not eliminate jobs. The economy just needs to develop new patterns of specialization and trade.
If technology changes slowly, then there is plenty of time for entrepreneurs to come up with new types of work and for workers to adapt to the new needs in the workplace. However, Moore’s Law produces much faster change than what we saw during the Industrial Revolution. And it’s not as if the social dislocation of the Industrial Revolution was anything to sneeze at.
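As a rough back-of-the-envelope illustration (the numbers here are mine, not from the post): if processing power doubles every two years, two decades of Moore's-law growth compound to roughly a thousandfold improvement.

```python
# Back-of-the-envelope compounding under Moore's law,
# assuming one doubling every two years.
def moores_law_factor(years, doubling_period=2):
    """Multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(20))  # 1024.0 -- about a thousandfold in twenty years
```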
I would argue that what we are observing today and are likely to observe in the foreseeable future is much more influenced by the short-run dislocation than by the long-run equilibrium. And the long-run equilibrium may involve a lot of modification of human characteristics, using genetic engineering and computerized implants.
Ummm, you know that Moore’s Law is substantially less relevant now than it was during the ’80s and ’90s, right? From the perspective of single-core computing, Moore’s law has been dead for 10 years.
I think he is using a loose definition of “Moore’s law” that basically just means processing power instead of transistor density. And that version is quite relevant.
I don’t really agree. Using multiple cores is a good deal more challenging than using a single core. Things that can scale horizontally (i.e., just add more core/boxes), sure, those have been getting better over the past decade. But it helps substantially less if there isn’t a great way to chop the work up onto multiple cores.
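The point about work that doesn't chop up cleanly is essentially Amdahl's law: the serial fraction of a task caps the speedup you can get from adding cores. A minimal sketch (function name and numbers are mine, purely illustrative):

```python
# Amdahl's law: speedup from `cores` processors when a fraction
# `serial` of the work cannot be parallelized.
def amdahl_speedup(serial, cores):
    return 1.0 / (serial + (1.0 - serial) / cores)

# Embarrassingly parallel work (serial fraction ~0) scales linearly...
print(amdahl_speedup(0.0, 16))  # 16.0
# ...but even a 10% serial fraction caps 16 cores far below that.
print(amdahl_speedup(0.1, 16))  # 6.4
```

This is why "just add more cores/boxes" helps the horizontally scalable workloads and does much less for everything else.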
I think he is using the term in a way even looser than triclops mentioned. Information technology opened up a huge orchard of low-hanging fruit that we have barely started to pick. The quick advances driving Moore’s law were such fruit. The new kinds of internet/web businesses are another. Other ones which we seem to be about to pluck are (physical) robots, wearables/mobile computers, and prosthetic computers.
All these technologies will develop unusually fast because a key component of them (computers) has gone from being inconceivable to easy. Of course, the same was true of steam power in the industrial revolution.
Totally disagree that technological revolutions are happening faster now or are more disruptive. Telegraphs erupted in the 1840s and changed the world from 10 mph communications to 669,000,000 mph communications in just a few years. Include steamships and railroads, and the 30 years from 1830 to 1860 saw immense changes that dwarf anything that’s happened to anyone now living (not to mention surgical anesthesia and the reaper…).
Probably 1880 or so to 1910 or 20 is another disruptive period far greater than what we are now experiencing, with the widespread adoption of electricity and electric lights, electric motors, automobiles, telephones, movies, radio, airplanes.
So I think Dr. Kling’s assertion that Moore’s law produces much faster change than what we saw in the Industrial Revolution is wrong. The change that he and I have seen in our lifetimes is nothing compared to the revolutions in technology that swept rapidly through the western world in the nineteenth century (and for that matter in the early 20th century).
Sure, but what is the lower bound to the equilibrium salary of those jobs? The upper bound is “what a machine capable of doing as good of a job would cost”, which tends to shrink with improving technology. With a few specialized machines, the humans they replace just move to the different jobs that are created by increasing wealth, but as the machine options become more diverse and more capable, the job options machines can’t fill may be harder to find.
When technology makes something vastly cheaper, everybody cheers except for the people who were foolish enough to hold much of their wealth undiversified in the newly-cheap commodity. From an aggregate utilitarian standpoint, that’s not so bad when the commodity is aluminum and a few magnates get hurt, it’s worse when the commodity is farm production and a lot of farm owners get hurt, and it’s terrible when the commodity is human labor and everybody who isn’t independently wealthy gets hurt. At least people losing their farms still had human capital to fall back on, and the value of that human capital kept increasing as technology augmented it. If technology instead starts substituting for it, what else is left? Is there a modern analogy to “You can’t compete with factory farming, but you can still work in a factory”? I don’t think “You can’t compete with robotic workers, but you can still program robots” is going to work for as large a fraction of the population, at least not until/unless that genetic engineering and implants and other transhuman magic offers some assistance.
Whenever I hear talk of technology veering away from the available skill pool, I have to wonder why this should be any more likely than, say, having technology develop in a manner inconsistent with the availability of other resources. Of course it is true that human capital is at least capable of changing in response to technological change; but precisely to the extent that such adaptation is difficult, technology will itself tend to take the fact into account–just as it takes into account the relative scarcity of inputs of all kinds. Commentators ought to keep in mind that, outside of universities and government, where the usual economic motivations are less important, technology is after all no less an economical phenomenon than a technical one. So far as I’m aware, every innovation that caught on during the Industrial Revolution did so because it was a better response to prevailing scarcities than what existed before, and never despite the fact. I see no reason why this shouldn’t remain the case today.
I have been thinking for a while that it is the deltas that matter in economics. There is nothing in economic theory that says machines cannot at times eliminate jobs faster than new jobs are added. Having said that, I still expect a bright future: the process will most likely slow down at some point and go the other way, with jobs added faster than they can be eliminated.
Computer ability to replace human work is not a simple function of processor speed or transistor count. Suppose that every mobile device had the power of today’s fastest supercomputer — so what? How would that displace human jobs? What algorithms do we have where we can see the clear potential to replace human work but for the fact that they just don’t run fast enough on today’s hardware? I just don’t know of any.
Arnold should read:
http://www.ribbonfarm.com/2013/07/10/you-are-not-an-artisan/
Particularly the section called “Machines as Children, Humans as Intestinal Fauna”. The point is that Moore’s law notwithstanding, machines are really not improving rapidly at ‘processing the arbitrariness of the world’:
————————-
First, machines are like children. The opposite of the overlord personification we’ve been encouraged to adopt by science fiction.
Like parents, we have to let them have the fun while we child-proof the environment (sanitize their inputs) and clean up after them (do whatever they are too clumsy to do and clean up any messes they create). They may not (yet) crave status or social identities, but they certainly look for easy-to-learn high-flow tasks: algorithmically scalable work (which is a sort of aspie-sexy choice of work if you think about it).
Second, humans are like the intestinal fauna in the body of technology. I don’t recall where I first heard this analogy, but it isn’t original to me.
The data refining example illustrates the second analogy: without intestinal fauna, humans and other animals could not digest many nutrients. Even dish-washing is hard for machines, so we have to pre-clean to some extent before loading dishwashers.
Without humans inhabiting their guts, technological systems cannot process much of the arbitrariness of the world (Amazon’s Mechanical Turk illustrates this most dramatically at scale).