Too much discipline on the left, too little on the right

Check out my latest essay on Medium, Restoring Political Health, Left and Right.

People with a temperament that is high on openness and low on conscientiousness are inclined toward the left and tend to be curious, tolerant, and willing to explore the world with a commitment to intellectual honesty. People with a temperament that is low on openness and high on conscientiousness are inclined toward the right and tend to emphasize standards of decency, restraint, and good behavior. As shorthand, I will refer to these as the left and the right, respectively. We are plagued today by an authoritarian left and a badly behaved right. Restoring health will require work on both sides.

The opening of the essay owes something to Jordan Peterson’s observations about psychology and politics (e.g., here or in his conversation with Jonathan Haidt), as well as to his use of metaphor. But the substance of the essay represents issues that have long concerned me. A dozen years ago, I had the insights that were the seeds of The Three Languages of Politics in an essay that I called Folk Beliefs Have Consequences.

The Paradox of Profits, parts 2 and 3

Part 2 talks about the necessity of the profit system.

In a modern, large-scale economy, coordination takes place through a combination of bosses and profits. Bosses order people to undertake particular tasks. Profits and losses provide incentives to engage in certain economic activities and to curtail others.

Part 3 talks about the risks of trying to “fix” outcomes of the profit system.

1. The profit system is partially self-correcting.
2. Attempts to impose corrections are not as successful as one might hope.
3. Rather than attempt to identify and correct market failures, it would be better to advocate policies that enhance the self-correcting mechanisms of the profit system. In particular, government interventions should be focused on enabling competition to overcome entrenched economic power.

The essays are now attracting readers. But my guess is that I am almost entirely preaching to the converted.

Equity without capital, twenty years later

I received a review copy of Capitalism without Capital: The Rise of the Intangible Economy, by Jonathan Haskel and Stian Westlake, which has a 2018 copyright date.

1. My first reaction is to be a bit miffed that my name is not in the index. Nick Schulz and I wrote a book on the intangible economy, and the first edition appeared in 2009. Going back even further, in 1998 I wrote an essay called Equity without Capital. That essay is still interesting to read, and it anticipated some of the central issues in their book. But probably fewer than 200 people saw it when I wrote it.

2. Hal Varian and Carl Shapiro aren’t in the index, either. That is a less forgivable omission. Information Rules sold well.

3. I hurried through the book, and I was inclined to give it a mixed review. But when I re-read it, I only re-read the sections that I liked the first time. I decided that those sections are really good. Now I am inclined to give the book a strong recommendation.

4. The organization of the book is excellent. The good news is that you usually can skip to the end of the chapter and read its conclusion to get the main point. The bad news is, well, why not just condense the book into an extended essay? And if you left out the sections of the book that did not do much for me, the extended essay would work even better.

Gosh, I am really being hard on them, for some reason. It really is a first-rate book. I’m not sure why I keep wanting to talk about what I don’t like about it. But, here I go again:

5. They make a big deal about recent literature that arrives at measures of intangible capital. However, as they point out, such measures are fraught.

Their analysis says that intangible capital has four s’s: sunk costs (investments in assets that cannot be re-sold); scale (network effects and path dependency can bring very high returns); synergies (combinations of ideas are worth more than the ideas are worth separately); and spillovers (ideas are easily copied or imitated).

This implies, as they recognize, that intangible capital can be worth much more than what it costs to obtain, because of scale and synergies. But it can be worth much less than what it costs to obtain, because of sunk costs in non-marketable assets. In bankruptcy, you can sell off the office furniture and the fleet of trucks (tangible assets), but not the business process that proved unsustainable or the failed attempt to establish a brand (sunk costs).

But the measures of intangible capital use acquisition cost as the measure of investment in intangible capital. That may be a reasonable way to value tangible capital. But to me, their four s’s imply that intangible capital’s value cannot be reliably represented by its acquisition cost.

To get technical, Tobin’s q is the ratio of the market value of capital to its replacement cost. Think of it as the ratio of the stock price of a firm to the acquisition cost of its assets. For tangible capital, q should be close to 1. But for firms with a lot of intangible capital, like The Four, it is much, much greater than 1. Tyler Cowen’s recent column, Investors are celebrating the tech revolution, says that the current high values of q are a positive signal about future economic growth.
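To make the ratio concrete, here is a minimal sketch (the numbers are hypothetical, not drawn from Cowen's column or the book):

```python
def tobins_q(market_value: float, replacement_cost: float) -> float:
    """Tobin's q: the market value of a firm's capital divided by the
    cost of replacing (re-acquiring) its assets."""
    return market_value / replacement_cost

# A firm whose assets would cost $50B to acquire but whose market cap
# is $400B (hypothetical numbers) has q = 8. For intangible-heavy firms,
# the gap between the two numbers is the puzzle the book wrestles with.
print(tobins_q(400e9, 50e9))  # 8.0
```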

Of course, for many dotcom stocks in the 1990s, q shot way up before dropping to zero, which is what my essay was predicting. But by the way, one of the stocks I was skeptical about back then was Amazon, and if you had held onto that, the losses on the rest of your '90s dotcom portfolio might not trouble you.

Looking at this balance between superstar value and failure, the authors propose that, well, on average, the value of intangible capital for the whole economy ought to be somewhere close to what it costs. I thought they were just hand-waving at that point.

They understand well enough that intangible capital is not exactly like tangible capital in the neoclassical model. But I do not think that they are as ready as I am to take the next step and jettison the neoclassical framework.

Telepresence

I put up a short science fiction story on Medium about telepresence using augmented reality. You may wonder why.

1. Aaron Ross Powell put in a strong plug for Medium.

2. I remember Robert Metcalfe once being asked what the Internet's ultimate killer application would be. He replied, "telepresence."

3. When I posted the other day about the shortfalls in technology relative to Ray Kurzweil's predictions, it made me think of my own questionable prediction, Headsets, which I'm still longing for in a way. Think of this as an updated version of that essay. Sort of.

Stranger = Danger

James Bridle writes,

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

Actually, trying to excerpt from the piece is almost pointless. I needed to go through the whole thing to really understand the issue, and my guess is you have to read the whole thing, too.

Toward the end, he writes,

It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want.

My take is this:

1. The Internet has always been vulnerable to frustrating forms of abuse. Email spam is still with us, for crying out loud. The main culprit is anonymity. TCP/IP was not designed to have the network identify the sender. Instead, to the extent that identity is resolved, it is by the recipient. So spam filters are the best you can do against spam. Getting rid of spam altogether would require a more expensive protocol that puts identity checks somewhere in the network, and it seems that the cost of transmitting all the spam and filtering it out is less than the cost of putting identity checks into the network. At least, that is my understanding.

2. “On the Internet, no one knows you’re a dog” has always been both a bug and a feature. It is a lot easier to engage in abuse if you can remain anonymous than if you have to reveal your identity to get into the game. But if you forfeit anonymity, then you allow governments and corporations to engage in surveillance. You also increase the potential for censorship and stifling of opinion.

3. Facebook and LinkedIn created environments where you can share your identity rather than hide it, and that serves a real need.

4. In general, I can imagine a situation in which some regions of cyberspace allow anonymity and others do not. You will know that when you enter a region that allows anonymity, it will not be well policed. It will be like walking into a bad neighborhood at night.

5. It strikes me that the “fake news” problem on Facebook shows that it has been straddling both regions. Why is it easy to spam Facebook? My guess is that a lot of Facebook users would prefer a good neighborhood, so either Facebook will get rid of anonymous content providers or it will lose out to a competitor that does. Same with Kids’ YouTube.

6. Some people are really against all forms of surveillance. They want the regions in cyberspace that allow anonymity to be almost ubiquitous. I think that is neither practical nor desirable. I lean more toward the David Brin approach to the problem of surveillance. But if the social norms and institutions support censorship and suppression, then the “transparent society” won’t work.
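The point in item 1, that the network does not verify sender identity and so the recipient must filter on content instead, can be sketched as a toy filter (my own illustration with made-up keywords; real spam filters use statistical methods):

```python
SPAM_WORDS = ("free", "winner", "click now")  # hypothetical keyword list

def spam_score(message: str) -> float:
    """Recipient-side filtering: with no identity check in the network,
    all we can do is score the content that arrives."""
    text = message.lower()
    hits = sum(word in text for word in SPAM_WORDS)
    return hits / len(SPAM_WORDS)

def is_spam(message: str, threshold: float = 0.5) -> bool:
    return spam_score(message) >= threshold

print(is_spam("You are a WINNER, click now for your FREE prize"))  # True
print(is_spam("Meeting moved to 3pm"))                             # False
```

The asymmetry in the text falls out of the design: the filter runs at the endpoint, after the network has already paid to deliver the message.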

Complexity illustrated by the financial crisis

This IGM poll of leading economists on the importance of various factors in the financial crisis of 2008 provides interesting results. The poll lists 12 factors, and all of them receive at least some positive weight. In fact, this under-estimates the complexity of the causal mechanisms, because some of the factors are themselves multi-faceted. For example, the first factor, “flawed financial sector regulation and supervision,” could mean many different things to many different people. It could mean the repeal of Glass-Steagall (a favorite among non-economists on the left) or it could mean the Basel accords (one of my personal favorites).

Overall, I think it vindicates the broad, multi-causal approach that I took in Not What They Had in Mind.

Re-litigating Netscape vs. Microsoft

In WTF, Tim O’Reilly writes,

Netscape, built to commercialize the web browser, had decided to provide the source code to its browser as a free software project using the name Mozilla. Under competitive pressure from Microsoft, which had built a browser of its own and had given it away for free (but without source code) in order to “cut off Netscape’s air supply,” Netscape had no choice but to go back to the web’s free software roots.

This is such an attractive myth that it just won’t die. I have been complaining about it for many years now.

The reality is that Netscape just could not build reliable software. I know from bitter personal experience that their web servers, which were supposed to be the main revenue source for the company, did not work. And indeed Netscape never used its server to run its own web site. They never “ate their own dog food,” in tech parlance.

On the browser side, Netscape had a keen sense of what new features would enhance the Web as an interactive environment. They came up with “cookies,” so that when you visit a web site it can leave a trace of itself on your computer for later reference when you return. They came up with JavaScript, a much-maligned but ingenious tool for making web pages more powerful.
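To make the cookie mechanism concrete, here is a minimal sketch using Python's standard library (not the original Netscape implementation; the identifier is made up):

```python
from http.cookies import SimpleCookie

# The site "leaves a trace of itself": the server sends a Set-Cookie
# header, which the browser stores on the visitor's computer.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"      # hypothetical identifier
cookie["visitor_id"]["path"] = "/"
print(cookie.output())               # the header the server would send

# On a later visit, the browser echoes the value back in its request,
# and the server parses it out of the Cookie header.
returned = SimpleCookie()
returned.load("visitor_id=abc123")
print(returned["visitor_id"].value)  # abc123
```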

But Netscape’s feature-creation strategy backfired because they couldn’t write decent code. Things played out this way.

1. Netscape would introduce a feature into the web browser.
2. An Internet standards committee would bless the feature, declaring it a standard.
3. Microsoft would make Internet Explorer standards-compliant, so that the feature would work.
4. The feature would fail to work on the Netscape browser.

In short, Netscape kept launching standards battles and Microsoft kept winning them, not by obstructing Netscape’s proposed standards but by implementing them. Netscape’s software development was too incompetent to write a browser that would comply with its own proposed standards.

I’m sure that if Netscape could have developed software competently, they would have done so. But because they could not manage software development internally, they just gave up and handed the browser project over to the open source community. And need I add that the most popular browser today is not the open source Mozilla but the proprietary Chrome.

Here is one of my favorite old essays on the Microsoft-Netscape battle.

Re-litigating Open Source Software

In his new book, Tim O’Reilly reminisces fondly about the origins of “open source” software, which he dates to 1998. Well he might, for his publishing company made much of its fortune selling books about various open source languages.

In contrast, in April of 1999, I called open source The User Disenfranchisement Movement.

…The ultimate appeal of “open source” is not the ability to overthrow Microsoft. It is not to implement some socialist utopian ideal in which idealism replaces greed. The allure of the “open source” movement is the way that it dismisses that most irksome character, the ordinary user.

In that essay, I wrongly predicted that web servers would be taken over by proprietary software. But that is because I wrongly predicted that ordinary civilians would run web servers. Otherwise, that essay holds up. In the consumer market, you see Windows and MacOS, not Linux.

The way that open source developers are less accountable to end users is reminiscent of the way that non-profit organizations are less accountable to their clients. Take away the profit motive, and you reduce accountability to the people you are supposed to be serving.

Still, the business environment is conducive to firms trying to expose more of their software outside the firm. When a major business need is to exchange data with outside entities, you do not want your proprietary software to be a barrier to doing that.

A local college computer teacher, whose name I have forgotten (I briefly hired him as a consultant but fired him quickly because he was disruptive), used to make the outstanding point that the essential core of computer programming is parsing. There is a sense in which pretty much every chunk of computer code does the job of extracting the characters in a string and doing something with them.

Computer programs don’t work by magic. They work by parsing. In principle, you can reverse engineer any program without having to see the code. Just watch what it takes in and what it spits out. In fact, the code itself is often inscrutable to any person who did not recently work on it.
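The "everything is parsing" point can be illustrated with a small example of my own: even a trivial program spends its time pulling characters out of a string and acting on them.

```python
def parse_key_values(text: str) -> dict:
    """Extract key=value pairs from a string like 'a=1; b=2' -- the kind
    of character-level extraction that underlies most real programs."""
    result = {}
    for chunk in text.split(";"):
        if "=" in chunk:
            key, _, value = chunk.partition("=")
            result[key.strip()] = value.strip()
    return result

print(parse_key_values("host=example.com; port=8080"))
# {'host': 'example.com', 'port': '8080'}
```

And the behavioral point holds here too: you could reconstruct what this function does purely by feeding it strings and watching what comes out, without ever reading the code.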

Ironically, some of the most inscrutable code of all is written in Perl, the open-source language that was a big hit in the late 1990s and was an O’Reilly fave. If you want to reverse-engineer someone else’s Perl script (or your own if it’s been more than a couple of years since you worked on it), examining the code just wastes your time.

There are just two types of protection for proprietary software. One is complexity. Some software, like Microsoft Windows or Apple iOS, is so complex that it would be crazy to try to reverse engineer it. The other form of protection is legal. You can file for a patent on your software and then sue anybody who comes up with something similar. Amazon famously tried to do that with its “1-Click” ordering button.

In today’s world, with data exchange such a crucial business function, you do not want to hide all of your information systems. You want to expose a big chunk of them to partners and consumers. The trick is to expose your software to other folks in ways that encourage them to enhance its value rather than steal it. Over the past twenty years, the increase in the extent to which corporations use software that is not fully hidden reflects the increase in data sharing in the business environment, not some magical properties of open source software.

Why pick on sociology?

A reader asks,

Why do economists have such contempt for sociologists?

…I was thinking of this because of your posts on “normative sociology”

1. The term “normative sociology” comes from Robert Nozick, and he described it as the study of what the causes of problems ought to be. I use it as shorthand for ideologically biased social research, in any discipline.

2. Mainstream economists do have contempt for sociology. When Robert Solow wanted to write about the causes of sticky wages, he apologized for doing “amateur sociology.”

Mainstream economists see themselves as studying phenomena that are tangible and quantifiable. I define sociology as the study of informal authority, and informal authority is inherently intangible and less readily quantifiable. Where mainstream economists can go wrong is to dismiss phenomena that are intangible and less readily quantifiable as unimportant. I think that mainstream economists are less scornful of such phenomena now than they were when I was in graduate school, so on that score the contempt for sociologists probably has trended down.

My own concern with sociologists is with the preponderance of left-wing bias embedded in much research. But I have been predicting that economics will go down that same path.

More of my thoughts can be found at The Sociology of Sociologists and How Effective is Economic Theory?

A negative review of Thomas Leonard

In the important Journal of Economic Literature, Marshall I. Steinbaum and Bernard A. Weisberger write (gated, unless you are a member of the American Economic Association),

Motivated history is not good history. And the approach the book takes is particularly unlikely to yield fruitful insight: sweeping statements about what “the progressives” believed, festooned with cherry-picked quotes and out-of-context examples, without much of a hearing for either their opponents or for debate and disagreement among themselves. The result is a powerful brief arguing that the intellectual movement of that era has a decidedly problematic legacy on eugenics, racism, gender equality, immigration, and in countless other ways that would give pause to anyone looking to elevate their legacy. But all, or at least much, of that history was known—revealed decades ago

The book to which they refer is Illiberal Reformers, which I reviewed here.

In the paragraph above, the opening sentences would lead one to believe that Leonard’s account is not accurate, but then the phrase “known—revealed decades ago” would lead one to believe that it is accurate.

I wish that the authors had listed some of the “cherry-picked quotes” and “out-of-context examples.” I finished the review without seeing any.

In my review, I wrote

Leonard also point[s] out that racism was not the exclusive province of Progressives. He notes the Anglo-Saxonism of Senator Henry Cabot Lodge and other conservatives

The authors of the JEL review claim instead that Leonard only singles out racists on the Progressive side.

I think my review better reflects the contents of the book. But as academic economics proceeds along its road toward left-wing sociology, it hardly surprises me to see the Journal of Economic Literature publish essays that are uncharitable to those on the right.