The paradox of software development

I finished Tim O’Reilly’s WTF. For the most part, his discussion of the way that the evolution of technology affects the business environment is really insightful. This is particularly true around chapter 6, where he describes how companies try to manage the process of software development.

I like to say that computer programming is easy and software development is hard. One or two people can write a powerful set of programs. Getting a large group of people to collaborate on a complex system is a different and larger challenge.

It is like an economy. We know that the division of labor makes people more productive. We know that some of the division of labor comes from roundabout production, meaning producing a final output by using inputs that are themselves produced (also known as capital). Having more people involved in an economy increases the opportunities to take advantage of the division of labor and roundabout production. However, the more people are involved, the more challenging are the problems of coordination.

O’Reilly describes Amazon as being able to handle the coordination problem in software development by dividing a complex system into small teams. You might think, “Aha! That’s the solution, Duh!” But as he points out, dividing the work among different groups of programmers was the strategy used in building the original healthcare.gov, with famously disastrous results. You risk doing the equivalent of having one team start to build a bridge from the north bank of a river and another team start to build from the south bank, and because of a misunderstanding their structures fail to meet in the middle.

He suggests that Amazon avoids such pitfalls by using what I would call a “document first” strategy. The natural tendency in programming is to wait until the program is working to document it. You go back and insert comments in the code explaining why you did what you did. You give users tips and warnings.

With disciplined software development, you try to document things early in the process rather than late. Before you start coding, you undertake design. Before you design, you gather requirements. I’m oversimplifying, but you get the point.

As O’Reilly describes it, Amazon uses a super-disciplined process, which he calls the promise method. The final user documentation comes first. Each team’s user documentation represents a promise. I’ve sketched the idea in a couple of sentences, but O’Reilly goes into more detail and also references entire books on the promise method.
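To make the "document first" idea concrete, here is a minimal sketch of my own (all names are hypothetical, and this is not a description of Amazon's actual process): the promise is written as user-facing documentation on an interface before any implementation exists, and other teams build against that promise.

```python
# Hypothetical sketch of "documentation first": the promise is the docstring,
# written and agreed on before anyone writes an implementation.
from abc import ABC, abstractmethod


class OrderNotFoundError(Exception):
    """Raised when no order exists for the given ID."""


class OrderStatusService(ABC):
    """Promise to other teams: given an order ID, report its status.

    - Returns one of "pending", "shipped", or "delivered".
    - Raises OrderNotFoundError for unknown IDs; never returns None.
    - Any implementation that breaks these promises breaks its consumers.
    """

    @abstractmethod
    def get_status(self, order_id: str) -> str:
        """Return the current status for order_id."""
```

Teams that consume the service code against the documented promise; the implementing team is then accountable for keeping it.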

Why isn’t most software developed in a super-disciplined way? I think it is because software development reflects the organizational culture of a business, and most business cultures are just not that disciplined. They impose on their software developers a combination of unstable requirements and deadline pressure. In practice, the developers cannot solidify requirements early, because they cannot get users to articulate exactly what they want in the first place.

Also, requirements change based on what people experience, and it takes discipline to decide how to handle these discoveries. What must you implement before you release, and what can you put off for the next version?

Consider three methods of software development. All of these have something to be said for them.

1. Document first–specify exactly what each component of the system promises to do.
2. Rapid prototyping–keep coming up with new versions, and learn from each version.
3. Start simple–get a bare-bones system working, then add the more sophisticated features.

If you do (1) without (3), you end up with healthcare.gov. If you do (1) without (2), your process is not agile enough. You stay stuck with the first version that you designed, before you found out the real requirements. If you do (2) and (3) without (1), you get to a point where implementing a minor change requires assembling 50 people to meet regularly for six months in order to unravel the hidden dependencies across different components.

From O’Reilly, I get the sense that Amazon has figured out how to do all three together. That seems like a difficult trick, and it left me curious to know more about how it’s done.

Re-litigating Netscape vs. Microsoft

In WTF, Tim O’Reilly writes,

Netscape, built to commercialize the web browser, had decided to provide the source code to its browser as a free software project using the name Mozilla. Under competitive pressure from Microsoft, which had built a browser of its own and had given it away for free (but without source code) in order to “cut off Netscape’s air supply,” Netscape had no choice but to go back to the web’s free software roots.

This is such an attractive myth that it just won’t die. I have been complaining about it for many years now.

The reality is that Netscape just could not build reliable software. I know from bitter personal experience that their web servers, which were supposed to be the main revenue source for the company, did not work. And indeed Netscape never used its server to run its own web site. They never “ate their own dog food,” in tech parlance.

On the browser side, Netscape had a keen sense of what new features would enhance the Web as an interactive environment. They came up with “cookies,” so that when you visit a web site it can leave a trace of itself on your computer for later reference when you return. They came up with JavaScript, a much-maligned but ingenious tool for making web pages more powerful.

But Netscape’s feature-creation strategy backfired because they couldn’t write decent code. Things played out this way.

1. Netscape would introduce a feature into the web browser.
2. An Internet standards committee would bless the feature, declaring it a standard.
3. Microsoft would make Internet Explorer standards-compliant, so that the feature would work.
4. The feature would fail to work on the Netscape browser.

In short, Netscape kept launching standards battles and Microsoft kept winning them, not by obstructing Netscape’s proposed standards but by implementing them. Netscape’s software developers were too incompetent to build a browser that complied with the company’s own proposed standards.

I’m sure that if Netscape could have developed software successfully in-house, they would have done so. But because they could not manage software development internally, they just gave up and handed the browser project over to the open source community. And need I add that the most popular browser is not the open source Mozilla but the proprietary Chrome?

Here is one of my favorite old essays on the Microsoft-Netscape battle.

Re-litigating Open Source Software

In his new book, Tim O’Reilly reminisces fondly about the origins of “open source” software, which he dates to 1998. Well he might, for his publishing company made much of its fortune selling books about various open source languages.

In contrast, in April of 1999, I called open source The User Disenfranchisement Movement.

…The ultimate appeal of “open source” is not the ability to overthrow Microsoft. It is not to implement some socialist utopian ideal in which idealism replaces greed. The allure of the “open source” movement is the way that it dismisses that most irksome character, the ordinary user.

In that essay, I wrongly predicted that web servers would be taken over by proprietary software. But that is because I wrongly predicted that ordinary civilians would run web servers. Otherwise, that essay holds up. In the consumer market, you see Windows and MacOS, not Linux.

The way that open source developers are less accountable to end users is reminiscent of the way that non-profit organizations are less accountable to their clients. Take away the profit motive, and you reduce accountability to the people you are supposed to be serving.

Still, the business environment is conducive to firms trying to expose more of their software outside the firm. When a major business need is to exchange data with outside entities, you do not want your proprietary software to be a barrier to doing that.

A local college computer teacher, whose name I have forgotten (I briefly hired him as a consultant but fired him quickly because he was disruptive), used to make the outstanding point that the essential core of computer programming is parsing. There is a sense in which pretty much every chunk of computer code does the job of extracting the characters in a string and doing something with them.

Computer programs don’t work by magic. They work by parsing. In principle, you can reverse engineer any program without having to see the code. Just watch what it takes in and what it spits out. In fact, the code itself is often inscrutable to any person who did not recently work on it.
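As a toy illustration of the parsing point (my own example, not drawn from any particular program): even a few lines of code that read something like a "key=value" configuration line are mostly in the business of pulling characters out of a string and acting on them.

```python
# Toy example: like most code, this spends its effort extracting characters
# from a string and deciding what to do with them.
def parse_config_line(line: str) -> tuple[str, str]:
    """Parse a 'key=value' line into a (key, value) pair."""
    key, sep, value = line.strip().partition("=")
    if not sep or not key.strip():
        raise ValueError(f"cannot parse line: {line!r}")
    return key.strip(), value.strip()


print(parse_config_line("timeout = 30"))  # ('timeout', '30')
```

Watching what such a routine accepts and what it emits tells you most of what you need to know about it, which is the sense in which a program can be reverse engineered from the outside.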

Ironically, some of the most inscrutable code of all is written in Perl, the open-source language that was a big hit in the late 1990s and was an O’Reilly fave. If you want to reverse-engineer someone else’s Perl script (or your own if it’s been more than a couple of years since you worked on it), examining the code just wastes your time.

There are just two types of protection for proprietary software. One is complexity. Some software, like Microsoft Windows or Apple iOS, is so complex that it would be crazy to try to reverse engineer it. The other form of protection is legal. You can file for a patent for your software and then sue anybody who comes up with something similar. Amazon famously tried to do that with its “1-Click” ordering button.

In today’s world, with data exchange such a crucial business function, you do not want to hide all of your information systems. You want to expose a big chunk of them to partners and consumers. The trick is to expose your software to other folks in ways that encourage them to enhance its value rather than steal its value. Over the past twenty years, the increase in the extent to which corporations use software that is not fully hidden is a reflection of the increase in data sharing in the business environment, not of some magical properties of open source software.

Suggestions for Facebook

On the one hand, Ben Thompson writes,

Facebook should increase requirements for authenticity from all advertisers, at least those that spend significant amounts of money or place a large number of ads. I do believe it is important to make it easy for small companies to come online as advertisers, so perhaps documentation could be required for a $1,000+ ad buy, or a cumulative $5,000, or after 10 ads (these are just guesses; Facebook should have a much clearer idea what levels will increase the hassle for bad actors yet make the platform accessible to small businesses). This will make it more difficult for bad actors in elections of all kinds, or those pushing scummy advertising generally.

On the other hand, John Tamny writes,

Facebook is a free service. Robinson’s decision to sign up for what is free in no way entitles her to knowledge about and control of the advertisements sold by the free service. If she feels as though “shadowy foreign interests” buying ads on the social network somehow altered her policy views, then she should quit Facebook altogether. No one charged her to set up a Facebook page, no one forced her to, so if she’s bothered by an income stream that enables the site’s free-of-charge feature, she’s obviously free to close her account.

Those who want to regulate Facebook are not afraid of how they use it themselves. They are afraid of how others use it. This is a classic case of Fear Of Others’ Liberty. FOOL is the root of nearly all regulation.

Tamny is telling FOOLs to use exit rather than voice. When you have a valuable entertainment franchise that relies on its reputation, exit can have devastating effects–just ask the NFL. If Facebook implements new policies, I hope it is because those policies help to ward off exit, not because they are needed to ward off regulation.

What I’m Reading

Tim O’Reilly’s new book. He tries to grasp how technology affects the current business environment. He then proceeds to look at the overall economic and social implications. You can get some of the flavor of it by listening to his interview with Russ Roberts. And here is more O’Reilly, where he says,

Microsoft lost leadership because they had taken away the opportunities for their developer ecosystems, so those developers went over to the Internet and to Google. Now, we see this same thing playing out again.

I am not persuaded by these sentences. The Internet was quite a powerful phenomenon. I cannot envision an alternative history in which Microsoft does not lose a lot of its commanding position because of the Internet. You can make a case that Bill Gates could have positioned Microsoft better had he grasped the significance of the Internet sooner, but that would not have changed the game, only made Microsoft a more agile player. And you could argue that whatever Microsoft lost in terms of time, they made up for in terms of spending, so that they wound up doing about as well in the Internet environment as one could reasonably expect.

Overall, I disagree with O’Reilly quite a bit. Early in the book, he writes,

there are far too many companies that are simply using technology to cut costs and boost their stock price

Take this rhetoric and apply it to trade, and it could come from the lips of Donald Trump. In fact, good economists will explain that trade and technology are so intertwined as to be indistinguishable as economic phenomena. Austrian capital theory says that capital is roundabout production, i.e., roundabout trade. Suppose an economy consists of farm equipment and crops, and you want to explain its efficiency. Do you give the credit to farmers applying technology or do you give the credit to trade between the manufacturing sector and the agricultural sector? It’s the same phenomenon, just described differently.

Russ Roberts did not go after O’Reilly on the anti-corporate demagoguery. A charitable interpretation is that Russ wanted to focus on the Internet “platform model” that O’Reilly waxes eloquent about. A less charitable interpretation is that Russ switched to Tyler Cowen’s philosophy of interviewing.

Aaron Ross Powell on the X

He emails,

gotta say, I think you’re completely misreading Apple’s motives for releasing the iPhone X, and so misreading their strategy for the device.

The iPhone X is out now, instead of seeing similar features roll out in a year or two in the “regular” iPhone, because (a) Apple wanted to release something special for the 10th anniversary and (b) they wanted to put out a device that was a look ahead in terms of features, while marketed as premium, in part because they are supply constrained on components (in particular, the OLED screens). Apple has said it expects it’ll be well into 2018 before they can meet demand for the iPhone X. By giving it a premium price, they’re reducing demand, while still netting a profit, and they’re less likely to run into the bad press of the phone taking months to ship to new customers. Tim Cook cares, above all else, for customer satisfaction. (Which is why he mentions their “customer sat” numbers in nearly every keynote.)

There are also plenty of perfectly rational reasons to purchase an iPhone X. It has the same size screen as an 8 Plus, while having a form factor closer to a regular 8. That’s a big difference. It has a considerably better screen—and a considerably better screen than any Android phone on the market. It sports a better camera, with optical image stabilization on both lenses, instead of just one like on the iPhone 8 Plus. It has FaceID. To a lot of people, these are likely worth a $300 premium. (I’d also predict that Apple’s profit on the iPhone 8 is about the same as the iPhone X, or even slightly higher, given the increased cost of the cutting edge components in the X. At any rate, it’s highly unlikely Apple is “price gouging.”)

As to old devices, the “Apple is forcing us to upgrade” refrain is common, but there’s never been any evidence to support it. iOS 11, released today, supports devices all the way back to the iPhone 5s, released in 2013. And Apple will never pull something like “You can only get your backups if you buy a faster phone.” You might need a new phone to run the latest OS, of course, but they won’t turn off iCloud access in prior OS versions. It’s not how they roll—and they’ve certainly never pulled anything like that in the past.

I probably should not have taken such an uncharitable view of Apple’s motives. It’s the motives of Apple’s consumers that I find suspect.

If your current phone is working, then buying the X means that you think that at the margin a better screen and a better camera are worth a thousand bucks. Is that the best use one can make of a thousand dollars?

If Aaron is right that Apple is facing supply challenges, then the price should come down when the supply situation improves. To me, that makes buying the X now even less rational.

The iPhone X

Ben Thompson writes,

The iPhone X sells to two of the markets I identified above:

  • Customers who want the best possible phone
  • Customers who want the prestige of owning the highest-status phone on the market

Note that both of these markets are relatively price-insensitive; to that end, $999 (or, more realistically, $1149 for the 256 GB model) isn’t really an obstacle. For the latter market, it’s arguably a positive.

Thompson is giving irrational reasons for people to buy this phone, and maybe those will be sufficient. But no review that I have seen has made a use case for it, and for many people $1000 is real money. If consumers behaved rationally, then the iPhone X would join New Coke in the annals of product rollouts.

It seems to me that Apple is not going to convince Android users to switch to this new phone. So basically their goal is to gouge their existing customers when they need to replace their phones. They figure that people are afraid of losing important information if they switch from iPhone to Android, so they are picking a price point for the 8 that they think their existing users will suck up and pay. And they are hoping that these replacers will say to themselves, “Shucks, as long as we’re paying that much money, why not throw in a few hundred bucks more and get the X?” We’ll see.

If my hypothesis about gouging existing customers is correct, then one would predict that Apple will deliberately deprecate old phones in order to coerce users into upgrading. You can expect to see “we no longer support…” whenever they think they can get away with it. Upgrade your iPhone 7 to the latest version of iOS? No can do. Data backup? Sorry, does not work on old phones anymore. Etc.

Price Discrimination Explains Everything

Alex Tabarrok writes,

How could Tesla increase the mileage at the flick of a switch? The answer is that owners of the Tesla 60kWh version of its Model S and Model X actually have the same battery as the 75kWh vehicles but the battery has been purposely limited or “damaged” to provide only 60kWh of mileage. But why would Tesla damage its own vehicles?

The answer to the second question is price discrimination! Tesla knows that some of its customers are willing to pay more for a Tesla than others… Tesla must find some characteristic of buyers that is correlated with high willingness-to-pay and charge more to customers with that characteristic.

He cites Deneckere and McAfee on damaged goods as price discrimination. I think that Varian and Shapiro would prefer to just call it “versioning,” and of course their classic Information Rules is mostly about price discrimination in a world with low variable costs. And if you think that price discrimination is a new phenomenon in the auto industry, I’ve got an early 1960s Pontiac to sell you.
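As a purely hypothetical sketch (the names and numbers below are mine, not Tesla's actual firmware), the mechanism behind this kind of versioning is trivial to express in software: every unit ships with the same hardware, and a configuration value caps what the cheaper trim is allowed to use.

```python
# Hypothetical illustration of "versioning": identical 75 kWh hardware in every
# car; a software setting caps usable capacity for the cheaper trim.
from dataclasses import dataclass

PHYSICAL_CAPACITY_KWH = 75.0  # what the battery can actually hold


@dataclass
class BatteryConfig:
    trim: str
    usable_kwh: float


def configure_battery(trim: str) -> BatteryConfig:
    """Return the software-enforced capacity for a trim level."""
    caps = {"60": 60.0, "75": PHYSICAL_CAPACITY_KWH}
    return BatteryConfig(trim=trim, usable_kwh=caps[trim])


print(configure_battery("60"))  # same hardware, capped at 60 kWh
print(configure_battery("75"))  # full capacity unlocked
```

Raising the cap later is just changing one number remotely, which is why the “flick of a switch” in the quoted passage is possible, and why the cheaper version costs essentially nothing extra to “manufacture.”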

Variable costs approach zero

Jan De Loecker and Jan Eeckhout write,

We document the evolution of markups based on firm-level data for the US economy since 1950. Initially, markups are stable, even slightly decreasing. In 1980, average markups start to rise from 18% above marginal cost to 67% now. There is no strong pattern across industries, though markups tend to be higher, across all sectors of the economy, in smaller firms and most of the increase is due to an increase within industry. We do see a notable change in the distribution of markups with the increase exclusively due to a sharp increase in high markup firms.

Tyler Cowen brought up the paper in order to criticize it. Greg Ip covered the controversy.

Variable costs are costs that increase as the business produces more output. They include costs of materials and the labor cost that is involved in direct production. My explanation for the two-Jans result is that variable costs are tending toward zero in many industries. (I think that this is also Tyler’s explanation, but I prefer to use more specific examples and less technical jargon.)
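To see the arithmetic (my own illustration, taking the markup to be the ratio of price to marginal cost, which is how the quoted percentages read):

\[
\mu \;=\; \frac{P}{MC}, \qquad MC \to 0 \;\Rightarrow\; \mu \to \infty.
\]

For example, a price of $1.18 against a marginal cost of $1.00 is the 18 percent markup of 1980, while the same price against a marginal cost of roughly $0.71 gives today's 67 percent; push the marginal cost toward zero and the measured markup rises without bound.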

My notes on the topic.

1. Fifteen years ago, I noticed the trend toward declining variable costs. I wrote an essay called asymptotically free goods, “where research and development costs are high, but the marginal cost of the final product or service is low.” Think of a pharmaceutical that is expensive to develop but cheap to manufacture. Think of cell phone service providers, where the marginal cost of transmitting another gigabyte of data is close to zero. Think of a hospital, where most of the cost is overhead (if the amount of medical services that a hospital were to supply on a given day declined by 1 percent, the amount by which its actual costs would decline is close to zero). Think of an Internet service, such as Facebook, with high costs of development and maintaining a data center but with extremely low cost of adding another user.

The point of the essay is that under marginal cost pricing, these would be free goods. If variable cost approaches zero, then markup over variable cost approaches 100 percent. [update: a commenter points out that this statement was in error. The ratio of price to variable cost approaches infinity as variable cost approaches zero.] (In the case of Facebook, the marginal cost of serving an ad is close to zero, so nearly all of what it charges advertisers is markup.)

2. When I taught economics in high school, I would say that “price discrimination explains everything.” That is because most businesses do not operate in the textbook world of perfect competition. Instead, firms are focused on recovering fixed costs. To do so, they apply different markups to slightly different versions of products, trying to recover more fixed costs from the less price-sensitive buyers. That is why movie theaters charge so much for popcorn, why airlines have different classes of seats, why cable TV providers offer bundles, and so on.

3. In manufacturing, the share of production workers is declining, while the share of non-production workers is increasing. Overall, we are producing more output with fewer workers on the assembly line (and I would guess that materials costs also are lower).

4. My guess is that, if anything, the two-Jans paper understates the trend toward high markups, because most corporate data allocates more labor to variable cost than really belongs there. Garett Jones pointed out that these days most workers do not produce widgets. Instead, they produce organizational capital. Garett Jones workers are part of overhead, not variable cost.

5. In textbook economics, the term “monopoly power” is pretty much by definition the ability to charge a price above marginal cost. By that definition, it is very hard to think of real-world businesses that do not have monopoly power. If you want to say that the textbook model of perfect competition is baloney sandwich, I would have to agree with you.

6. But lack of perfect competition does not mean that government regulators know better.

7. Lack of perfect competition does not mean that there is no market discipline. There is still competitive discipline, but a lot of it comes in the form of creative destruction rather than in the form of prices being driven down to marginal costs by copycat entry.

8. Government intervention can easily take the form of trying to stop creative destruction. For example, regulators could demand that autonomous vehicles be accident-free, rather than merely less dangerous on average than human-driven cars.