Price Stickiness is Only One Coordination Failure

Steven Randy Waldman writes,

For both firms and individuals, resistance to downward price adjustment is often rational, even when at a macroeconomic level universal downward adjustment would be desirable (perhaps because the central bank and/or state have failed to accommodate the expected path of nominal incomes, perhaps because nominal exchange rates are rigidly misaligned). If we could wave a magic wand and have wages, prices, and especially debts all simultaneously scale downward, that might be awesome. But, unfortunately, we can’t.

I think both Tyler Cowen and Mark Thoma pointed to this post. Read the whole thing.

The problem with the macroeconomic perspective is that when you think of the economy as a GDP factory, the only reason you can think of for it not to operate at capacity is that the ratio of the money supply (M) to the price level (P) is too low. Instead, if you think in terms of PSST (patterns of specialization and trade), you can think of all sorts of reasons for coordination failure.

The chains of production are really long and complex. Somebody has a job doing “business development” for a company trying to make money out of an app. That job is so far from producing widgets that it is ridiculous.

In addition, pretty much everything we buy is discretionary. The seller of almost any product or service could wake up tomorrow and find the demand for that product or service poised to fall off. Need I cite landline telephones, retail music stores, or taxi drivers?

In the PSST story, the rigidities that matter are the burdens of trying to start a new business and the reluctance of people to relocate and to change occupations. The ratio of M to P just doesn’t amount to a hill of beans in an economy that depends on deep, complex coordination in the market process.

Kirzner vs. Samuelson

David Glasner writes,

In Kirzner’s view, the divergence between Mises and Hayek on the one hand and the neoclassical mainstream on the other was that Mises and Hayek went further in developing the subjectivist paradigm underlying the marginal-utility theory of value introduced by Jevons, Menger, and Walras in opposition to the physicalist, real-cost, theory of value inherited from Smith, Ricardo, Mill, and other economists of the classical school.

…as the neoclassical research program evolved, the subjective character of the underlying theory was increasingly de-emphasized, a de-emphasis that was probably driven by two factors: 1) the profoundly paradoxical nature of the idea that value determines cost, not the reverse, and 2) the mathematicization of economics … The false impression was created that economics was an objective science like physics, and that economics should aim to create objective and deterministic scientific representations (models) of complex economic systems that could then yield quantitatively precise predictions, in the same way that physics produced models of planetary motion yielding quantitatively precise predictions.

neoclassical economists who developed this deterministic version of economic theory, a version wonderfully expounded in Samuelson’s Foundations of Economic Analysis

Pointer from Mark Thoma (!). Read the whole post, which refers to this lecture by Israel Kirzner. (Kirzner starts about 17 minutes in.)

The Samuelson tradition keeps wanting to treat production technology as known and costs as objective. It is long on math and short on philosophy. For an exploration of subjective cost, see James Buchanan’s Cost and Choice. For an analysis of the ideological implications of subjective cost, see my essay.

Paul Romer on Physics and Information

He writes,

There is a crucial distinction between human capital (stored in neurons), and codified information (stored in some external form, such as printed text or bits on a hard drive).

Anything stored in neurons is a rival good.

A person’s human capital is fully excludable as long as people have legal control over their own bodies. So there are no human capital “spillovers” and no human capital “externalities.”

As the cost of copying codified information goes to zero, it becomes a pure nonrival good.

Pointer from Mark Thoma. Read the whole thing. It relates to my bumper-sticker saying: Information wants to be free, but people need to get paid.

Housing Demand and Down Payment Requirements

Andreas Fuster and Basit Zafar write (note: WTP – “willingness to pay”),

we find that on average, WTP increases by about 15 percent when households can make a down payment as low as 5 percent of the purchase price instead of having to put down 20 percent. However, this average masks large differences in sensitivity across households. In fact, almost half the respondents do not change their WTP at all when the required down payment is lowered. On the other hand, many respondents increase their WTP very strongly in the second scenario with the lower down payment requirement. This is particularly true for respondents who are current renters (and often relatively less wealthy): their WTP on average increases by more than 40 percent. They also tend to choose lower down payment fractions than current owners; for instance, 59 percent of renters but only 36 percent of owners choose a down payment fraction of 10 percent or lower.

As Mark Thoma says, this is not a surprise. The question is whether this means that government policies to encourage lower down payments are a good idea. I think not, since such policies encourage speculative purchases of houses and make house prices more volatile.
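The mechanism can be sketched with a toy budget constraint (all numbers here are my assumptions for illustration, not figures from the Fuster-Zafar survey): for a buyer whose bid is limited by cash on hand rather than by income, the maximum affordable price scales inversely with the required down payment fraction.

```python
# Toy budget constraint: a buyer limited by cash on hand can bid at most
# savings / down_payment_fraction, so a lower required fraction raises
# the maximum affordable price. Numbers below are illustrative assumptions.

def max_price(savings: float, down_payment_fraction: float) -> float:
    """Highest purchase price whose required down payment the buyer can cover."""
    return savings / down_payment_fraction

renter_savings = 20_000.0  # assumed modest savings for a cash-constrained renter

bid_at_20_pct = max_price(renter_savings, 0.20)
bid_at_5_pct = max_price(renter_savings, 0.05)

print(f"max bid at 20% down: {bid_at_20_pct:,.0f}")
print(f"max bid at  5% down: {bid_at_5_pct:,.0f}")
```

Under these assumptions the cap quadruples when the required fraction falls from 20 to 5 percent, while a wealthier owner whose bid is limited by income or taste sees no change at all, which is consistent with the heterogeneity the survey reports.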

If you want periods in which people over-pay for housing to alternate with periods of retrenchment, then letting people buy with little or no money down is the way to go. If you want sensible policies to build wealth among households below the top of the income ladder, then you would subsidize saving. But that idea goes nowhere with the real estate lobby, which dictates policy in this area.

When Economists Were Right, Allegedly

Richard Baldwin writes,

Barry Eichengreen added specificity to this in January 2009 with his insightful column “Was the euro a mistake?”, noting: “What started as the Subprime Crisis in 2007 and morphed into the Global Credit Crisis in 2008 has become the Euro Crisis in 2009. Sober people are now contemplating whether a Eurozone member such as Greece might default on its debt.” He wrote that the alternative to default was “fiscal retrenchment, wage reductions, and assistance from the EU and the IMF for the cash-strapped government.”

He predicted – again dead on – that “[t]here will be demonstrations against the fiscal cuts and wage reductions. Politicians will lose support and governments will fall. The EU will resist providing financial assistance for its more troublesome members. But, ultimately, everyone will swallow hard and proceed … In the end, the EU will overcome its bailout aversion.” The farsightedness is astounding. In January 2009, few knew the Greeks had a problem serious enough to require debt restructuring.

Pointer from Brad DeLong, via Mark Thoma.

That sounds impressive. He also cites other economists. But a few cautionary notes are in order.

1. The best way to develop a reputation as far-sighted is to make many vague, conditional predictions. Later, you call attention to those that sound correct, and if necessary you wiggle out of those that sound incorrect by pointing to the conditions or taking advantage of their vagueness. I am not accusing Eichengreen of doing this; I have others in mind. But what might Baldwin have found had he searched through past articles and looked for bad predictions?

2. How best to generalize this point? My guess is that it is not “Economists’ predictions should always be taken as gospel.”

3. Is the correct lesson that we should pay attention when economists warn about sovereign debt issues? Consider that many of us have issued warnings about the United States.

Capital Indivisibility as a Theory of the Firm

Cameron Murray writes,

If it was the division of labour that leads to increased productivity, labour could just as easily be divided between firms. The fact that pin factories, even with only ten men, still performed all 18 tasks, instead of specialising in just 10 tasks, is clear evidence that there is something special and coordinated about the tasks themselves that arise from the particular capital investments. The tools and machines are designed to be compatible with each other, and if part of the process is done outside the firm, each of the two firms would inevitably be tied to the same compatible capital equipment, and would therefore find gains by merging into a single firm.

Pointer from Mark Thoma.

According to Alchian and Demsetz, Murray’s first sentence is false. If the value of labor lies in the workers’ combined product, rather than in each individual task, someone must manage the production process and allocate payments to individual workers. Remember, in an Alchian-Demsetz firm, marginal product is not defined.
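The Alchian-Demsetz point about team production can be put in symbols (my gloss, not their notation): output is a non-separable function of the inputs,

```latex
q = f(x_1, x_2, \dots, x_n) \neq \sum_{i=1}^{n} g_i(x_i)
```

Because no additive decomposition exists, individual contributions cannot simply be read off from output; someone has to monitor effort and allocate payments, which is precisely the managerial role described above.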

I agree with Murray that if two firms have complementary capital, then there is an incentive to merge, at least if one or both firms lack many other choices for partners in production. But what Murray sees as the gains from merging firms that use compatible capital equipment would also arise from merging firms whose workers undertake complementary tasks in a production process.

The Computer as Economic Metaphor

Cesar Hidalgo says,

So countries with a lot of trust and good institutions can create very complex computers that are able to process large volumes of information and create complex products that are rare and have a big premium on the market. So by thinking of economies in terms of information and computation, you can also connect institutions with the mix of products that countries make and with wealth. A social network is nothing other than a distributed computer.

Pointer from Mark Thoma. Read the whole interview. Perhaps he is one of those fellows who sounds deep and profound but is not really saying anything.

But I think that there is some significance in the availability of the computer and the Internet as a metaphor. In 1960, machines were the most salient sources of metaphors, and so economists thought in mechanistic terms. As we start to expand our use of computers and networks as metaphors, I think this affects how we view the economy. In some sense, the emphasis on institutions and other components of what Nick Schulz and I call the “software” of the economy reflects insights that are more likely to occur to economists living in the computer age.

More Essential Hayek

Again, Don Boudreaux’s book will be released next week.

Another point Boudreaux makes is that in a specialized economy, our production activities are much narrower than our consumption activities. This makes rent-seeking more prevalent on the production side.

This point is easily missed. For example, Stephen G. Cecchetti and Kermit L. Schoenholtz write about the mortgage interest deduction as if its political strength comes entirely from home owners. (Pointer from Mark Thoma.) In fact, I would argue that it is the National Association of Realtors (NAR), the National Association of Home Builders (NAHB), and the Mortgage Bankers Association (MBA) that make it inviolable.

We know that food stamps are popular with the farm lobby. And perhaps Medicaid does not benefit recipients as much as it does providers of medical services.

Paul Romer Issues a Clarification

He writes,

I wrote that the economists I criticize for using mathiness are engaged in a campaign of ACADEMIC politics, not one of national politics. Whatever was true in the past, the fight now is over ACADEMIC group identity.

Pointer from Mark Thoma. Read the whole thing. My remarks:

1. As I wrote in my earlier comment on Romer, I see monopolistic competition as prevalent. Perhaps the Chicago school would want to argue that even though in practice we do not see perfect competition, if you make predictions assuming perfect competition, you will typically be correct. But I do not want to speak for Chicago.

2. Romer seems to want to march under the banner of “science” in economics, and I am skeptical of that. Reader Adam Gurri pointed me to an entire book of essays that take such a skeptical position. I am not sure that the essays speak to me, but I am still pondering.

3. In my view, as the economics profession has grown stronger in math, it has grown weaker in epistemology. That is, the generations of economists that came after Samuelson and Solow lost the ability to ask “How do we know that?” They are content to re-use equations simply because they can be found in prominent publications, but (as Noah Smith has pointed out) not because they have been verified empirically, as they would be in physics or another hard science.

There is a slight overlap between Romer’s critique and mine. Romer is saying that economists are choosing models in order to maintain “group cohesion.” I say that they are choosing models based on appeals to authority.

What I wish to claim is that epistemology in economics is really difficult. It is more difficult than in physics. We have a much harder time testing our theories experimentally. We face insurmountable levels of causal density. We do not have a neat, clean answer to the question “How do you know that?” It appears to me that physicists can answer that question in ways that are much more straightforward and compelling. (I am thinking of physics at a high school level. Maybe at the research frontier physics also faces epistemological challenges.)

Because epistemology in economics is really difficult, I think that if you care about epistemology, you are going to find much published research in economics frustrating. That will be true for articles that avoid math as well as for articles that use math.

Mathiness, Starting in 1937

Noah Smith writes,

Macroeconomic theory is chock full of mathiness. It’s not just Lucas and Prescott, it’s the whole scientific culture of the field.

I think you find this going all the way back to John Hicks’ famous 1937 paper, “Mr. Keynes and the Classics.”

The “i” in this model could be a short-term interest rate, or it could be a long-term interest rate. It could be a risk-free rate, or it could be a risky rate. It could be a nominal rate, or it could be a real rate.

And, as Smith points out once again, none of the equations in the IS-LM model, or any other mathematical macro model, has any demonstrated empirical validity. The equations are, at best, a way of organizing and expressing the economist’s opinions about macro.
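For reference, a standard textbook rendering of the Hicksian system (the conventional modern formulation, not Hicks’s exact 1937 notation) makes Smith’s point concrete, since the same undifferentiated symbol i appears in both equations:

```latex
\text{IS:} \quad Y = C(Y - T) + I(i) + G
\qquad
\text{LM:} \quad \frac{M}{P} = L(i, Y)
```

Nothing in the system itself says whether i is short-term or long-term, risky or risk-free, nominal or real; those choices are supplied by the economist’s judgment, not by the model.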

My own opinion, as you know, is that thinking about the economy as if it were a single business (or as a single consumer who also runs a single business) is wrong-footed from the very start. Instead, I believe that it is in the shifting kaleidoscope of patterns of specialization and trade among multitudes of businesses that employment fluctuations take place.

It is fascinating to me that there are critics who will not buy the PSST story until they see it expressed using math. To me, that is as beside the point as arguing that it has no validity unless it can be told in Latin or Swahili or Yiddish.