Why Measure GDP?

EconoSpeak writes,

The questions we need to ask are: What do we really want to know and why? What purposes were we pursuing when we sought to measure economic activity? Is measuring GDP helping to achieve those purposes? Are those purposes still our priorities? If not, what should be? What different institutions might we invent to achieve our purposes as we NOW understand them?

Pointer from Mark Thoma, whose column stimulated the post quoted above.

Some possible reasons to measure GDP:

1. To provide an indicator of the economy’s capacity to produce the goods needed to win a war (including necessary consumer goods as well as arms).

2. To provide a measure of the economy’s ability to provide for consumer welfare.

3. To compare productivity across countries and over time.

4. To indicate the extent to which an economy is in a recession.

5. To measure economic activity at market prices.

I think that (1) would have been most useful around the time of World War II, when the outcome was very much affected by this sort of productive capacity. It probably is less useful today.

I think that (2) is a very interesting measure. But (a) why not just focus on goods and services consumed? (b) You need to think a lot harder about how to measure consumers’ surplus. (c) You have to think a lot harder about how to measure the consumption services from durable goods, particularly housing. (d) You need to think a lot harder about what Thoma refers to as “bads,” like pollution.

I think that (3) is useful, but we should stop pretending that such comparisons are accurate to even two significant figures. When someone says that productivity growth changed from X over a five-year period to Y over the subsequent five-year period, their view of the signal-to-noise ratio in the data is much more optimistic than mine.

I think that (4) relies too much on the AS-AD framework, to which I do not subscribe.

I think that (5) is useful, but our current approach is wrong. Most government services are not sold at market prices, and so I would exclude them from this sort of measure.
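Returning to point (3), a small simulation can show how much measured growth comparisons swing on measurement error alone. The numbers below (a constant 1.5 percent true annual growth rate and a 1 percent measurement error in the level) are made-up assumptions for illustration; the point is only the size of the spurious change.

```python
import numpy as np

rng = np.random.default_rng(2)
true_growth = 0.015   # assumed constant 1.5% true annual growth
noise_sd    = 0.01    # assumed 1% measurement error in the (log) level
trials = 10_000

diffs = []
for _ in range(trials):
    # Log productivity level over 11 years, measured with error.
    t = np.arange(11)
    measured = true_growth * t + rng.normal(0, noise_sd, 11)
    g1 = (measured[5] - measured[0]) / 5    # average growth, years 0-5
    g2 = (measured[10] - measured[5]) / 5   # average growth, years 5-10
    diffs.append(g2 - g1)

# The true change in growth is zero, yet measurement error alone
# produces apparent changes of roughly half a percentage point.
print(f"sd of measured growth change: {np.std(diffs):.4f}")
```

Under these assumptions, a reported shift of half a point in average annual productivity growth between two five-year windows is about the size of pure noise.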

The Quotable Roger Scruton

In Fools, Frauds and Firebrands, he writes about those who condemn the commodification of labor,

are we not tired, by now, of this tautologous condemnation of the free economy, which defines that which can be purchased as a thing and then says that the man who sells his labour, in becoming a thing, ceases to be a person? At any rate, we should recognize that, of all the mendacious defences offered for slavery, this is by far the most pernicious. For what is unpurchased labour, if not the labour of a slave?

1. I am reminded of Milton Friedman’s famous retort to a general defending the draft. The general asks, “Would you want to lead an army of mercenaries?” Friedman replies, “Would you rather lead an army of slaves?”

2. I am reminded of the widespread requirement of high school students to complete hours of “community service” in order to graduate.

Scruton says to the left: Condemn paid labor all you like. It is more voluntary than the alternative.

Separately, on the philosophy of science, Scruton writes,

Philosophers of science are familiar with the thesis of Quine and Duhem, that any theory, suitably revised, can be made consistent with any data, and any data rejected in the interest of theory.

That is certainly my view of macroeconomic theory.

The Academic Ecosystem

About a year ago, Aaron Clauset, Samuel Arbesman, and Daniel B. Larremore wrote

A strong core-periphery pattern has profound implications for the free exchange of ideas. Research interests, collaboration networks, and academic norms are often cemented during doctoral training. Thus, the centralized and highly connected positions of higher-prestige institutions enable substantial influence, via doctoral placement, over the research agendas, research communities, and departmental norms throughout a discipline. The close proximity of the core to the entire network implies that ideas originating in the high-prestige core, regardless of their merit, spread more easily throughout the discipline, whereas ideas originating from low-prestige institutions must filter through many more intermediaries. Reinforcing the association of centrality and insularity with higher prestige, we observe that 68 to 88% of faculty at the top 15% of units received their doctorate from within this group, and only 4 to 7% received their doctorate from below the top 25% of units.

The top graduate schools in any field form a tight club, into which it is nearly impossible to break. This allows disciplines to be easily captured by fads, because the voices that would reveal the emperor’s nakedness are so completely marginalized.

In economics, I have argued that this phenomenon produced the spread of really silly approaches to macroeconomics, which is still a problem in the profession. This topic will be explored more in what I am calling The Book of Arnold.

What was Chicago Economics?

Bueller? Peter J. Boettke and Rosolino A. Candela write,

Chicago price theory in the Friedman/Stigler/Becker generation was not defined by the comparative analysis of the institutional conditions within which the constant adjustments and adaptations by economic actors to changing conditions produces a tendency towards equilibrium, as it had been under the Knight/Simons/Viner generation. Instead, price theory in the hands of Friedman/Stigler/Becker became an exercise in defining the optimality conditions given any situation within which human actors find themselves.

From the conclusion:

The Chicago “Tight Prior Equilibrium” imposes a logical discipline on the world of human affairs, but it does not invite an inquiry into the diversity of institutions that arise to ameliorate our human imperfections and potentially turn situations of conflict into opportunities for social cooperation. As a result, the “fresh water” economics of Chicago still leaves us thirsty, and the “saltwater” economics of MIT/Harvard cannot serve to quench our thirst, so we must look to those alternative streams of thought for satisfaction in our quest to understand the dynamics of the market process.

In short, as I once put it,

Chicago economics: Markets work, so use markets

Harvard-MIT economics: Markets fail, so use government

Masonomist: Markets fail, so use markets

For Masonomist, the authors substitute the ABC’s. They write,

Outside of the University of Chicago, a “neglected” branch of Chicago price theory emerged among economists Armen Alchian, James Buchanan, and Ronald Coase, who provided an understanding of the market economy not by assuming the conditions [of] equilibrium, but by focusing their analysis on the dynamic adjustments required in the presence of market failures. By drawing attention to institutional solutions and the role of entrepreneurial action in discovering such solutions, they illustrated how market processes ameliorate social conflict and open up the possibility of realizing the gains from productive specialization and peaceful cooperation through voluntary exchange. It is this argument we contend that fulfills Simons’ plea for academic economics, and proves to be a better prophylactic against popular fallacies.

Not So Renewable?

Timothy Taylor writes,

annual global production of lithium has more than doubled, from about 16,000 metric tons in 2004 to over 36,000 metric tons by 2014. Even with this rise in quantity produced, the price of a metric ton of lithium carbonate has risen from $5,180 in 2011 to $6,600 in 2014.

He cites a report from Goldman Sachs on emerging themes, one of which is “Lithium is the new gasoline.” (The other claims in the report are also provocative.)

Changing our energy technology does not automatically eliminate scarcity. It is instead a form of substitution.

Debate is not about Debate

Robin Hanson writes,

in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

Or cherry-pick the weakest contrary argument. Or make up straw-man positions for the other side.

Significance Comparisons and Measurement Error

Leilan Shu and Sara Dada report,

We first use a simple linear regression model of average test score and average household income to first establish a positively correlated relationship. This relationship is further analyzed by differentiating for other community-based factors (race, household type, and educational attainment level) in three multiple variable regression models. For comparison and to evaluate any consistencies these variables may have, the regressions were run on data from both 2007 and 2014. In both cases, the final multiple regressions found that average household income was not statistically significant in impacting the average test scores of the counties studied, while household type and educational attainment level were statistically significant.

Pointer from Tyler Cowen. If this were credible, it would seem to suggest that “schooling inequality” is really ability inequality.

BUT…Whenever somebody says that “X1 does better than X2 at predicting Y,” watch out for the impact of measurement error. A variable that is measured with less error will drive out a variable that is measured with more error.

In this case, suppose that the variable that matters is “parents’ resources.” Income could measure that variable. Educational attainment could predict that variable. Income has many sources of measurement error–if nothing else, one year’s income could be high or low due to volatility. Educational attainment has fewer sources of measurement error. So even if parents’ resources is the true cause of children’s test scores, you could wind up with a zero coefficient on income, particularly if you include another regressor with lower measurement error.
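A quick simulation makes this concrete. The setup below is a sketch under made-up assumptions: a latent “parents’ resources” variable drives test scores, income is a noisy proxy for it, and educational attainment is a cleaner proxy. The specific noise levels are arbitrary; what matters is the relative size of the estimated coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent variable that actually drives outcomes: "parents' resources".
resources = rng.normal(0, 1, n)

# Two observed proxies: income is measured with a lot of noise
# (e.g., one-year volatility), education with much less.
income    = resources + rng.normal(0, 2.0, n)   # noisy proxy
education = resources + rng.normal(0, 0.3, n)   # cleaner proxy

# Child test scores depend only on resources, not on either proxy directly.
score = resources + rng.normal(0, 1, n)

# OLS of score on both proxies (plus an intercept).
X = np.column_stack([np.ones(n), income, education])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"income coefficient:    {beta[1]:.3f}")
print(f"education coefficient: {beta[2]:.3f}")
```

Even though income and education are equally valid proxies for the true cause, the coefficient on income comes out near zero while the coefficient on education is large, purely because education is measured with less error.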

And this is one of many reasons to prefer experimental data to regressions.

Chris Blattman on Experiments

In a must-read post, he describes a number of methodological problems with the interpretation of experiments in social science, but says

There’s no problem here if you think that a large number of slightly biased studies are worse than a smaller number of unbiased and more precise studies. But I’m not sure that’s true. My bet is that it’s false. Meanwhile, the momentum of technical advance is pushing us in the direction of fewer studies.

For me, the crux of the issue is this remark from Blattman.

It’s only a slight exaggeration to say that one randomized trial on the shores of Lake Victoria in Kenya led some of the best development economists to argue we need to deworm the world. I make the same mistake all the time.

The way I would put it is that there is no such thing as a study that is so methodologically pure that by itself it can serve as a reliable guide to policy. As I wrote in What Else Would be True?, the results of any study need to be thought about in the context of other knowledge.

Often, one encounters studies with conflicting results. You tend to focus on the methodological flaws only of the studies with results that you do not like. But remember Merle Kling’s third iron law of social science: the methodology is flawed. That law applies to every study, including experiments.

Great Minds and Hive Minds

Scott Alexander on Garett Jones’ book:

Hive Mind’s “central paradox” is why IQ has very little predictive power among individuals, but very high predictive power among nations. Jones’ answer is [long complicated theory of social cooperation]. Why not just “signal-to-noise ratio gets higher as sample size increases”?

Me:

Can we rule out statistical artifact? Put it this way. Suppose we chose 1000 people at random. Then we create 50 groups of them. Group 1 has the 20 lowest IQ scores, Group 2 has the next 20 lowest IQ scores, and so on. Then we run a regression of group average income on group average IQ for this sample of 50 groups. My prediction is that the correlation would be much higher than you would get if you just took the original sample of 1000 and did a correlation of IQ and income. I think that this is because grouped data will filter out noise well. Perhaps the stronger correlation among national averages is just a result of using (crudely) grouped data.
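The thought experiment is easy to simulate. The particular numbers below (a weak 0.5 slope of income on IQ, with large individual noise) are arbitrary assumptions; the qualitative result does not depend on them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

iq = rng.normal(100, 15, n)
# Income depends weakly on IQ, swamped by individual-level noise.
income = 0.5 * iq + rng.normal(0, 30, n)

# Individual-level correlation of IQ and income.
r_indiv = np.corrcoef(iq, income)[0, 1]

# Sort into 50 groups of 20 by IQ, then correlate the group means.
order = np.argsort(iq)
iq_g     = iq[order].reshape(50, 20).mean(axis=1)
income_g = income[order].reshape(50, 20).mean(axis=1)
r_group = np.corrcoef(iq_g, income_g)[0, 1]

print(f"individual-level correlation: {r_indiv:.2f}")
print(f"grouped correlation:          {r_group:.2f}")
```

Averaging within groups shrinks the idiosyncratic income noise by roughly a factor of the square root of the group size, so the grouped correlation comes out far higher than the individual one, with no change in the underlying IQ-income relationship.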

Piketty, Inequality, and Terrorism

The WaPo reports,

The new argument, which Piketty spelled out recently in the French newspaper Le Monde, is this: Inequality is a major driver of Middle Eastern terrorism, including the Islamic State attacks on Paris earlier this month — and Western nations have themselves largely to blame for that inequality.

To say that his views have not been well received might be an understatement. An essay in Quartz says,

But empirical studies suggest that poverty and inequality aren’t behind terror attacks. In the wake of the 9/11 attacks, Alan Krueger, the Princeton economist and future Obama administration official, examined databases of terror attacks to identify trends among the participants. Surprisingly, he found most were well-educated and not poor.

Even if terrorists are not poor, inequality still might cause terrorism. A terrorist could be a rich person who is jealous of people who are even richer.

Still, Piketty is not my type of thinker. As translated by Google, Piketty’s essay begins

It is obvious that terrorism feeds on the Middle Eastern powder keg of inequality

When I make highly speculative statements, I start out by saying “my guess is that,” not “it is obvious that.” Piketty’s approach may work better with some personality types.

When pressed, he seems to back off of strong claims and concede points to others. But I think that economists should be trained not to make such claims in the first place.