McCloskey vs the Nobel Laureates

Deirdre McCloskey is no fan of the experimental methods of the latest winners of the Nobel Prize.

a good deal of the work of the Nobelists is as startlingly unethical, and stupid, as the notorious Tuskegee syphilis experiment run from 1932 to as late as 1972. African-American men were randomly assigned to not get the penicillin that the medical scientists from the U.S. Department of Health already knew cured the disease.

Pointer from Don Boudreaux.

More broadly, she attacks the mindset of mainstream economists that their purpose in life is to come up with policies for government to impose on the public. She argues against interventionist policies both in principle and in terms of consequences: adults are entitled to make their own decisions free of government interference, and when they do enjoy that liberty, the result is greater prosperity.

My thoughts:

1. If we take it as given that government is going to undertake policy, then policy might at least be better informed by controlled experiments. In education, for example, well over 99 percent of policy “experiments” are not controlled: bureaucrats implement a new curriculum without testing whether it achieves the desired objectives (assuming those objectives are even clearly articulated).

2. What should economists say about policy? I think that McCloskey would argue that our over-arching observations about the effectiveness of markets and the public-choice problems inherent in government intervention overpower the fantasy-despot analyses of market failure. We know that market processes better implement experimentation, evaluation, and evolution. She would say that experiments cannot teach us anything that takes us beyond that insight, much less anything that refutes it.

Essay backup: Paradox of Profits, Part 1

I’ve decided to back up essays I wrote for Medium here. My thoughts:

1. Medium is very poorly curated, so what little worthwhile content there is on the site is invisible.

2. As a commenter pointed out a while ago, the Medium site could fail, which might cause my essays there to disappear. My guess is that Medium will survive at least through the 2020 election, but why take chances?

3. Scott Alexander has proven that long essays can work as blog posts.

Note that these essays are not well formatted. That is because I just did a copy-paste from Medium and accepted the results. When I write new essays, as opposed to backups, I will just post them here, and the formatting will be reasonable.

So here we go:

Labor market elasticities

John Cochrane writes,

Thus, you can’t simultaneously be for higher minimum wages and for wage subsidies. That is cognitive dissonance. Or, inconsistency. Or wishful thinking. And very common.

He is commenting on a post from Tyler Cowen. My thoughts:

1. Greg Mankiw likes to point out that a higher minimum wage is like a combination of a subsidy to labor supply and a tax on labor demand. A wage subsidy does away with the tax on labor demand, which is why Mankiw prefers it. An occupational licensing fee strikes me as a tax on labor supply.

If labor demand is inelastic, that means that large changes in labor costs are accompanied by small changes in the number of workers employed. If labor demand is elastic, that means that small changes in labor costs prompt large changes in employment.

If labor demand is inelastic, then a higher minimum wage will raise labor income. Think of employers as just absorbing the cost (although that is only one possible reason for inelastic labor demand). But if labor demand is inelastic, then a wage subsidy will not raise worker income. Think of employers as just pocketing the subsidy (again, assuming that this is why labor demand is inelastic).

If labor demand is elastic (the more likely case, in my view), then a wage subsidy will work to increase labor income, while a higher minimum wage will fail. (The numerical sketch at the end of this section illustrates the arithmetic.)

2. Cochrane’s point is that economists sometimes argue for one policy that works with high elasticity of labor demand and another policy that works with low elasticity of labor demand, without apologizing for the inconsistency. An example that I have used is arguing for a higher minimum wage (which works with low elasticity of labor demand) and for more immigration (which will depress wages if there is low elasticity of labor demand). Tyler uses that example as well.

3. Tyler’s point is that if you think that labor demand is inelastic, then occupational licensing requirements, acting as a tax on labor supply, will not affect employment very much. Again, I am inclined to think that labor demand is elastic, so I think that occupational licensing requirements do adversely affect employment.

I think that on the occupational licensing issue, economists of all stripes prefer to implicitly make the elastic-demand assumption. On the minimum-wage issue, economists on the left prefer to implicitly make the inelastic-demand assumption, while on the immigration issue they prefer the elastic-demand assumption. On the immigration issue, some conservative economists prefer to implicitly make the inelastic-demand assumption. I think that libertarian economists tend to implicitly make the elastic-demand assumption in all cases. So at least we are consistent.
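To make the elasticity arithmetic concrete, here is a minimal numerical sketch. It is my own illustration, not anything from Cochrane’s or Tyler’s posts; the constant-elasticity demand curve, the function name, and the parameter values are all invented for the example.

```python
# Sketch: with constant-elasticity labor demand Q = A * w**(-eps),
# a wage-floor increase raises total labor income (w * Q) only when
# demand is inelastic (eps < 1); with elastic demand (eps > 1) it fails.

def labor_income_ratio(wage_increase_pct, eps):
    """Ratio of labor income after/before a minimum-wage increase,
    assuming constant-elasticity labor demand Q = A * w**(-eps)."""
    w_ratio = 1 + wage_increase_pct / 100.0
    q_ratio = w_ratio ** (-eps)   # employment falls as the wage floor rises
    return w_ratio * q_ratio      # labor income = wage * employment

for eps in (0.3, 1.0, 1.5):       # inelastic, unit-elastic, elastic demand
    r = labor_income_ratio(10, eps)   # a 10 percent minimum-wage increase
    print(f"demand elasticity {eps}: labor income changes by {100 * (r - 1):+.1f}%")
```

With an elasticity of 0.3, a 10 percent wage-floor increase raises labor income by about 7 percent; with an elasticity of 1.5, it lowers labor income by about 5 percent, which is the sense in which a higher minimum wage “fails” under elastic demand.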

Graduate school in economics

Tyler Cowen writes,

Andy Abel wrote a problem with dynamic programming, which was Andy’s main research area at the time. Abhijit showed that the supposed correct answer was in fact wrong, that the equilibrium upon testing was degenerate, and he re-solved the problem correctly, finding some multiple equilibria if I recall correctly, all more than what Abel had seen and Abel wrote the problem. Abhijit got an A+ (Abel, to his credit, was not shy about reporting this).

I am older than Tyler, and I went to MIT rather than Harvard, but this anecdote perfectly captures the atmosphere in grad school as I remember it. Heavy math, mathematical ability the primary source of respect. It was a system designed to produce idiot savants. A few students from that period, including Banerjee, managed to do useful work in spite of their training, but I still seethe about many of my courses, which were nothing other than a form of fraternity hazing using math exercises.

On this year’s Nobel Prize in economics

It goes to Abhijit Banerjee, Esther Duflo and Michael Kremer for work on field experiments in the economics of (under-) development. Alex Tabarrok at Marginal Revolution has coverage, starting here.

I am currently drafting an essay suggesting Edward Leamer for the Nobel Prize. Last week, I wrote the following paragraph:

The significance of what Angrist and Pischke termed the “credibility revolution in empirical economics” can be seen in the John Bates Clark Medal awards given to researchers who participated in that revolution. Between 1995 and 2015, of the fourteen Clark Medal winners, by my estimate at least seven (Card, Levitt, Duflo, Finkelstein, Chetty, Gentzkow, and Fryer) are known for their empirical work using research designs intended to avoid the problems that Leamer highlighted with the multiple-regression approach.

This year’s Nobel, by including Duflo, would seem to strengthen my case for Leamer.

Podcast on Preference Falsification

Eric Weinstein and Timur Kuran. It’s almost three hours, and I listened to the whole thing. I might listen to parts of it again, because it contains lots of interesting little pieces.

One interesting piece was Kuran’s recollection of Donald Trump belittling John McCain by saying that being captured did not make McCain a war hero. Kuran’s point was that Trump was violating political norms and his willingness to do so increased his support. As I recall, Kuran used the metaphor of “guardrails” and said that Trump was willing to ignore them.

In the three-axes model, conservatives are very attached to guardrails. Human beings are dangerous drivers on the road of life, and guardrails like religion and traditional values are what keep us from smashing into telephone poles. But in Kuran’s analysis, Trump’s supporters were so fed up with having to pretend to go along with elites that they were happy to see someone who clearly did not care what the conservative establishment thought about him.

I am not happy with the term “preference falsification.” In standard economics, preferences refer to consumer choices, and we say that “choices reveal preferences.” But few of the examples that the speakers give to illustrate preference falsification involve consumption. Instead, some of the examples in the podcast refer to signals: in Turkey, when secularists held power, people signaled that they were secular even if they were religious; now they have to do the opposite. Many other examples refer to political beliefs or voting behavior.

I am afraid that if you are not more careful in defining preference falsification, you end up using it as an all-purpose boo-word. The podcast includes some discussion of the suppression of ideas in academia. I’m totally on board that idea suppression is an issue. I am less convinced that applying the term “preference falsification” provides additional insight.

Vector autoregression

A commenter asks,

I’m curious what your opinion on Christopher Sims’s econometric work is now.

Sims is another macro-econometrician who was awarded a Nobel Prize for work that I think is of no use.

The problem in macro is causal density–there is a high ratio of plausible causal mechanisms to data. If you have dozens of causal variables and only a relative handful of data points, what do you do?

The conventional approach was for the investigator to impose many constraints on the regression model. This is mathematically equivalent to adding rows of “data” that do not come from the real world but instead are constructed by the investigator to conform exactly to the investigator’s pet theories. The net result is that you learn what the investigator wanted the data to look like. But other investigators can–and do–produce very different empirical narratives from the same real-world observations.
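The “extra rows of data” equivalence is easy to see in code. The following is my own sketch, not anything from Sims or the commenter; the simulated data and the choice of which coefficient to constrain are invented for the example. Appending a heavily weighted pseudo-observation forces a coefficient toward the investigator’s chosen value, exactly as if the constraint had been imposed directly.

```python
import numpy as np

# Simulated data standing in for real-world observations.
rng = np.random.default_rng(0)
n, k = 40, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Unrestricted least squares, driven by the data alone.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Impose" the investigator's constraint beta_2 = 0 by appending a
# fabricated row of data that the fitted model must match almost exactly.
weight = 1e6                  # a huge weight makes the constraint nearly hard
pseudo_row = np.zeros(k)
pseudo_row[2] = weight
X_aug = np.vstack([X, pseudo_row])
y_aug = np.append(y, 0.0)     # the fake "observation" says weight * beta_2 = 0

b_restricted, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print("unrestricted:", b_ols.round(3))
print("restricted:  ", b_restricted.round(3))   # beta_2 driven to ~0
```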

Sims’s approach was for the investigator to narrow down the number of causal variables, so that the computer can produce a model without the investigator doctoring the data. But that is not a solution to the causal-density problem. If there are many important causal variables in the real world, then in a non-experimental setting, restricting yourself to looking at a few variables at a time is pointless.
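For readers who have not seen one, here is a minimal sketch of the mechanics of a small VAR. The data are simulated and the variable names invented, and statsmodels is an assumed dependency; the point is only that each of a handful of variables is regressed on lags of all of them, with no investigator-imposed exclusion restrictions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulate three loosely linked series standing in for macro variables.
rng = np.random.default_rng(42)
T = 200
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.1],
              [0.1, 0.0, 0.3]])   # a stable cross-lag coefficient matrix
data = np.zeros((T, 3))
for t in range(1, T):
    data[t] = A @ data[t - 1] + rng.normal(size=3)
df = pd.DataFrame(data, columns=["output", "inflation", "interest_rate"])

# Fit an unrestricted VAR: every variable regressed on lags of all three,
# with the lag length chosen by an information criterion rather than theory.
results = VAR(df).fit(maxlags=4, ic="aic")
print(results.summary())
```

Nothing in the fit is doctored by the investigator, but only because the variable list has been cut down to three, which is exactly the restriction criticized above.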