The Early American Economic Association

Bernard A. Weisberger and Marshall I. Steinbaum quote from a draft of the founding document of the American Economic Association

We regard the state as an educational and ethical agency whose positive aid is an indispensable condition of human progress. While we recognize the necessity of individual initiative in industrial life, we hold that the doctrine of laissez-faire is unsafe in politics and unsound in morals; and that it suggests an inadequate explanation of the relations between the state and the citizens.

We do not accept the final statements which characterized the political economy of a past generation. . . . We hold that the conflict of labor and capital has brought to the front a vast number of social problems whose solution is impossible without the united efforts of Church, state, and science.

They note that in the final version, the phrase “the doctrine of laissez-faire is unsafe in politics and unsound in morals” was removed. Their narrative (interesting throughout) laments the decline of left-wing radicalism in the AEA since the early days.

Thomas C. Leonard’s Illiberal Reformers covers the same ground, from a much less sympathetic perspective. About that book, the authors write,

Leonard argues that these views are central to, and thus taint, his [AEA founder Richard Ely] scholarship and those of his academic disciples. But Ely’s views were by no means unique among many intellectuals of his time, including those, like Godkin, who were his staunchest opponents, and those views certainly don’t invalidate the rest of his thinking. Above all, he considered himself an advocate for workers against exploitation by economic elites. Certainly, the set of people for whom he advocated is much narrower than would be the case among self-described progressives today. In those days, just as now, the interests of workers in developed countries were often pitted against those even more unfortunate than themselves, and consequently, many workers and their advocates supported what were in effect restrictions on labor supply in order to reduce their competition. The chief target of Ely’s scholarship and advocacy was the hegemony of free-market economics, against which he offered a vision of contending interests vying for shares of the pie, a contest in which bargaining power determined all.

I am only a little way into Leonard’s book, and I may only skim it. He uses the phrase “reform as vocation” to describe Ely’s cohort. That is, they professionalized the role of the progressive policy advocate. The economist became the scientific expert, based in academia, properly credentialed, who would be called on by political leaders for advice to better engineer the economic system.

From Weisberger and Steinbaum’s point of view, the reformers are genuine, while their opponents are merely tools of the existing order. They certainly would fail an ideological Turing test. My guess is that Leonard would fail also, although not nearly as badly.

Prison and Mental Illness

Scott Alexander writes,

What about that graph? It’s very suggestive. You see a sudden drop in the number of people in state mental hospitals. Then you see a corresponding sudden rise in the number of people in prison. It looks like there’s some sort of Law Of Conservation Of Institutionalization. Coincidence?

Yes. Absolutely. It is 100% a coincidence. Studies show that the majority of people let out of institutions during the deinstitutionalization process were not violent and that the rate of violent crime committed by the mentally ill did not change with deinstitutionalization. Even if we take the “15% of inmates are severely mentally ill” factoid at face value, that would mean that the severely mentally ill could explain at most 15%-ish of the big jump in prison population in the 1980s. The big jump in prison population in the 1980s was caused by the drug war and by people Getting Tough On Crime. Stop dragging the mentally ill into this.

Another case of “this one chart” not being a compelling argument. Read the whole post. He is not buying the view that de-institutionalization of the mentally ill caused the prison population to rise.

Measurement Problems

Scott Sumner writes,

economists don’t even know that they don’t know what inflation is. They talk as if it’s some sort of objective fact, like the height of Mt. Everest, which we ascertain with ever more accurate measurements.

I agree, and by the same token, we do not know the rate of productivity growth. The aggregation problems involved in trying to characterize the economy as a GDP factory are just too daunting.

On Climate Science

Phillip W. Magness writes,

In a strange way, modern climatology shares much in common with the approach of 1950s Keynesian macroeconomics. It usually starts with a number of sweeping assumptions about the relation between atmospheric carbon and temperature, and presumes to isolate them to specific forms of human activity. It then purports to “predict” the effects of those assumptions with extraordinarily great precision across many decades or even centuries into the future. It even has its own valves to turn and levers to pull – restrict carbon emissions by X%, and the average temperature will supposedly go down by Y degrees. Tax gasoline by X dollar amount, watch sea level rise dissipate by Y centimeters, and so forth. And yet as a testable predictor, its models almost consistently overestimate warming in absurdly alarmist directions and its results claim implausible precision for highly isolated events taking place many decades in the future. These faults also seem to plague the climate models even as we may still accept that some level of warming is occurring.

Pointer from Don Boudreaux. Read the whole thing. I have this same instinct about climate models, which does not necessarily mean that I am correct in my skepticism.

The Contrarian Indicator

In August 2008, Olivier Blanchard came out with a working paper saying that “The state of macro is good.”

His upbeat view of the stock market was posted on February 1st. In the less than two weeks since then, the S&P is down about 5 percent.

I can forgive him for having poor timing on the stock market. My own view is that the stock market is not rational, and in fact this very irrationality makes the market difficult to beat.

But I find his smugness about MIT macroeconomics much more difficult to swallow. He would now say that there were problems with the consensus of a decade ago, but he still champions the MIT mystique.

The Big Short on Outsider Personalities

This weekend I watched The Big Short. The movie makes a big deal, as does the book, about the odd personalities of the investors who saw the financial crisis coming more clearly than others. Some thoughts on that:

1. If the typical normal person (or normal investor or normal regulator) saw a financial crisis coming, then it would not occur.

2. At any one time, there are lots of outsiders forecasting extreme events. If you bet on outsiders all the time, most of the time you will lose.

3. The challenge for insiders is to filter out the noise from outsiders without filtering out the signal.

4. You filter out signal when you treat as sacred hypotheses that really should be questioned. As the movie points out, the hypothesis that AAA-rated securities are safe was sacred. The hypothesis that house prices never go down in more than a few locations at the same time was sacred. The hypothesis that new risk management techniques made old-fashioned mortgage underwriting standards obsolete was sacred.

5. People with outsider personalities are less likely to fall into the trap of holding hypotheses as sacred. If you don’t need to get along with the insiders, then you question them. You question them when they are right and you question them when they turn out to be wrong.

6. As you know, I think that MIT economics has produced a set of insiders who hold sacred hypotheses. Math equals rigor. AS-AD. Market failure always justifies government intervention. Etc. The Book of Arnold is an attempt to call them out on it.

Decentralized Data Collection

Virginia Postrel writes,

Premise reverses the usual do-gooder assumption about the Internet’s benefits for people in developing countries — that it supplies precious information from abroad. (People in Pakistan can take online courses from MIT!) Instead, it turns those ubiquitous phones into a way of bypassing distant bureaucrats to get systematic information, collected by people who understand the local territory, out of the shadows and into the world economy.

Read the whole thing, which describes an app that allows businesses and governments to undertake research about price trends and other economic phenomena in developing countries.

The Research Climate, So to Speak

Judith Curry writes,

Careerism leads a scientist not to want to have their research be challenged or audited, for fear of damage to their reputation that is shallowly based on such things as publication numbers, funding, memberships on prestigious boards, press releases and citation numbers (rather than an interest in learning and making meaningful contributions that advance science).

Policy advocates/activists do not want to see their science challenged (or the science of their political allies), for fear that the challenge will diminish their policy and political objectives. Challenges from someone on the ‘other side’ of the policy/political debate are regarded as especially objectionable, since their motives are ‘different’. As a result, we are seeing an epidemic of ‘activism that abuses science as a weapon.’

Read the whole post. I was not sure what made the best excerpt.

I see this issue in Martin Gurri terms, with insiders and outsiders in conflict. The insiders are the credentialed academics. The outsiders are non-academics or academics from other fields. We can expect outsiders to enjoy greater access to information and more ability to publicize their analyses than was the case before the Internet. The insiders will react by attacking the outsiders’ motives and lack of credentials. If we side too much with the outsiders, we risk nihilism, in which good science is too easily dismissed. If we side too much with the insiders, we risk groupthink, in which bad ideas persist because contrary analysis is suppressed.

Timothy Taylor and Russ Roberts

Self-recommending.

Taylor says,

it just seems to me that often when people talk about growth, the first thing they talk about is not the role of the private sector or firms. They talk about how the government can give us growth, through tax cuts or spending increases or the Federal Reserve. When they talk about fairness and justice, they don’t talk about the government doing that. They talk about how companies ought to provide fairness and justice in wages and health care and benefits and all sorts of things. So it seems to me that our social conversation about those things is topsy turvy.

Machine Learning and Holdback Samples

Susan Athey writes,

One common feature of many ML methods is that they use cross-validation to select model complexity; that is, they repeatedly estimate a model on part of the data and then test it on another part, and they find the “complexity penalty term” that fits the data best in terms of mean-squared error of the prediction (the squared difference between the model prediction and the actual outcome).

Pointer from Tyler Cowen.
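The loop Athey describes can be sketched in a few lines. This is a minimal illustration, not her method: the shrinkage estimator, the synthetic data, and the penalty grid below are all my own assumptions, chosen only to show a model being repeatedly fit on part of the data and tested on the rest.

```python
# A toy version of cross-validation to select a complexity penalty.
# Everything here (data, estimator, grid) is illustrative.
import random

random.seed(2)

n = 120
xs = [random.uniform(0, 5) for _ in range(n)]
ys = [1.5 * x + random.gauss(0, 1) for x in xs]

def fit(x, y, lam):
    """Ridge-style shrunken slope through the origin."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

def mse(b, x, y):
    """Mean-squared error of the prediction b*x against y."""
    return sum((yi - b * xi) ** 2 for xi, yi in zip(x, y)) / len(x)

def cv_error(lam, k=5):
    """Average held-out MSE over k folds for a given penalty."""
    fold = n // k
    errs = []
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        x_te, y_te = xs[lo:hi], ys[lo:hi]          # test fold
        x_tr, y_tr = xs[:lo] + xs[hi:], ys[:lo] + ys[hi:]  # the rest
        errs.append(mse(fit(x_tr, y_tr, lam), x_te, y_te))
    return sum(errs) / k

grid = [0.0, 1.0, 10.0, 100.0, 1000.0]
best_lam = min(grid, key=cv_error)
print(best_lam)  # with this much signal, little shrinkage should win
```

The point is only the shape of the procedure: each candidate penalty is scored on data the fit never saw, and the penalty with the best average out-of-fold error wins.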

In the early 1980s, Ed Leamer caused quite a ruckus when he pointed out that nearly all econometricians at that time engaged in specification searches. The statistical validity of multiple regression is based on the assumption that you confront the data only once. Instead, economists would try dozens of specifications until they found one that satisfied their desires for high R-squared and support for their prior beliefs. Because the same data has been re-used over and over, there is a good chance that the process of specification searching leads to spurious relationships.

One possible check on the Leamer problem is to use a holdback sample. That is, you take some observations out of your sample while you do your specification searches on the rest of the data. Then when you are done searching and have your preferred specification, you try it out on the holdback sample. If it still works, then you are fine. If the preferred specification falls apart on the holdback sample, then it indicates that your specification searching produced a spurious relationship.
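The holdback check above can be made concrete with a small sketch, using only the standard library. The data and the one-parameter "specification" are invented for illustration; the point is just that the holdback observations are set aside before any searching and consulted only once at the end.

```python
# Minimal sketch of a holdback-sample check.  Synthetic data; the
# "specification" is a single least-squares slope through the origin.
import random

random.seed(0)

n = 200
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [2.0 * x + random.gauss(0, 1) for x in xs]

# Hold back the last 25% of observations before any specification search.
split = int(0.75 * n)
x_train, y_train = xs[:split], ys[:split]
x_hold, y_hold = xs[split:], ys[split:]

def fit_slope(x, y):
    """Least-squares slope through the origin: b = sum(x*y) / sum(x*x)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def mse(b, x, y):
    """Mean-squared prediction error of the slope b."""
    return sum((yi - b * xi) ** 2 for xi, yi in zip(x, y)) / len(x)

b = fit_slope(x_train, y_train)  # all searching happens on the training data
print(round(b, 2))               # close to the true slope of 2
print(round(mse(b, x_hold, y_hold), 2))  # holdout MSE near the noise variance
```

Here the relationship is real, so the preferred specification "still works" on the holdback sample; a spurious specification would fall apart at this last step.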

Machine learning sounds a bit like trying this process over and over again until you get a good fit with the holdback sample. If the holdback sample is a fixed set of data, then this would again lead you to find spurious relationships. Instead, if you randomly select a different holdback sample each time you try a new specification, then I think that the process might be more robust.
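The worry about a fixed holdback sample can be simulated directly. In the sketch below (entirely my own construction, not from the post), the outcome is pure noise, so no candidate predictor is genuinely related to it; yet searching 500 candidates against one fixed holdback sample produces a winner that looks good on that sample and much worse on genuinely fresh data.

```python
# Simulation: specification searching against a FIXED holdback sample.
# The outcome is pure noise, so any apparent fit is spurious.
import random
import statistics

random.seed(1)

n = 100
y_hold = [random.gauss(0, 1) for _ in range(n)]   # fixed holdback outcomes
y_fresh = [random.gauss(0, 1) for _ in range(n)]  # genuinely new draw

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((z - mb) ** 2 for z in b)) ** 0.5
    return num / den

# Try 500 "specifications" (random candidate predictors) against the
# same holdback sample and keep the best-looking one.
candidates = [[random.gauss(0, 1) for _ in range(n)] for _ in range(500)]
best = max(candidates, key=lambda x: abs(corr(x, y_hold)))

print(round(abs(corr(best, y_hold)), 2))   # spuriously large
print(round(abs(corr(best, y_fresh)), 2))  # typically much smaller
```

Drawing a different holdback sample for each new specification, as suggested above, removes the opportunity to select on the quirks of any one sample, which is why that version of the process should be more robust.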

I don’t know how it is done in practice.