Uninformative Regression

Pierre Lemieux writes,

simple regression analysis confirms the absence of statistical correlation between country size and economic freedom.

Simple regression analysis is not a good choice with skewed data, such as the population sizes of different countries. With a regressor this skewed, the algorithm in effect just compares the freedom-index values of China and India with the average of all other countries, because those two observations carry nearly all of the leverage. That is not very informative.
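To illustrate the leverage problem, here is a minimal sketch with made-up numbers (not the actual freedom-index data): when a regressor is as skewed as country population, a couple of giant observations dominate the fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 150 countries with populations from a few million
# to a few hundred million, plus two giants standing in for China and India.
pop = np.concatenate([rng.lognormal(mean=16, sigma=1.2, size=150),
                      [1.4e9, 1.35e9]])

# Leverage h_i for a simple regression with an intercept depends only on
# the regressor: h_i = 1/n + (x_i - xbar)^2 / sum((x_j - xbar)^2).
x = pop - pop.mean()
h = 1 / len(pop) + x**2 / np.sum(x**2)

print(f"combined leverage of the two giants: {h[-2:].sum():.2f}")
print(f"combined leverage of the other 150 countries: {h[:-2].sum():.2f}")
```

The leverages always sum to 2 (one per fitted coefficient); here the two giants account for roughly half of that total between them, so the fitted slope is essentially a comparison of their freedom-index values against everyone else's average.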

If you want to see a more careful analysis, which does show that smaller countries tend to be better run, see the essay that I wrote, "The Recipe for Good Government."

What Drives the Result?

Douglas L. Campbell writes,

the Glick and Rose estimation strategy implicitly assumes that the end of the cold war had no impact on trade between the East and the West. Several of the Euro countries today, such as the former East Germany, were previously part of the Warsaw Pact. Any increase in trade between Eastern and Western European countries following the end of the cold war would clearly bias the Glick and Rose (2017) results, which naively compare the entire pre-1999 trade history with trade after the introduction of the Euro.

Pointer from Mark Thoma.

Campbell is criticizing a paper that found that the creation of the Euro increased trade by 50 percent. He is suggesting that the result could instead have been driven by the fall of the Berlin Wall, which was not caused by the advent of the Euro. The question "what drives the result?" is one that you should ask about any empirical paper. Sadly, it is rarely disclosed honestly by authors, who may not even be aware of the answer.
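A stylized toy example (invented numbers, not the actual Glick and Rose gravity specification) shows how a naive pre/post comparison picks up the Cold War rebound:

```python
import numpy as np

# Invented series: trade between one Eastern and one Western European
# country, 1970-2015. Trade jumps 50% when the Cold War ends in 1990;
# the Euro arrives in 1999 with NO true effect at all.
years = np.arange(1970, 2016)
trade = np.where(years < 1990, 1.0, 1.5)

# Naive comparison: entire pre-1999 history vs. the Euro era.
naive = trade[years >= 1999].mean() / trade[years < 1999].mean() - 1

# Cleaner comparison: restrict the pre-period to post-Cold-War years.
clean = trade[years >= 1999].mean() / trade[(years >= 1990) & (years < 1999)].mean() - 1

print(f"naive pre/post-Euro estimate: {naive:.0%} increase")
print(f"post-1990-only estimate: {clean:.0%} increase")
```

The naive comparison attributes roughly a 30 percent trade increase to the Euro even though the true Euro effect in this toy series is zero.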

Russ Roberts on Economic Methods

He writes,

fundamentals like income or even changes in income over time are somewhat measurable with some precision, [but] we are notoriously unreliable at the things the world really cares about and asks of our profession: why did income for this or that group go up by this or that amount? What will happen if this or that policy changes? Should the subsidy to college education be increased or decreased and if so, by how much? These much-demanded answers for precision and an understanding of the complex forces that shape the world around us are precisely the questions we are not very good at answering.

It is a long essay, difficult to excerpt. Pointer from Tyler Cowen.

I am writing an even longer essay along similar lines. Currently, the hope is to have it come out this summer.

In physics, you have what is known as “effective theory.” That is, you have a theory, like Newton’s laws, which works very well for certain problems. Moreover, physicists can tell you where it works and where it does not work.

The problem in economics is that we have speculative interpretations which we try to pretend are effective theories. For reasons that Russ Roberts goes into, we cannot get rid of conflicting speculative interpretations.

The Three Iron Laws, Illustrated

Tyler Cowen writes,

I find few people are willing to embrace the more consistent statistical preference plus agnosticism, rather they play the game of “statistical noise for thee but not for me.”

He is writing about what is now a seemingly ancient question about the stock market’s reaction to the ups and downs of Mr. Trump. I want to say that this issue illustrates Merle Kling’s three laws of social science.

1. Sometimes it's this way and sometimes it's that way. In this case, sometimes one can find an effect of a change in Mr. Trump's prospects on the market, and sometimes one cannot.

2. The data are insufficient. In this case, there is not enough data to make a definitive judgment.

3. The methodology is flawed. In this case, one can argue that the analyst is basing a conclusion on statistical noise.

Maybe the quote from Tyler suggests a fourth iron law: if the issue is emotionally salient, given that the first three iron laws hold, motivated reasoning and confirmation bias take over.

Andrew Gelman on the Replication Crisis

He writes,

2011: Joseph Simmons, Leif Nelson, and Uri Simonsohn publish a paper, “False-positive psychology,” in Psychological Science introducing the useful term “researcher degrees of freedom.” Later they come up with the term p-hacking, and Eric Loken and I speak of the garden of forking paths to describe the processes by which researcher degrees of freedom are employed to attain statistical significance. The paper by Simmons et al. is also notable in its punning title, not just questioning the claims of the subfield of positive psychology but also mocking it.

Pointer from Alex Tabarrok.

I am pretty sure that at some point prior to 2011, when I criticized macro-econometrics, I said that the degrees of freedom belong to the researcher rather than to the data. That is a minor note.
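The mechanics are easy to simulate. In this hedged sketch (parameters invented), a researcher with twenty candidate outcome measures, none of which has any real effect, reports a "finding" whenever any single test clears p < 0.05:

```python
import numpy as np

rng = np.random.default_rng(42)

def one_null_study(n_outcomes=20, n=50):
    # Each outcome is a sample of n observations with true mean zero.
    # The z statistic for each outcome is the sample mean scaled by
    # sqrt(n), so each two-sided test has exactly a 5% false-positive rate.
    z = rng.normal(size=(n_outcomes, n)).mean(axis=1) * np.sqrt(n)
    return np.any(np.abs(z) > 1.96)  # "significant" on any outcome?

false_positive_rate = np.mean([one_null_study() for _ in range(2000)])
print(f"share of null studies reporting a finding: {false_positive_rate:.2f}")
```

With twenty independent looks at pure noise, roughly 64 percent of studies (1 - 0.95^20) turn up something "significant" — the garden of forking paths in miniature, except that in real research the paths are chosen after seeing the data rather than enumerated in advance.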

More important, I think that John Ioannidis deserves a mention. Yes, Gelman is focused on research in the field of psychology and Ioannidis focused primarily on epidemiology, but his paper "Why Most Published Research Findings Are False" strikes me as a milestone worth including in the timeline.

Gelman’s post is mostly about the tension between insiders and outsiders in the academic world. The insiders’ chief weapon is the peer-reviewed journal article. The outsiders’ chief weapon is the blog post. If, like me, your heart is with the outsiders, you will find Gelman’s post bracing.

I should note that in my high school statistics class last year, I had an autodidact student who, among other things, was very familiar with the term p-hacking and the related literature. This gives me hope that as the generations turn over in academia, things might improve. As Max Planck is said to have remarked, science advances one funeral at a time.

Testing for Housing Discrimination

Commenting on an article by Sun Jung Oh and John Yinger, Timothy Taylor writes,

Overall, the findings from the 2012 study find ongoing discrimination against blacks in rental and sales markets for housing. For Hispanics, there appears to be discrimination in rental markets, but not in sales markets…

However, the extent of housing discrimination in 2012 has diminished from previous national-level studies.

What was most interesting to me was the method of testing for discrimination: sending pairs of auditors of different races, but otherwise identical characteristics, to ask real estate agents for help finding apartments or homes. It would be interesting to see such a method applied to mortgage lending, rather than trying to make inferences from observational data.
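A paired audit is straightforward to simulate. In this sketch (all parameters hypothetical), each agent has an idiosyncratic baseline helpfulness that the matched-pair design differences out, isolating the discrimination gap:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit: 2000 agents, each visited by a matched pair of
# auditors who differ only in race. Agents vary in baseline helpfulness;
# discrimination lowers the minority auditor's chance of being shown a
# unit by 8 percentage points.
n_pairs = 2000
baseline = rng.uniform(0.4, 0.9, size=n_pairs)
white_shown = rng.random(n_pairs) < baseline
minority_shown = rng.random(n_pairs) < baseline - 0.08

# Because both auditors in a pair face the same agent, agent-level
# variation cancels and the mean within-pair gap estimates discrimination.
gap = white_shown.mean() - minority_shown.mean()
print(f"estimated gap in shown-a-unit rates: {gap:.3f}")
```

Here the estimate should land near the true 8-point gap; with observational data, agent-level and applicant-level differences would be confounded with race.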

The Case for Sticking with the Null Hypothesis

Jesse Singal writes,

As things continue to unfold, there will be at least some correlation between which areas of research get hit the hardest by replication issues and which areas of research offer the most optimistic accounts of human nature, potential, and malleability.

Pointer from Tyler Cowen.

Studies that show significant effects of educational interventions are right in this wheelhouse. That is why until they are scaled, replicated, and shown to have durable effects, you should view accounts of such studies with skepticism.

Prison and Mental Illness

Scott Alexander writes,

What about that graph? It’s very suggestive. You see a sudden drop in the number of people in state mental hospitals. Then you see a corresponding sudden rise in the number of people in prison. It looks like there’s some sort of Law Of Conservation Of Institutionalization. Coincidence?

Yes. Absolutely. It is 100% a coincidence. Studies show that the majority of people let out of institutions during the deinstitutionalization process were not violent and that the rate of violent crime committed by the mentally ill did not change with deinstitutionalization. Even if we take the “15% of inmates are severely mentally ill” factoid at face value, that would mean that the severely mentally ill could explain at most 15%-ish of the big jump in prison population in the 1980s. The big jump in prison population in the 1980s was caused by the drug war and by people Getting Tough On Crime. Stop dragging the mentally ill into this.

Another case of “this one chart” not being a compelling argument. Read the whole post. He is not buying the view that de-institutionalization of the mentally ill caused the prison population to rise.

On Climate Science

Phillip W. Magness writes,

In a strange way, modern climatology shares much in common with the approach of 1950s Keynesian macroeconomics. It usually starts with a number of sweeping assumptions about the relation between atmospheric carbon and temperature, and presumes to isolate them to specific forms of human activity. It then purports to “predict” the effects of those assumptions with extraordinarily great precision across many decades or even centuries into the future. It even has its own valves to turn and levers to pull – restrict carbon emissions by X%, and the average temperature will supposedly go down by Y degrees. Tax gasoline by X dollar amount, watch sea level rise dissipate by Y centimeters, and so forth. And yet as a testable predictor, its models almost consistently overestimate warming in absurdly alarmist directions and its results claim implausible precision for highly isolated events taking place many decades in the future. These faults also seem to plague the climate models even as we may still accept that some level of warming is occurring.

Pointer from Don Boudreaux. Read the whole thing. I have this same instinct about climate models, which does not necessarily mean that I am correct in my skepticism.