Garrett C. Christensen and Edward Miguel have a new paper ($). They conclude
There are many potential avenues for promoting the adoption of new and arguably preferable practices, such as the data sharing, disclosure and pre-registration approaches described at length in this article. One issue that this article does not directly address is how to most effectively – and rapidly – shift professional norms and practices within the economics research community. Shifts in graduate training curricula, journal standards (such as the Transparency and Openness Promotion Guidelines), and research funder policies might also contribute to the faster adoption of new practices, but their relative importance remains an open question. The study of how social norms among economists have shifted, and continue to evolve, in this area is an exciting social science research topic in its own right, and one that we hope is also the object of greater scholarly inquiry in the coming years.
I have been thinking quite a bit about this recently. To make a long story short:
1. Economic phenomena are rife with causal density. Theories make predictions assuming “other things equal,” but other things are never equal.
2. When I was a student, the solution was thought to be multiple regression analysis. You entered a bunch of variables into an estimated equation, and in doing so you “controlled for” those variables and thereby created conditions of “other things equal.” However, in 1978, Edward Leamer pointed out that actual practice diverges from theory. The researcher typically undertakes a lot of exploratory data analysis before reporting a final result. This process of exploratory analysis creates a bias toward finding the result the researcher wants, rather than achieving a scientific ideal of objectivity. (A stylized simulation of this specification-search problem appears after this list.)
3. In recent decades, the approach has shifted toward “natural experiments” and laboratory experiments. These suffer from other problems. The experimental population may not be representative. Even if this problem is not present, studies that offer definitive results are more likely to be published but consequently less likely to be replicated.
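To make point 2 concrete, here is a minimal simulation sketch. It is my own illustration, not code from Leamer or from Christensen and Miguel; it assumes numpy is available, and the function name, sample sizes, and number of specifications are invented for the example. A “researcher” regresses a pure-noise outcome on a treatment that truly has no effect, tries many randomly chosen sets of control variables, and keeps the specification with the largest t-statistic.

```python
# Illustrative sketch of specification-search bias (hypothetical names and settings).
# The treatment has NO true effect on the outcome, yet a researcher who tries many
# control sets and reports the "best" one finds apparent effects far more often
# than the nominal 5 percent of the time.
import numpy as np

rng = np.random.default_rng(0)


def best_t_stat(n=200, n_controls=10, n_specs=50):
    """Return the largest |t| on a null treatment across many tried specifications."""
    x = rng.normal(size=n)                       # treatment, unrelated to y
    controls = rng.normal(size=(n, n_controls))  # candidate control variables
    y = rng.normal(size=n)                       # outcome: pure noise
    best = 0.0
    for _ in range(n_specs):
        # Pick a random subset of controls, mimicking exploratory analysis.
        k = rng.integers(0, n_controls + 1)
        cols = rng.choice(n_controls, size=k, replace=False)
        X = np.column_stack([np.ones(n), x, controls[:, cols]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = n - X.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t = abs(beta[1] / np.sqrt(cov[1, 1]))    # t-statistic on the treatment
        best = max(best, t)
    return best


# Fraction of simulated "studies" whose best specification looks significant.
sims = [best_t_stat() for _ in range(500)]
print("share of null studies with best |t| > 1.96:",
      np.mean([t > 1.96 for t in sims]))
```

With fifty tries per “study,” far more than the nominal five percent of these pure-noise datasets should produce a specification that clears the conventional significance bar, which is the bias toward desired results that point 2 describes.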
I agree with Christensen and Miguel that the norms and incentives within the economics profession are the key. For a long time, both the norms and the incentives have pulled researchers toward producing certain types of results and away from following robust methods. That culture is very difficult to change.
Note that I recently discovered the web site The Replication Network.
Do you mean, “Even if this problem is not present, studies that offer definitive results are more likely to be published but consequently [LESS or NOT] likely to be replicated”?
The only solution is, as always, to set up competing models. The idea that the group that has pushed a discipline into a bad spot will be the one to adjust and rearrange things to put it into a good spot is dubious. It is the same as pointing to a poorly run government agency and saying, “they need to do this and that, and then they will be efficient and productive.”
A journal of replicated findings would be a marginal improvement.
The great thing about “natural experiments” is that they can be extended far beyond issues such as policy interventions, comparative statics, and so on.
I have started thinking about this in terms of an “existence proof.” Denmark in 2016 (or 2010, or 1999) is possible; we have seen it.
That doesn’t mean that every country can be transformed into something with numbers resembling Denmark’s. Fukuyama was talking about this (politically or more generally) when he discussed “Getting to Denmark.”
This isn’t really an issue of economics so much as of “how to think about countries.” Which raises the question: to what extent *do* economists think about countries? Are their insights universally feasible, or have they assumed a country that is more like Denmark than like (for example) Somalia or Burma?