In a must-read post, Blattman describes a number of methodological problems with the interpretation of experiments in social science, but says:
There’s no problem here if you think that a large number of slightly biased studies are worse than a smaller number of unbiased and more precise studies. But I’m not sure that’s true. My bet is that it’s false. Meanwhile, the momentum of technical advance is pushing us in the direction of fewer studies.
For me, the crux of the issue is this remark from Blattman:
It’s only a slight exaggeration to say that one randomized trial on the shores of Lake Victoria in Kenya led some of the best development economists to argue we need to deworm the world. I make the same mistake all the time.
The way I would put it is that there is no such thing as a study that is so methodologically pure that by itself it can serve as a reliable guide to policy. As I wrote in What Else Would be True?, the results of any study need to be thought about in the context of other knowledge.
Often, one encounters studies with conflicting results. The temptation is to focus on the methodological flaws only in the studies whose results you do not like. But remember Merle Kling's third iron law of social science: the methodology is flawed. That law applies to every study, including experiments.
Back when everyone was posting their laws (Cowen’s Laws, Kling’s Laws) I tried to come up with my own. The only one I could think of was “Never trust a single study.”