Perhaps we may think of rigor as having two dimensions.
One dimension is high standards for verifiability. For example, in mathematics, a theorem is verified using a logical proof. In physics or chemistry, a hypothesis is verified through the process of controlled experiments.
A second dimension is open inquiry. That means one is allowed to entertain a heterodox hypothesis, free from social pressure. If I face pressure to discard a hypothesis, that pressure should come only from its failure along the verifiability dimension.
Some remarks:
1. In social science*, it is much harder than in physics or chemistry to maintain high standards of verifiability. The problem is causal density. The physicist does not have to worry about individual free will or cultural evolution, both of which add considerable causal factors to the problems studied by social scientists.
Consider the question of how to prevent another Challenger disaster. The technical engineering answer seems to be satisfactorily verified. But the organizational-behavior dimension of the problem is more contestable. Would a better formal process have led to a better decision? Or was a different cultural mindset necessary?
*As you know, I despise the term “social science,” and if it were up to me that term would be replaced by “cultural analysis.”
2. Consider examples of the four quadrants:
high standards of verifiability along with open inquiry: Call this quadrant “rigorous.” Think of STEM subjects, except where open inquiry is somewhat impeded by “establishment” scientists restricting access to money and status.
low standards of verifiability along with open inquiry: Call this quadrant “faux rigorous.” Think of experimental psychology, where careless methods resulted in the “replication crisis.” Or think of multiple regression in economics, which was discredited by Edward Leamer’s critique. In fact, economists since 1945 have been engaged in collective self-deception, believing that their standards of verifiability were high when they were not. Their mathematical models lacked a tight relationship to reality, hence what Paul Romer dubbed “mathiness.” And their empirical work, while more careful in recent years, still cannot definitively answer many important questions.
high standards of verifiability but without open inquiry: Call this quadrant “willful blindness.” Think of research related to IQ. Not all of this research meets high standards of verifiability, but some of it does, and even the solid research gets dismissed and denounced.
low standards of verifiability along with barriers to open inquiry: Call this quadrant “dogma.” I think that a lot of sociological theories of power and oppression fall in this quadrant. Climate science is far from my area of expertise, but my intuitive guess is that there is at least a 25 percent chance that it falls in this quadrant.
3. I believe that the trend is for social science to be less rigorous and more dogmatic. I see this as the central tragedy of our intellectual life.
Kling has made the causal density argument before and I’ve responded that I think the Social Sciences should take a page from Engineering. Kling hates/hated this suggestion because he considers “social engineering” the ultimate sin that infects the Social Sciences.
I think the difference is in how I view the processes by which engineering solutions emerge. Let me try to make the same argument with a different analogy. Engineering is applied physics (e.g. statics, dynamics, electromagnetics) and some applied chemistry, especially materials science.
Medicine and Agricultural Science are effectively applied biology. They are similar to engineering in that the deterministic behaviors discovered in biology/biochemistry/molecular-biology labs become non-linear when combined with the unconstrained variables of the real world.
The real-world application of the underlying verifiable science tends to be more Social-Science-like, with high causal density, yet over time these fields tend to converge on solutions that appear wholly deterministic. My claim is that all of these applied STEM fields use processes and feedback mechanisms that are different from the verifiable mechanisms used by the underlying hard sciences.
It is the techniques of the Applied Sciences that the Social Sciences can learn the most from. No one claims that Agricultural Science is rigorously deterministic, yet over time the body of accumulated knowledge does tend to converge on deterministic-like solutions. The development of canola oil was quite plausibly a disaster waiting to happen (like hydrogenated oil?), yet over time only the most eccentric views consider it unsafe.
I think the “two dimensions of rigor” model is a good foundation, but it is slightly too simplistic. It ignores how the applied sciences solve real-world non-linear problems.
The problem isn’t just causal density; there’s also interdependence. Social researchers are part of society. Give them the power to reorder society, and they’ll predictably misuse it. Without that power, they can’t experiment.
We can get some results with small-scale experiments (e.g. charter schools); they become corrupt fairly often, but not at a level that does society-wide harm. The successful experiments may point in useful directions. However, causal density (and more subtle forms of corruption, like biased results) limits our ability to draw general conclusions.
Food scientists, doctors, and engineers are very much part of society too. We never speak of giving them power to remake society. There have been spectacular failures in all of the applied sciences, especially when these fields are new/young, but these failures are treated as important data points rather than talking points to be danced around.
I don’t think the Social Sciences need power to experiment any more than the Applied Sciences do. Understanding problem scope and the solution space should be fundamental to both sets of disciplines.
The problem scope for social science is, well, society. To experiment they would need power over society in much the same way a food scientist needs power over ingredients or an electrical engineer needs power over circuits. How else could they experiment?
Personally, I think societies based on slave labor, particularly those where the children of slaves automatically become slaves themselves, are undesirable. Is there a way to verify that this strange, strong opinion is valid? Is there some region or arena in which my opponents and I can openly debate the “truth” of this notion?
Sure, I can say that now and people won’t argue much, but a couple hundred years ago? In Richmond, Virginia, instead of Richmond, California? I don’t think so.
Ethics and morality, and maybe the way we practice the religions we observe, would seem to be important but outside the limits of the arena in which our minds can legitimately play.
Or so it seems to me.
I would draw the opposite conclusion. If our minds didn’t play with ethics and religion, ethics and religion would cease to exist. They exist nowhere in nature, except in our heads. As far as I can tell, anyway.
It is pretty much a truism that “science tells us what is; it can’t tell us what ought to be.” Similarly, “you can’t get an ought from an is.” If you keep asking “why is that wrong?”, you eventually come to “I can’t give you a reason; it just is” (or, more likely, if you are talking with someone, “If you don’t think that’s wrong, there’s something wrong with you.”). This was earth-shattering when I encountered it as a 16-year-old in love with science.
The best you can do is build a system like Euclid built his geometry, with definitions and axioms that seem right and are just accepted. “A line is the shortest distance between two points” and “parallel lines never meet” are two of Euclid’s beginnings. “X is defined as [ ] and Y is defined as [ ]. X is right and Y is wrong.” can start a system of right and wrong.
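To make the Euclid analogy concrete, here is a minimal sketch, in Lean, of what such a system looks like. It is purely illustrative, and every name in it (Act, Wrong, Harms, harm_is_wrong, Permissible) is a hypothetical stand-in rather than anything from the discussion above:

```lean
-- Purely illustrative: a tiny "ethics" built the way Euclid built geometry,
-- from primitives and axioms that are simply accepted, then a derived theorem.
-- All names here (Act, Wrong, Harms, harm_is_wrong, Permissible) are hypothetical.

axiom Act : Type            -- an undefined primitive, like "point" or "line"
axiom Wrong : Act → Prop    -- a predicate we posit rather than prove
axiom Harms : Act → Prop

-- An axiom accepted without proof, playing the role of the parallel postulate.
axiom harm_is_wrong : ∀ a : Act, Harms a → Wrong a

-- A definition built from the primitives.
def Permissible (a : Act) : Prop := ¬ Wrong a

-- A derived "theorem": whatever is permissible does no harm.
theorem permissible_does_no_harm (a : Act) (h : Permissible a) : ¬ Harms a :=
  fun hh => h (harm_is_wrong a hh)
```

The derived result is no stronger than the unproven axiom it rests on; swap the axiom and you get a different, equally internally consistent system of right and wrong.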
It is customary to think nowadays that morals are narrow-minded and limited. One should not impose her morals on anyone else. But everyone is supposed to behave ethically. In fact, things like the human genome project set aside money to employ ethicists and philosophers. But no system of ethics can be developed without unproven–and unprovable–foundations. It is simply morality that dare not speak its name. (And whose explication involves philosophers and ethicists rather than priests and theologians. Or perhaps more accurately, philosophers acting as priests.)
I certainly prefer cultural analysis over social science, but not by much.
The too-long word entrepreneur is unique and precise, but at 5 (French) syllables, hard for most folk to use. Owners is too wide; Firm Founders… sounds like special underwear or something. I have no good suggestions.
Verifiability and open inquiry both are necessary. And probably sufficient for “rigor”.
One of the advantages of dogma, especially when it’s good dogma, is that it gives those of less than average intelligence an easy way to follow the behavior of near-optimal others and get reasonably good results.
So why do so many elites continue to support socialism, which has been verified to be bad, and often even oppose open inquiry about it? Because although it’s bad dogma, it does allow lesser intellects to follow it.
I don’t know about particular processes, but I think Feynman hit the nail on the head regarding the cultural mindset: “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”