If the math is done right, it should then say precisely that: there isn’t enough data to resolve the parameters you’re trying to impute with any reasonable degree of confidence. The ‘anti-math’ people seem to forget that uncertainty is itself a quantifiable thing.
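To make that concrete, here is a minimal sketch in Python (the coin-flip setup and the handful of data points are invented purely for illustration) of the math itself reporting that there isn’t enough data:

```python
# Estimating a coin's bias from only five flips: the posterior
# credible interval is so wide that the honest summary is
# "not enough data to resolve this parameter."
from scipy import stats

heads, tails = 3, 2                            # five observations total
posterior = stats.beta(1 + heads, 1 + tails)   # uniform Beta(1,1) prior
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval for the bias: [{lo:.2f}, {hi:.2f}]")
# Prints roughly [0.22, 0.88] -- the uncertainty is itself quantified.
```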
This does not address the problem that Richard Bookstaber and others call radical uncertainty. Consider what the CBO director wrote concerning the agency’s evaluation of the ARRA (the 2009 Stimulus bill).
The macroeconomic impacts of any economic stimulus program are very uncertain. Economic theories differ in their predictions about the effectiveness of stimulus. Furthermore, large fiscal stimulus is rarely attempted, so it is difficult to distinguish among alternative estimates of how large the macroeconomic effects would be. For those reasons, some economists remain skeptical that there will be any significant effects, while others expect very large ones.
Note that he did not attempt to quantify this uncertainty, nor could he have done so. Note also that what Congress and the public focused on were the apparently precise numerical estimates of the CBO model, rather than the uncertainty of those estimates.
The CBO uses a standard macro model, in which there is only one type of worker in the economy. I believe that workers in today’s economy are highly specialized, and that this accounts for the difficulty in creating new patterns of trade when old patterns become unprofitable. It is easier to use math to analyze a model with one type of worker than it is to apply math to my model. I think that is an argument against the tyranny of math in economics.
Arnold here cites the great problem of translations:
What is “lost” in the translation?
To translate broad conceptual thinking into precise mathematical terminology can be expected to result in a loss of thought (or at least of its depth) and a loss of grasp of the concepts.
Sure, the math-model side of Economics is overdone; it was even worse when I went to college in the late 1980s. But I rather think of math as a way of keeping Economics more grounded in reality than other social sciences. So while there are good reasons to doubt the Keynesian macroeconomic models, they still gave a good sense of economic activity. (Especially since Hayek in 1931 was stating prices would stabilize in 1932.) Even today, we can view the GDP numbers to understand:
1) There is slower growth across all of the developed world, probably a combination of slowing productivity and population growth.
2) There is still faster growth in China and India, which are still growing on the back of the export model and a growing urban population.
Probably the main mistake is in how people both read the data and predict the future. I remember the 1992 election, and nobody was predicting that the strongest economy in 70 years was right around the corner. (Or better yet, that the 1970s/1980s crime wave would suddenly decrease.) But looking at data and numbers gives a better basis for predicting the future. (On policy we have to remember that both sides tend to overestimate the impact.) I remember the following statements from 1993:
1) NAFTA would not have that significant an impact on the US economy, as Mexico was too small to alter our economy. That was very true.
2) Japan’s low birth rates and lack of immigration would significantly slow down its economy in the long run. That one turned out to be really true.
“But I rather think of math as a way of keeping Economics more grounded in reality than other social sciences.”
Let’s collude on this and create a consensus. I agree with you. But on the other hand, the problem arises when people assume a subject is more grounded when it includes mathiness.
By analogy, as a new engineer, my coworkers and I figured out that higher-ups were suckered by conclusions drawn from colorful finite element models. Though I can’t be sure because I never quantified it, that seemed to come at the expense of relying on the judgment and intuition of the veterans who had seen everything.
There isn’t anything wrong with the math (except when there is, almost every time). It is the confidence people place in it.
Just to expound on radical uncertainty: you can only quantify uncertainty for the set of hypotheses in your hypothesis space. If there are possibilities you haven’t thought to consider, then you cannot quantify the uncertainty for those possibilities. And Knight distinguished uncertainty from risk by defining the former as unquantifiable.
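A minimal sketch of that point, with an invented two-hypothesis coin example: Bayesian updating can only spread probability over the hypotheses enumerated in advance, so a possibility nobody thought to include never shows up in the numbers.

```python
# Two candidate models of a coin; P(heads) under each.
hypotheses = {"fair_coin": 0.5, "biased_coin": 0.8}
prior = {h: 0.5 for h in hypotheses}

data = ["H", "H", "H", "H"]  # four heads in a row

# Bayes' rule, normalized over the enumerated hypotheses only.
likelihood = {h: 1.0 for h in hypotheses}
for flip in data:
    for h, p_heads in hypotheses.items():
        likelihood[h] *= p_heads if flip == "H" else (1 - p_heads)

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # ~87% on the biased coin
# A two-headed coin (P(heads) = 1) fits this data better than either
# hypothesis, but since it was never in the hypothesis space, the
# machinery silently assigns it zero -- that is the unquantified part.
```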
Also, I’d highly recommend Uncertainty by Briggs, which discusses the Logical Probability school of thought (a closely related cousin to Bayesian Statistics). One point he stresses is that not all probability is quantifiable.
An amusing CBO reform would require all line charts to be displayed in bands the width of the purported confidence interval, and the director to bet half his salary – at double or nothing – on the wager that forecasts will stay within the bands.
In analysis of potential interventions, instead of disclaimers like the one in the original post, one would see two very wide ribbons that mostly overlap, with perhaps a tiny slice of one peeking out from the top or bottom of the other. That picture would be worth a thousand words.
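For what it’s worth, a minimal sketch of that picture (the forecasts and band widths are made-up numbers, purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2010, 2021)
# Hypothetical point forecasts of GDP growth (%) under two interventions.
policy_a = 2.0 + 0.05 * (years - 2010)
policy_b = 2.3 + 0.05 * (years - 2010)
half_width = 1.5  # a wide purported 90% band

fig, ax = plt.subplots()
ax.fill_between(years, policy_a - half_width, policy_a + half_width,
                alpha=0.3, label="Policy A, 90% band")
ax.fill_between(years, policy_b - half_width, policy_b + half_width,
                alpha=0.3, label="Policy B, 90% band")
ax.plot(years, policy_a)
ax.plot(years, policy_b)
ax.set_xlabel("Year")
ax.set_ylabel("Projected GDP growth (%)")
ax.legend()
plt.show()  # the ribbons mostly overlap; only a sliver distinguishes them
```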
This is yet another instance of my general proposal for reform to “build in betting,” another one being that laws advocated on the basis of promising certain positive results should include a self-termination provision should those claimed benefits not arrive.
Obamacare had your second proposal.
@Dave. This is only one side of what has been a major debate in the decision-under-uncertainty literature for decades.
The problem with your position (what I term Knightian) is that it ignores revealed preference. People make decisions, in terms of lives and money, in Knightian regimes that imply a combination of probability estimates and risk preferences.
It turns out, it’s very hard to construct an argument against this interpretation without undermining the concept of revealed preference in general, which creates a whole host of other problems if you want to actually use economics as a framework.
But what about when policymakers are lying? We can assume an efficient market where lying doesn’t work, but they act like they think it works.
Just because people make decisions under uncertainty doesn’t mean that they are able to, or even attempting to, quantify probabilities. I believe the concept of revealed preferences is useful, but assuming that it implies individuals are assigning quantitative probabilities to all relevant uncertainties is like thinking the pool player in the “pool player analogy” actually solves all the equations before taking his shot.
Umm, if your objection is that individual actors aren’t actually doing the math, then like I said earlier, you simply don’t believe in economics at all.
Individual participants in a regular market aren’t trying to calculate the market clearing price either. They’re simply buying or selling or not doing either based on their individual preferences.
Just like buying or selling at something other than the market price, behaving differently from an implied distribution creates an arbitrage opportunity. The size of the opportunity depends on the amount of aggregation that is implying the distribution.
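To put rough numbers on that (all of the figures below are invented for illustration): if an aggregate such as a prediction market implies one probability and an actor behaves as if a different one holds, anyone holding the implied distribution can trade against the actor at a positive expected value.

```python
implied_p = 0.6   # probability implied by the aggregated market
actor_p = 0.4     # probability implied by the actor's behavior

# The actor happily sells a $1-if-event ticket at any price above 0.40.
price = 0.45
expected_profit = implied_p * 1.00 - price  # under the implied distribution
print(f"Expected profit per ticket: ${expected_profit:.2f}")
# Strictly this is positive expected value rather than a riskless
# arbitrage; the gap between the two probabilities sets its size.
```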
This isn’t really that controversial.
Oh, and a pool player may not be doing Newtonian physics in his head, but if he behaves in a way that violates Newtonian physics, he will lose. If you violate the rules of probability, you will lose too.
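The classic version of that claim is the Dutch book argument. A minimal sketch, with invented odds: a bettor whose stated probabilities for an event and its complement sum to more than one can be sold a pair of bets that loses no matter what happens.

```python
p_rain = 0.7     # price the bettor will pay for a $1-if-rain ticket
p_no_rain = 0.6  # price for $1-if-no-rain: incoherent, since 0.7 + 0.6 > 1

cost = p_rain + p_no_rain  # bettor pays $1.30 for both tickets
payout = 1.00              # exactly one ticket pays out
print(f"Guaranteed loss: ${cost - payout:.2f}, whatever the weather")
# Only coherent probabilities (summing to 1 over exhaustive outcomes)
# avoid a guaranteed loss -- the probabilistic analogue of the pool
# player who cannot beat Newtonian physics.
```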
Somebody loses every pool match.
And people make non-optimal decisions constantly.
The difference is that I’m attempting to describe reality. You are trying to describe how your models work.
Broadly speaking, I believe people seek to optimize their ordinal utility from the available options.
But I also think people rationalize heuristically, etc. Random walk and efficient-market theories just postulate that the errors (usually) balance each other out on average. I believe market efficiency is a case-by-case question.