There is something that I find troubling about the Nobel Prize for Hart and Holmstrom, and I want to try to articulate what it is.
Think of their work as consisting of three steps.
1. Identify some real-world complexities that affect how businesses operate. For example, output may result from both effort and luck. Output may be joint. A worker’s job description may include more than one objective.
2. Construct a mathematical optimization model that incorporates such complexities.
3. Offer insights into designing appropriate compensation systems, including when to outsource an activity altogether.
A big question is: how important is step 2?
In the eyes of the mainstream economics profession, it is extremely important. Without it, you either do not get to step 3, or your claims in step 3 lack reliability and credibility. Step 2 is why Hart and Holmstrom earned the Nobel Prize.
In my view, step 2 is unnecessary. If anything, it tends to get in the way, often creating a barrier to doing step 1 properly, because economists limit themselves to what is mathematically tractable. I think that Hart and Holmstrom sometimes (often?) made good choices in step 1, and that is what accounts for the value of what they arrived at in step 3.
In Specialization and Trade, I offer a number of asides that go from step 1 to step 3 directly (I will put some examples below the fold). In these asides, I am looking at Hart-Holmstrom issues. But I do not think in terms of mathematical optimization. Instead, I think in terms of a dynamic process of trial and error. A manager tries an approach to compensation. As long as it seems to work, it persists. Once it gets gamed too much by the employees, something happens: the manager makes changes, the manager gets fired, or the firm goes out of business.
Another point is that I believe that managers closer to the problem do a better job of solving it. Writing the problem down in mathematical terms makes it seem as though you can solve the problem remotely. It leads a David Cutler to believe that the government can design a compensation system for doctors that will correctly incent “quality health care.” It ignores what I call the “regulator’s calculation problem.”
I have seen several George Mason economists, including Tabarrok, Cowen, and Boettke, praise the Nobel for Hart and Holmstrom. I certainly think that the Nobel committee could have done worse. But in the end, I think Hart and Holmstrom represent a way of doing economics that is too constrained by the arbitrary requirement to use math, too focused on optimization relative to a given problem rather than the dynamics of trial and error, and too inclined to suggest that decisions can be made effectively by remote algorithms (and potentially by regulators who might use such algorithms) when in fact local decision-makers have important information that is not available remotely.
pp. 53-54 (I cite a paper by Alchian and Demsetz, which also uses no math)
Think of the firm as a team in which the output of any one individual is difficult to value. Consider a computer programmer working on part of a bank’s software system. No one can state precisely the value to the bank of the particular section of code that the programmer works on. All that we know is that the bank cannot pay programmers too much, or else it would be unable to make a profit, and it cannot pay programmers too little, or they would choose to work elsewhere.
If it is possible to attach a precise value to a particular segment of work, then it is possible for that work to be broken out of the firm and outsourced to the market. Thus, if a bank can assign a precise value to a particular software system, it has the option of contracting with an outside firm to build the software for an agreed-upon price.
In short, when the value of different tasks can be isolated, specialization will tend to take place between firms, coordinated by the price system. When the value of a particular task is difficult to measure, because its value varies a great deal depending on how it is combined with other tasks, specialization will tend to take place within a firm, governed by instructions.
pp. 72-73:
In a market economy, self-discipline tends to be rewarded with higher income. Sooner or later, the boss figures out who is working and who is shirking. If the boss cannot figure it out, then sooner or later that boss will be fired. Or if the boss is not fired, then sooner or later a competing firm will make it impossible for the inefficient firm to survive.
…within any one firm, workers try to game the system. They try to get the most pay with the least effort. Managers must constantly revise and improve their methods for determining bonuses and other rewards to try to ensure that the incentives in fact lead employees to work harder and more effectively.
I think of the ‘math’ translation as ‘abstract and generalize.’ With ‘informal math’ such as what you cite, the basic benefits are the same as with math, and the basic pitfalls are also the same. Finding the right ‘math’ for the right problem is indeed challenging at times, which is why applied math can drive pure math advances. Skipping that step by using the wrong math can create issues.
An alternative to thinking through the math for the application is a ‘model selection’ strategy. Arbitrary models are searched and fitted heuristically and then tested on new data sets. This requires vast amounts of data, which is fine when data is free, but becomes problematic at almost any marginal price.
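A minimal sketch of that strategy, with an invented data-generating process purely for illustration: candidate models are fitted heuristically to one slice of the data and judged only by how well they predict a held-out slice.

```python
# Minimal sketch of "search models, then test on new data": fit polynomial models
# of several degrees and keep whichever predicts a held-out validation set best.
# The data-generating process below is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)   # unknown "true" process plus noise

x_train, y_train = x[:100], y[:100]
x_valid, y_valid = x[100:], y[100:]

best_degree, best_error = None, float("inf")
for degree in range(1, 8):
    coeffs = np.polyfit(x_train, y_train, degree)      # heuristic fit on training data
    error = np.mean((np.polyval(coeffs, x_valid) - y_valid) ** 2)
    if error < best_error:
        best_degree, best_error = degree, error

print(f"selected degree {best_degree} with validation MSE {best_error:.3f}")
```

As the comment notes, the whole approach leans on validation data being cheap, and the search by itself says nothing about why the selected model works.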
A real alternative to the ‘math’ which better fits what you describe is observing a large number of firms without formally parameterizing their challenges and using the evolutionary processes which created them as the search across strategy space. This has obvious issues in terms of bounding the problem (choosing what cases to include) and confounding, but does avoid the math and the simplifications.
Specialization and Trade is very attractive not because it avoids the math but because it obviously suggests an alternative math. Instead of an optimization problem across a necessarily small number of variables, macroeconomics becomes a large-scale coordination problem in high dimension on a large network. It changes the structure of the problem and the questions. It becomes less obvious how to ask questions about small pricing changes in commodities, and more obvious how to ask questions about distribution, marketing, and the diversity of manufactures (long tail, etc.). This is not ‘math free,’ nor is it obviously tractable using basic calculus, which is fine, because somebody can get the Nobel for developing or finding, and adopting, the right math.
As a friend on Twitter pointed out, this prize could have easily gone to Alchian and Demsetz for their more fundamental work on this issue published in 1972. I suspect that part of the reason it didn’t is that 1) Alchian is no longer with us and 2) their work is not mathematical. A sad state of affairs.
I’m not diminishing Hart and Holmstrom’s work on this, I’m just saying Alchian and Demsetz (1972) were the real pioneers.
One point in favor of the mathematization which I haven’t seen discussed so far is that it provides specific policy advice when used as the basis for structural econometric models. Specifically, these principal agent problems yield optimization conditions that can be used as moment conditions for statistical estimation.
For example, the theoretical models formed the basis for estimation in this paper: http://www.simon.rochester.edu/fac/misra/mkt_salesforce.pdf. The authors offered specific compensation schemes to a major firm which actually implemented them. I’m guessing you’d be skeptical of such a complex model, but it resulted in millions of dollars in incremental annual profit. This type of specific policy advice would not be possible without the theory guiding the estimation routines.
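A stylized version of that link, in notation I am adding here rather than the paper’s: if the theory says that optimizing agents satisfy a first-order condition in expectation,

$$E\left[g(y_i, x_i; \theta_0)\right] = 0,$$

then the parameters can be estimated by making the sample analogue of that condition as close to zero as possible,

$$\hat{\theta} = \arg\min_{\theta} \left(\tfrac{1}{n}\sum_i g(y_i, x_i; \theta)\right)' W \left(\tfrac{1}{n}\sum_i g(y_i, x_i; \theta)\right),$$

for some weighting matrix $W$. Without the theory, there is no principled way to know which function $g$ to use as the moment condition.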
Hi Arnold,
Can you offer us some informal, verbal proofs of Hart and Holmstrom’s results, to show us that the math is indeed unnecessary?
Thanks,
-Sam
Many of the descriptions in the posts at Marginal Revolution explain the intuition behind the results without using math. Alex’s post on performance pay uses math, but the intuition is pretty simple. If output depends on effort plus luck, and you have some other measure of effort, then you want to base pay on that measure of effort as well as on output. As I wrote in my previous post on this year’s Nobel, evolutionary psychologists think that hunter-gatherers had this intuition–and they sure as heck did not use any calculus!
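To put that intuition in the standard agency notation (a sketch I am adding here, not a quote from Holmstrom): suppose output is

$$x = e + \varepsilon,$$

where $e$ is the agent’s effort and $\varepsilon$ is luck, and suppose the principal also observes a second, noisy measure of effort,

$$s = e + \eta.$$

In the textbook linear-normal case, the optimal linear contract $w = \alpha + \beta x + \gamma s$ puts positive weight on $s$ whenever it carries any independent information about effort, with the relative weights roughly inversely proportional to the noise variances, $\beta/\gamma \approx \sigma_\eta^2 / \sigma_\varepsilon^2$. That is the math version of “use the extra measure of effort, and lean on it more when it is less noisy.”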
OK! So in your opinion intuition is sufficient. As long as we can tell an intuitive story about something, that is as good as proving it?
More generally, consider all possible contracts, or all possible mechanisms, in an economic problem. Can you give an example where we say something meaningful about ALL of these (for example, that one contract is optimal in a particular sense) without math? Can you prove Arrow’s theorem (which considers all possible voting systems) with words and intuition? Or the Revelation Principle?
My thoughts on it go something like this:
Yes, clearly math can be useful. “Math” in the sense of Arrow’s Impossibility Theorem and in the sense of an example of Ricardo’s international trade theory. Obviously those are two different types or applications of math in economics.
However, there is far too much of an obsession with math in the profession. That is, there is too much reliance on it (in the sense that you can miss important things, and that it can’t get at complicated concepts like causality), AND other valid methods of analysis, such as graphs or plain English, are considered acceptable only IF they accompany math.
As Bryan Caplan pointed out in his posts on this a few years ago, mathonomics simply fails the cost-benefit test.
BenK hits on an important topic.
Optimization is a framework for making decisions subject to constraints. If you have constraints and a clear objective, you can formulate an optimization problem. There is no requirement on centralized information patterns in optimization. An auction is an example where information is decentralized and agents coordinate through a price variable. Auction dynamics are a systematic algorithm for solving a particular optimization problem. If you are upset about a central planning problem formulated through optimization, you are upset at central planning, and not at optimization, or at mathematical modeling.
Decentralized/distributed optimization is now studied quite heavily. Agents make local decisions and need only coordinate through some global variables (e.g., prices). The way agents make local decisions can be modeled as generally as one likes; very weak assumptions allow one to draw interesting conclusions. We have tools (e.g. convex optimization, complex systems) to study these problems.
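A toy sketch of that price-coordination idea, with made-up agents and numbers: each agent solves only its own small problem, and the one shared variable is a price that a simple adjustment rule moves until the market clears.

```python
# Toy decentralized coordination: each agent maximizes its own local payoff
# a*q - 0.5*q^2 - price*q, and a single price is adjusted until total demand
# matches a fixed supply. Parameter values are invented for illustration.

SUPPLY = 10.0
AGENT_PREFS = [8.0, 6.0]          # hypothetical preference coefficients a_i

def local_demand(a, price):
    # Each agent's first-order condition gives q = max(a - price, 0);
    # no agent ever sees the other agents' preferences or the total supply.
    return max(a - price, 0.0)

price = 0.0
demands = []
for step in range(1000):
    demands = [local_demand(a, price) for a in AGENT_PREFS]
    excess = sum(demands) - SUPPLY
    if abs(excess) < 1e-6:
        break
    price += 0.05 * excess        # raise the price when demand exceeds supply

print(f"clearing price ~ {price:.2f}, allocations ~ {[round(q, 2) for q in demands]}")
```

Nothing here requires a central planner to know the agents’ payoffs; the price is the only globally shared piece of information.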
The central-planning flavoured optimization is a relic. The “good people” have realized this for a long time and have already formulated new approaches to studying the networked macroeconomy.
Very well said! Completely agree!
Now, it would be a different matter if the mathematical model led to some testable predictions, which could either validate or invalidate the (economic) assumptions in step 1. But most of the time:
(a) The models do NOT readily lead to such predictions; or
(b) When they do, the mathematical assumptions are more likely to be validated or invalidated than the economic ones.
A few random thoughts on the discussion about economics that is too “mathy.”
1. Oliver Williamson won the prize several years ago covering much the same issues as this year’s winners. The vast bulk of his award-winning work appeared in books published outside the normal channels of refereed journals.
2. Sidney Winter at Yale back in the ’70s wrote extensively about evolutionary firm and industry behavior using, well, mathematical models.
3. Let’s not throw the baby out with the bathwater. The value of models, whether they are verbal or mathematical, is to make sure a line of reasoning is logically consistent. Oft times, math is a straightforward way to enforce this logical consistency. Certain types of math models also lend themselves to empirical investigation or corroboration. That is harder to do with verbal models that are empty of measurable variables.
I was going to mention Sid Winter (and his collaborator Dick Nelson) as well. There is nothing wrong with assuming the world works by trial and error and that evolution brings us to stable evolutionary solutions. What the math does is: (1) characterize the range of possible solutions (at the risk of missing interesting solutions through the concision of a mathematical statement) and (2) *explain* the solution in a way that “people stumble around trying things and some things work better than others and get copied” can’t quite do, though admittedly only to people who really understand the math, not just those with the ability to solve equations.
Much of the content of Oliver Williamson’s books consisted of reworked articles that had previously appeared in refereed journals.
Neglected to mention that Oliver Williamson’s books contain virtually no math at all.
Private knowledge of the agent is the primary reason to have contract theory in the first place. While it is true that agents know better what is best for them (and that is often meant by “type” in contract theory), they are not always able to achieve it and even less able to efficiently do it jointly with other people. There are books and heaps of research on running, eating, sleeping and while many people kinda know what is best for them in those everyday things and live without scientific prescriptions, achieving advanced goals there requires the help of science. Business is no different.
Mathematics has a problem: availability bias. It is very real; we would be much better off quantifying real environmental impact precisely, rather than abstract debt. However, that is not possible, so we have only the second best. A good non-mathematical model can be mathematized (perhaps in several ways), but it cannot always be solved. That raises the question: how can we be sure of the model’s predictions if we cannot be sure what they really are?
The problem with avoiding math is that the model is often implicit in one’s thinking, and avoiding math just means never having to face its problems, until someone is fired or the company goes out of business. In this way, local knowledge can be deceptive, allowing short-term progress that actually leads to long-term failure. I often think about Minsky like this, where near-term stability leads to far-term instability by neglecting large-scale effects. Formulating models has constructive effects on what data is gathered, which can help in determining a model’s limits and extending it to areas where it wasn’t considered, and can turn up surprises that weren’t anticipated. Having no model will never beat having a model, but a better model can.
Arnold,
I have great appreciation for Armen Alchian. I also have great appreciation for Jack Hirshleifer. There is much to be learned from each of them. Yet, they have vastly different styles. Alchian often pushed math to the appendix or excluded it altogether. Hirshleifer was a traditional “model-builder.” So they each made contributions in their own way. Is one way better than the other? If so, why? Could Hirshleifer have dispensed with all the math and graphs and just used English?
I can read Hayek’s The Use of Knowledge in Society and get an idea about how markets work. I can also get a sense of how markets work with a supply and demand graph. Are they substitutes or complements?
Is math ever useful? Why or why not?
Is the English language ever limited? If so, in what ways are these limitations different from those of math? If not, why not?
My sympathies are broadly with Arnold in that I think we are a bit too mathy, or at least a bit too resistant to arguments/intuitions that are difficult to write in mathematical language.
However, I do think that it would have been difficult for Hirshleifer to (i) discover, and (ii) convey some of the results in (say) his economics of conflict research if he did not use math. Maybe I’m doing him a disservice.
The problem or concern with the mathematization of economics is not new. If my memory does not fail me, there was a committee or something at the American Economic Association studying this issue (early 90s, I think). For example, that committee was dismayed that students came out of grad school knowing how to solve a complicated general equilibrium model, but those same students could not answer questions like “what happens to the price of scissors when the price barbers charge increases?”
Having said that… math is a logical system, probably the logical system par excellence. It forces you to write down your premises clearly, and draw conclusions in a logical way from those premises.
Verbal intuition is nice. But how do you know that you are not committing a logical error? Math helps you check for that.
I am not defending the mathematization. I am saying that math has its uses, and of course, since it can be used, it can be abused as well. Just pick up a copy of JET (Journal of Economic Theory) and you will see.
“But I do not think in terms of mathematical optimization. Instead, I think in terms of a dynamic process of trial and error.”
I like your style.
Speaking of Alchian, one of his earliest papers was on trial and error and evolution. The solution to a problem is constrained optimization. The agents need not actually do the math. They experiment, the environment weeds out the inefficient via natural selection and competition, and the winner is the agent doing what the math describes. The math describes the final equilibrium without any agent doing the math. Trial-and-error learning complements optimization; it does not replace it.
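A minimal illustration of that point, with invented numbers: firms pick an output level more or less at random, the least profitable half exits each period, and the survivors end up clustered near the analytic optimum that none of them ever computed.

```python
# Toy selection dynamics in the spirit of Alchian: firms choose an output level q,
# profit is P*q - C*q^2, the less profitable half exits each generation, and
# entrants imitate survivors with some noise. Parameter values are invented.
import random

random.seed(0)
P, C = 10.0, 1.0                        # analytic optimum is q* = P / (2*C) = 5.0

def profit(q):
    return P * q - C * q * q

firms = [random.uniform(0.0, 10.0) for _ in range(200)]
for generation in range(200):
    firms.sort(key=profit, reverse=True)
    survivors = firms[:100]             # the less profitable half goes out of business
    entrants = [max(0.0, random.choice(survivors) + random.gauss(0.0, 0.2))
                for _ in range(100)]    # imperfect imitation of the survivors
    firms = survivors + entrants

average_q = sum(firms) / len(firms)
print(f"average output after selection ~ {average_q:.2f}; analytic optimum is {P / (2 * C):.2f}")
```

The constrained-optimization answer, q* = P / (2C), is what the selection process converges to; the math describes the equilibrium, not the agents’ reasoning.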
This gets back to Milton Friedman’s famous (or infamous) ‘as if’ argument from 63 years ago. Sometimes the ‘as if’ logic is legitimate, and sometimes it’s not, and it seems to depend a lot on the scale of the situations and decisions being modeled. But in this case it seems plausibly valid to me.
Let’s say you wanted to know the ideal shape or optimal curvature for a claw, talon, or piercing canine or incisor. Looking at images like this shows us that nature has been rediscovering the solution for at least a hundred million years. It has been doing so in an evolutionary context with (1) lots of variations, (2) chaotic trial and error, (3) harsh competition, (4) preferential selection of closer approximations to the ideal.
But the thing is, there really is a kind of ideal curve for the purposes of a rigid object acting like a claw (again, as evidenced by all that convergent evolution). And it’s the kind of thing someone with some basic familiarity with engineering-level calculus can solve for themselves.
So, let’s say you were looking at the biological landscape during the very beginnings of the emergence of pre-claws. You could solve your equations based on your physical models and guess that, most likely and eventually, the claw-like appendages would approach the ideal curve. You would be right, and you would be pursuing a valid ‘as if’ approach.
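For what it is worth, the curve usually cited for claws, talons, horns, and shells (going back at least to D’Arcy Thompson) is the logarithmic, or equiangular, spiral, which in polar coordinates is

$$r(\theta) = a\, e^{k\theta},$$

a curve that cuts every radius at the same angle. Growth added at the edge of a rigid structure at a constant relative rate reproduces the same shape at a larger scale, which is roughly the engineering-calculus argument gestured at above.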
Now, it doesn’t strike me as completely crazy to stretch the analogy to microeconomic business decisions.
Just like billiards players aren’t really solving problems in mechanics, or dogs aren’t really computing ballistics curves to leap to intercept a Frisbee, or farmers aren’t really solving constrained optimization problems using Lagrange multipliers, you really can get pretty far by assuming you’ll eventually observe behaviors ‘as if’ they are, because deviations will gradually get weeded out of the mix in favor of more adaptive / competitive alternatives.
This is probably the best response I’ve ever seen to Friedman’s pool player analogy:
http://ageconsearch.umn.edu/bitstream/130654/2/RichardLevins.pdf
This raises a common issue. We often assume that others think like we do: that we may differ in knowledge and values but hold similar models and similar goals, so that if we can just share what we know, and discuss and debate the issues, we can convince and persuade others, arriving at common knowledge and, if we had common values, at similar conclusions about how to achieve those goals. This is likely wrong. We probably differ as much in models as in anything else, and often these models are implicit and unrecognized even to ourselves. When they are revealed, what we believe to be intuitively obvious will not only fail to be obvious to others, it may not even be believed, because they will have their own, far different models.