James R. Barth and Stephen Matteo Miller write,
Testing whether it is good policy to increase bank capital requirements from 4 percent to 15 percent requires calculating and comparing the benefits and costs of such a change. Across all tested cases, it becomes clear that the benefits of increasing the capital ratio from 4 percent to 15 percent equal or exceed the costs.
This is an interesting example to discuss.
1. I am very confident that I could find problems with their methodology. This is an area in which empirical analysis is much less definitive than the authors suggest. I would say that their abstract is an example of lack of humility.
2. Nonetheless, I am very sympathetic to their conclusion.
3. In fact, many economists, left and right, are sympathetic to their conclusion. It would be hard to find a prestigious academic economist who opposes raising capital requirements for banks above current levels. Unless these guys count.
4. But I bet that in fact capital requirements for banks will remain low, almost surely with obscure loopholes that make them even lower than the stated levels. It would not surprise me to find that capital requirements are so low that they are not binding, meaning that many banks will maintain capital ratios well above the minimum.
5. Speaking of my opinions, in Specialization and Trade I claim that government intervention in markets generally consists of subsidizing demand and restricting supply. This is inconsistent with any optimal intervention to address market failure.
6. Another presumption of mine is that housing policy will be dysfunctional. In addition to subsidizing demand and restricting supply, it will discourage saving and instead encourage indebtedness.
My claims in (4)-(6) might fall under the heading of “empirical public policy.” That is, what sorts of public policies can we expect? These questions are under-researched. On the other hand, economists over-research the topics of market failure and optimal policy solutions.
Implicit in this research imbalance is a very optimistic view of government intervention. It helps ingratiate economists with people in power. In effect, the economist says to the politician, “You are a wonderful public servant. I, the wise technocrat, am here to help you in your benevolent endeavors.”
Thus, the empirical policy economist is both obsequious and self-flattering. What gets lost is the opportunity to provide the public with a realistic comparison between the political process and the market process.
The fiction is that there is a market process at all. There is only a political process, so there is nothing realistic to compare.
An analogy:
In a perfect world where everyone is peaceful and law-abiding, the hiring of a security guard is a tremendous waste of resources. Someone is being paid literally to stand around and do nothing, and someone else’s labor has to cover that pay. Between the direct cost and the opportunity cost, the productive capacity of roughly 1.5 to 2 people is wasted.
Of course we don’t live in that world, we live in the one in which there are indeed thieves and thugs. Conditional on a baseline loss/damage/theft rate, hiring a security guard becomes eminently rational. The presence of a security guard at, say, a jewelry store, only has to prevent theft at the rate of about 1 incident every few months or so to become net beneficial to the store. In some sense this is still massively wasteful of human capital, but the blame belongs to the thieves who make it a rational waste, not to the store behaving in a conditionally rational way.
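As a rough break-even sketch of that claim, here is a minimal calculation in Python; the guard salary and average loss per theft are invented assumptions, not figures from the original.

```python
# Hypothetical break-even arithmetic for the jewelry-store guard.
annual_guard_cost = 50_000    # assumed fully loaded cost of one guard per year
avg_loss_per_theft = 20_000   # assumed average loss per prevented incident

breakeven_incidents_per_year = annual_guard_cost / avg_loss_per_theft
print(breakeven_incidents_per_year)        # 2.5 prevented thefts per year
print(12 / breakeven_incidents_per_year)   # i.e., roughly one every 4.8 months
```

Under those made-up numbers, preventing about one theft every few months is enough for the guard to pay for himself, which is the sense in which the hire is conditionally rational.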
Everything gets massively more complicated the second insurance gets involved. With no insurance, the store owner decides how much security to pay for based on local risk factors he probably knows pretty well, and bears the costs of any mis-estimation. As soon as he insures his merchandise, though, he has the following incentives: 1) skimp on security costs, since the insurance co bears the cost of theft and the owner keeps the benefit of lower costs, and 2) systematically misrepresent the risk factors to the insurance company in order to keep his premiums low. Naturally, the insurance co isn’t stupid and knows these things, so it will insist on mitigating stipulations: 1) the store will still be required to hire security guards, 2) the insurer will come up with its own independent risk assessment upon which to base the premiums, and 3) it will only offer partial insurance (via a deductible or partial-reimbursement formula) to limit the harder-to-define areas that moral hazard might affect.
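To make the “deductible or partial-reimbursement formula” concrete, here is a minimal sketch in Python. The function name and the dollar amounts are illustrative assumptions, not terms from any actual policy.

```python
def insurer_payout(loss, deductible, coinsurance_rate):
    # Insurer reimburses a fixed share of the loss above the deductible;
    # the store owner bears the deductible and the remaining share himself.
    covered = max(loss - deductible, 0.0)
    return covered * coinsurance_rate

# Hypothetical example: a $50,000 theft, $10,000 deductible, 80% reimbursement above it.
loss = 50_000
payout = insurer_payout(loss, deductible=10_000, coinsurance_rate=0.80)
print(payout)          # 32000.0 paid by the insurer
print(loss - payout)   # 18000.0 borne by the owner, which is what limits his moral hazard
```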
Here’s the thing: there’s a lot of effort in that insurance process that is wasteful in a way analogous to how the security guard is wasteful. It takes labor to collect and analyze the data to come up with the risk assessment (which probably will be marginally inferior to the store owner’s judgement), more labor to design the deductible/reimbursement structure, and since the insurance company’s incentives are the reverse of the owner’s, it’s likely it will mandate more security guards than would be optimal. Last, the insurance co will have to use resources to monitor the owner’s compliance with the contractual stipulations. Given the position of the insurance co and the incentives of the store owner, it is obviously conditionally rational for the insurance co to act this way, but in a perfect-honesty, perfect-information world it would be superfluous and tremendously wasteful.
Since the store owner is ultimately the insurance co’s customer, one way or another he bears all those costs. If those costs get high enough, it can become better to just go with a high-deductible, catastrophe-only type of coverage that limits his own moral hazard and keeps the insurance co out of the business of monitoring him.
That’s basically the situation over bank capital. High levels of bank capital are wasteful in the way a security guard is wasteful: if money is ‘real’, it represents a claim to real resources, and having lots of money sitting around as bank equity capital represents a corresponding amount of real resources essentially left idle. But like the security guard, that is eminently rational given the inherent riskiness and uncertainty of the world as it actually exists.
Bank capital actually wears two hats in this analogy: in addition to being ‘security guard’ waste, it is also indirectly a stand-in for the deductible level of the insurance policy.
There are two basic insurance frameworks competing here: the low-deductible / high-intrusiveness model on which Dodd-Frank is premised, or the high-deductible / low-intrusiveness model that economists generally prefer and that underpins the reforms Hensarling wants to make. Empirically comparing the two is, in theory (yes, I get the irony of saying “empirically, in theory…”), just a matter of comparing the various forms of waste involved and picking whichever minimizes them. The problem is that there’s no good way to quantify the costs of sub-optimal business decisions imposed by the high-intrusiveness regime, since they are a form of opportunity cost or deadweight loss, the size of which is inherently indeterminate.
Bank capital isn’t idle. It funds assets on the bank’s balance sheet. A = L + E.
High bank capital requirements reduce the bank’s ability to create new loans. Suppose a bank has $100 billion in assets and $15 billion in capital/equity for a 15% ratio. In order to grow its assets, say to $115 billion, the bank also needs to grow capital by $2.25 billion in order to maintain the 15% ratio. Without the 15% requirement, the bank could simply make another $15 billion of loans on the same $15 billion of capital, allowing the ratio to fall to 13%.
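For what it’s worth, here is a minimal sketch of that arithmetic in Python, using the same illustrative figures:

```python
# Balance-sheet identity: assets = liabilities + equity (A = L + E).
assets = 100e9       # $100 billion in assets
capital = 15e9       # $15 billion in equity capital -> 15% ratio
required_ratio = 0.15

new_assets = 115e9   # grow the balance sheet to $115 billion
required_capital = new_assets * required_ratio
print(required_capital - capital)   # 2.25e9: additional capital needed to stay at 15%
print(capital / new_assets)         # ~0.1304: the ratio if no new capital is raised
```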
It’s only a real problem if the economy really needs the extra credit but banks are capital constrained and cannot provide it.
Should they not say “calculating” or “modeling” instead of “testing”?
Should capital requirements be a single number, or should they be a function? A function of bank size, for example?
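One way to picture the “function” alternative is a sliding scale keyed to bank size. The thresholds and ratios below are invented purely for illustration and do not correspond to any actual rule.

```python
def required_capital_ratio(total_assets_in_billions):
    # Hypothetical sliding scale: larger banks face higher minimum ratios.
    if total_assets_in_billions < 50:
        return 0.08
    elif total_assets_in_billions < 250:
        return 0.10
    else:
        return 0.15

for size in (10, 100, 500):
    print(size, required_capital_ratio(size))   # 0.08, 0.10, 0.15 respectively
```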
Point #4 suggests a trial-and-error approach to those public policy problems that are susceptible to it. If bank capital requirements are non-binding, then raise them in small increments, say 1 percentage point per year, until they become just binding, then cut them 1 point from there. Since conditions change, run the experiment again from time to time. This approach requires relatively little analysis and few assumptions. But I do admit it is a naive approach, if mathematically parsimonious.
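Here is a toy simulation of that trial-and-error rule. It assumes, purely for illustration, that banks would hold 12 percent capital even with no requirement, and that a cushion of half a percentage point or less counts as “binding.”

```python
PREFERRED_RATIO = 0.12   # assumed ratio banks would hold voluntarily (illustrative)

def observed_ratio(requirement):
    # Banks hold whichever is higher: their preferred ratio or the legal minimum.
    return max(PREFERRED_RATIO, requirement)

requirement = 0.04
while observed_ratio(requirement) - requirement > 0.005:   # still non-binding
    requirement += 0.01                                     # raise 1 point per year
requirement -= 0.01                                         # just binding, so back off 1 point
print(round(requirement, 2))   # settles at 0.11 under these made-up assumptions
```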