Tobias Adrian and Nina Boyarchenko write,
Value-at-Risk constraints were incorporated in the Basel II capital framework, which was adopted by major security broker-dealers in the United States—the investment banks—in 2004. Thus, capital constraints are imposed by regulation. In our staff report, we embed the risk-based capital constraint in a model with three sectors: a production sector (firms), a financial intermediary sector, and a household sector. Intermediaries serve two functions: 1) they create new production capacity through investment in the productive sector, and 2) they provide risk-bearing capacity to the households by accumulating wealth through retained earnings. The tightness of the capital constraint—measured by the maximal allowed ratio between intermediary leverage and one over the VaR on the intermediary’s assets—thus affects household welfare. When this ratio is decreased, the intermediaries are more restricted in their risk-taking and can therefore finance less investment. At the same time, since intermediaries take on less leverage and less risk, the systemic risk of the intermediary sector decreases. Accordingly, there is a trade-off between the amount of risk-taking and the price of credit in the real economy.
Pointer from Mark Thoma.
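To fix notation (mine, not necessarily the paper's): write θ for intermediary leverage and VaR for the value-at-risk per dollar of assets. The constraint described in the excerpt caps the ratio of leverage to one over VaR,

\[
\frac{\theta}{1/\mathrm{VaR}} \;=\; \theta \cdot \mathrm{VaR} \;\le\; \alpha ,
\]

so a lower α means less leverage per unit of risk, less investment financed, and less systemic risk, which is the trade-off the authors describe.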
Using value-at-risk to regulate capital is one of the worst ideas ever. First, it assumes a normal distribution of returns, which is not valid far from the mean. Second, as the authors point out, it tends to be procyclical. Third, it is not a measure of the size of the loss in a bad scenario; instead, it is a measure of the size of the loss in a scenario that is just a bit better than a bad scenario.
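To make that third point concrete, here is a minimal numpy sketch (my own illustration, not anything from the staff report). Two hypothetical P&L distributions are calibrated to report the same 99 percent VaR, yet the average loss in the scenarios beyond that quantile differs substantially.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
level = 0.99

# Hypothetical daily P&L (profits positive, losses negative) for two portfolios.
thin_tail = rng.normal(size=N)            # normally distributed returns
fat_tail = rng.standard_t(df=3, size=N)   # heavy-tailed returns

# Rescale the heavy-tailed portfolio so both report the SAME 99% VaR.
fat_tail *= np.quantile(thin_tail, 1 - level) / np.quantile(fat_tail, 1 - level)

def var(pnl, level=0.99):
    # VaR is a single quantile: the loss exceeded with probability 1 - level.
    return -np.quantile(pnl, 1 - level)

def expected_shortfall(pnl, level=0.99):
    # Average loss in the scenarios WORSE than the VaR threshold.
    cutoff = np.quantile(pnl, 1 - level)
    return -pnl[pnl <= cutoff].mean()

for name, pnl in [("normal", thin_tail), ("heavy-tailed", fat_tail)]:
    print(f"{name:12s} VaR(99%) = {var(pnl, level):.2f}   "
          f"ES(99%) = {expected_shortfall(pnl, level):.2f}")
# By construction both portfolios report the same VaR; the losses beyond
# that quantile -- the "bad scenarios" VaR never looks at -- do not match.
```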
A better approach would be to spell out a specific scenario: an x percent drop in house prices, or a y percent decrease in bond prices, or something along those lines.
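For instance (all exposures, shocks, and capital figures below are made up for illustration):

```python
# A spelled-out scenario applied to a hypothetical balance sheet ($ millions).
exposures = {                 # dollar exposure to each risk factor
    "house_prices":  500,
    "bond_prices":   300,
    "equity_prices": 200,
}
scenario = {                  # the specified scenario: x% drop in house prices, etc.
    "house_prices":  -0.20,
    "bond_prices":   -0.10,
    "equity_prices": -0.30,
}
capital = 150                 # equity capital

loss = -sum(exposures[k] * scenario[k] for k in exposures)
print(f"Scenario loss: ${loss:.0f}mm; capital covers it: {loss <= capital}")
```

The appeal of this approach is that the shock is stated explicitly, rather than buried in each firm's distributional assumptions.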
VaR generally does use normal distributions, but it doesn’t have to. You can use whatever copulas and marginal distributions you like.
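For instance, a Monte Carlo VaR under a Gaussian copula with Student-t marginals might look like the following sketch (parameters are purely illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Gaussian copula: draw correlated normals, map them to uniforms...
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)
u = stats.norm.cdf(z)

# ...then push the uniforms through heavy-tailed Student-t marginals.
returns = np.column_stack([
    0.01 * stats.t.ppf(u[:, 0], df=4),   # risk factor 1: 1% scale, fat tails
    0.02 * stats.t.ppf(u[:, 1], df=6),   # risk factor 2: 2% scale, fat tails
])

weights = np.array([0.5, 0.5])           # portfolio weights
pnl = returns @ weights
var_99 = -np.quantile(pnl, 0.01)         # 99% VaR of the simulated P&L
print(f"99% Monte Carlo VaR: {var_99:.4f}")
```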
Or maybe governments shouldn’t be specifying risk limits at all, and should just let firms succeed or fail…
VaR’s use in private industry preceded any regulatory reliance upon it. In that sense it is a risk tool that has passed some market tests.
I second OneEyedMan’s remarks. VaR models need not assume a normal distribution. I’d be surprised if most of the systemically important institutions (the ones required to use the “advanced approaches” under Basel II) haven’t switched to heavy-tailed distributions for most of their VaR models.
By the way, the “better approach” of specifying specific scenarios is what the Fed does in its annual stress tests (CCAR). Stress tests do have their advantages, but they are no panacea.
VaR is perfectly fine if used by the private sector. Why? Whether firms use lognormal or any other distributions, the fact is that they rise and fall based upon the competence of their analyses and execution in the marketplace. There are thousands of variations, and the quality of their approaches is discovered in the passage of time.
The government requiring firms to operate using certain mandated algorithms is a recipe for eventual failure (preceded by unintended consequences).
I was not suggesting it is perfect just because the private sector likes it. I was explaining that VaR isn’t in use solely because of regulatory mandates like Basel II and III.
@OneEyedMan: You are correct, but Value-at-Risk (whatever distribution is used) is agnostic about how bad things will get if they are bad.
I thought it was more common today to use Expected Shortfall or Conditional Value-at-Risk, measures that are “coherent” (in the sense of the axioms proposed by Artzner et al.).
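(For reference, expected shortfall at confidence level α is the average loss conditional on breaching the VaR at that level,

\[
\mathrm{ES}_{\alpha} \;=\; \mathbb{E}\left[\, L \;\middle|\; L \ge \mathrm{VaR}_{\alpha} \,\right],
\]

which is exactly the “how bad is bad” information that a plain VaR quantile leaves out.)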
As Dr. Kling points out, V@R doesn’t help much when it comes to dependence. But then again, neither do copulas (another popular industry tool).