Relative Wages

Felix Salmon finds some interesting charts from something called the National Employment Law Project.

They looked at the annual Occupational Employment Statistics for three years — 2007, 2009 and 2012 — and created a list of wages for 785 different occupations. They then split those occupations into five quintiles, according to income; the lowest quintile made $9.49/hr, on average, last year, while the highest quintile averaged $40.23/hr.

As you go down the charts, you can see that until you get to the fourth and fifth quintiles, most jobs fall below the green lines — which means that they’re seeing their real wages fall. You can also see the commodification of low-wage jobs in the number of occupations in the bottom two quintiles: there are just 47 occupations in the bottom quintile, while there are 186 occupations in the top quintile. (Each quintile, of course, includes the same number of total workers.)
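To see how equal-worker quintiles can contain very different numbers of occupations, here is a minimal sketch in Python. The wages, the occupation counts per draw, and the inverse wage-employment relation are all invented for illustration; this is not NELP's data or method.

```python
# Illustrative sketch (not NELP's actual data or method): sort invented
# occupations by wage, cut them into five bins holding equal numbers of
# workers, and count how many occupations land in each bin.
import random

random.seed(0)

# Hypothetical inverse relation: lower-wage occupations employ more people each.
occupations = sorted(
    ({"wage": w, "employment": int(5_000_000 / w)}
     for w in (random.uniform(8, 60) for _ in range(785))),
    key=lambda o: o["wage"],
)

total_workers = sum(o["employment"] for o in occupations)
target = total_workers / 5  # workers per quintile

quintiles, current, running = [], [], 0
for occ in occupations:
    current.append(occ)
    running += occ["employment"]
    if running >= target and len(quintiles) < 4:
        quintiles.append(current)
        current, running = [], 0
quintiles.append(current)

for i, q in enumerate(quintiles, 1):
    workers = sum(o["employment"] for o in q)
    avg = sum(o["wage"] * o["employment"] for o in q) / workers
    print(f"quintile {i}: {len(q):3d} occupations, avg wage ${avg:5.2f}/hr")
```

Because the low-wage bins are filled by a few very large occupations, they close out their share of workers quickly, which is the pattern in the charts.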

Some remarks:

1. I would have preferred that they split the quintiles using 2007 wages rather than 2012 wages. Sorting on end-of-period wages builds in a mechanical correlation between levels and growth rates: an occupation hit by a negative wage shock lands in a lower quintile partly because of that shock. But leave that aside.

2. I would like to see employment data for the various occupations. If employment also fell in the occupations where real wages fell the most, that would suggest that what we are seeing is structural change. In fact, it would suggest that real wages did not fall enough.

3. James Tobin, in a Presidential Address to the American Economic Association over forty years ago, suggested that the Phillips Curve might be explained by downward stickiness of nominal wages when relative wages are in need of adjustment. Raising aggregate demand raises prices, and that in turn helps bring down the real wages in sectors where they otherwise would be too high.
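A minimal numeric illustration of Tobin's mechanism, with my numbers, not his: if a sector's real wage needs to fall but its nominal wage cannot be cut, a little inflation does the adjusting.

```python
# Invented numbers: sector B's real wage needs to fall 5%, but its nominal
# wage cannot be cut. With zero inflation the adjustment never happens;
# with 3% inflation a nominal freeze delivers the real-wage cut quickly.
for inflation in (0.00, 0.03):
    price_level, years = 1.0, 0
    nominal_wage = 20.00          # frozen: downward nominal rigidity
    target_real = 20.00 * 0.95    # real wage that clears sector B's market
    while nominal_wage / price_level > target_real and years < 50:
        price_level *= 1 + inflation
        years += 1
    status = f"{years} years" if years < 50 else "never (50+ years)"
    print(f"inflation {inflation:.0%}: real wage reaches target in {status}")
```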

A Grandparent Effect?

The BBC covers a study that suggests that social status depends on grandparents, not just parents.

“It may work through a number of channels including the inheritance of wealth and property, and may be aided by durable social institutions such as generation-skipping trusts, residential segregation, and other demographic processes.”

Pointer from Jason Collins. He also has more.

My first thought is “mean reversion.” That is, suppose that you have two genetic types–rich and poor, call them R and P. Suppose that R and P each have children. Some of R’s children get unlucky and some of P’s children get lucky. Now the grandchildren of R still carry the R gene, so unless they are unlucky, they will revert to being rich. And conversely for the grandchildren of P. So you could observe a strong grandparent effect, based on mean-reversion and genetics alone.
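Here is a toy simulation of that story, with invented parameters: a binary inherited type plus independent luck each generation. Even among families whose parents had roughly average status, grandparent status predicts grandchild status, with no direct causal channel from grandparents at all.

```python
# Toy version of the mean-reversion story: a persistent inherited "type"
# plus generation-specific luck. A grandparent effect appears even though
# grandparents exert no direct influence on grandchildren.
import random
import statistics

random.seed(1)
N = 100_000

gp, par, child = [], [], []
for _ in range(N):
    g = random.choice((1.0, -1.0))          # inherited type: R (+1) or P (-1)
    gp.append(g + random.gauss(0, 1))       # status = type + luck
    par.append(g + random.gauss(0, 1))
    child.append(g + random.gauss(0, 1))

# Condition on parents with roughly average status; the grandparent's
# status still predicts the child's, because it proxies the hidden type.
mid = [i for i in range(N) if abs(par[i]) < 0.25]
hi = [child[i] for i in mid if gp[i] > 0]
lo = [child[i] for i in mid if gp[i] < 0]
print(f"families with average-status parents: {len(mid)}")
print(f"mean child status, high-status grandparent: {statistics.mean(hi):+.3f}")
print(f"mean child status, low-status grandparent:  {statistics.mean(lo):+.3f}")
```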

But I have not read the paper.

James Hamilton on the Government (off-) Balance Sheet, and me on Scenario Analysis

He writes,

Adding all the off-balance-sheet liabilities together, I calculate total federal off-balance-sheet commitments came to $70.1 T as of 2012, or about 6 times the size of the on-balance-sheet debt. In other words, the budget impact associated with an aging population and other challenges could turn out to have much more significant fiscal consequences than even the mountain of on-balance-sheet debt already accumulated.

When Hamilton presented this paper several weeks ago at Cato, Bob Hall and I had exactly the same reaction. The off-balance-sheet liabilities are contingent liabilities. They often take the form of out-of-the-money options. Think of the Pension Benefit Guaranty Corporation. In some states of the world, it will lose a lot of money, and in other states it will break even or make a profit. To report just one number seems uninformative. The same holds for the government’s portfolio and guarantees of mortgages and mortgage-backed securities. The problem cries out for scenario analysis, in which you present possible values for the key drivers (such as interest rates) and possible outcomes (for, say, the ten-year budget outlook).
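A minimal sketch of what I mean, with invented numbers: a PBGC-style guarantee pays nothing in most states of the world and a great deal in a few, so a single expected value conceals exactly the information a scenario table conveys.

```python
# Invented numbers: a guarantee that behaves like an out-of-the-money
# option. The single expected-cost number hides the shape of the exposure.
scenarios = [
    # (label, probability, guarantee payout in $billions)
    ("boom: sponsors stay solvent",        0.50,    0),
    ("baseline: scattered failures",       0.35,   30),
    ("severe recession: waves of default", 0.15,  400),
]

expected = sum(p * cost for _, p, cost in scenarios)
print(f"single-number summary: expected cost = ${expected:.0f}B\n")
print("scenario table:")
for label, p, cost in scenarios:
    print(f"  {label:38s} p={p:.2f}  cost=${cost:4d}B")
```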

This led to a testy exchange between me and Douglas Holtz-Eakin, who insisted that Congress wants a single number. It so happened that a couple of weeks ago I was scheduled to give an informal talk at the Congressional Budget Office (which Holtz-Eakin once headed) on a topic of my choice. I chose the topic of scenario analysis.

I said that for the purpose of my talk, we would assume that you could talk to Congress like adults. That is, anyone in a position of responsibility at a large financial corporation could understand scenario analysis. If our elected representatives, who oversee trillions of dollars, cannot handle it, then we have some really big problems. (I think, in fact, that this is the case. As an aside, I would love to have someone who thinks government is not too big explain to me why he is not bothered by the fact that you cannot have an adult conversation with the people who are in charge of it.)

So, assuming that you would not be thrown out of the room for engaging in scenario analysis, the question becomes how one should do it. I thought that the more outspoken people at CBO were a bit defensive. They said that in the case of macroeconomic forecasting, for example, they had white papers that considered many scenarios and that they reported a range of possibilities based on those scenarios. My reply was that this was not a particularly helpful way to communicate scenario analysis–it just creates a sort of smeared picture. Instead, for example, I suggested that in textbook macro terms you could look at the effect of fiscal stimulus under a scenario in which the Fed holds interest rates constant, a scenario in which the Fed uses a Taylor rule, and a scenario in which the Fed targets nominal GDP. Showing those three scenarios probably would be educational.
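Here is the arithmetic behind those three scenarios, using a stylized textbook IS curve and policy rule (my parameters, nothing CBO uses): the more aggressively the Fed leans against the output gap, the smaller the measured effect of fiscal stimulus.

```python
# Stylized IS curve Y = A - b*r + m*G with policy rule r = r0 + phi*(Y - Ybar).
# Substituting the rule into the IS curve gives dY/dG = m / (1 + b*phi).
def fiscal_multiplier(phi, b=2.0, m=1.5):
    return m / (1 + b * phi)

for name, phi in [("Fed holds r constant", 0.0),   # no offset
                  ("Taylor rule",          0.5),   # partial offset
                  ("NGDP target",          5.0)]:  # near-complete offset (a crude stand-in)
    print(f"{name:22s}: dY/dG = {fiscal_multiplier(phi):.2f}")
```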

Returning to off-balance-sheet liabilities, key drivers include interest rates, demographics, and the impact of medical technology and practice. I am particularly interested in seeing the effects of interest rates, because I suspect that a rise in interest rates would adversely affect the budget outlook for many of these off-balance-sheet items.

DSGE Models–Blogs vs. Academics

Tyler Cowen writes,

The blogosphere is more likely to criticize DSGE models, whereas the profession is more likely to see such models as providing discipline for any business cycle explanation, Keynesian included.

…On all of these questions my views are closer to those of the specialists in the economics profession.

I count myself as strongly opposed to DSGE models. In my view, macroeconomic models are much more speculative and metaphorical than microeconomic models. Take supply and demand. In microeconomics, I believe that when you draw a supply and demand diagram, you are providing an interesting theoretical description that has empirical use. But “aggregate supply and demand” does neither.

DSGE constrains macroeconomic models to describe a “representative agent” undertaking “dynamic optimization.” This constraint does not make macro models any less speculative or metaphorical. The advocates of DSGE implicitly claim that a certain mathematical approach is both necessary and sufficient to make macro models rigorous. I view that claim as a baloney sandwich.

Math Tests and Mortgage Default

The story is here and at http://blogs.discovermagazine.com/d-brief/?p=1771. The claim is that mortgage borrowers with poor math skills defaulted at a much higher rate than other mortgage borrowers.

Levels of IQ and financial literacy showed no correlation with likelihood to default, but basic math skills did.

I refuse to draw the inference that the reason these folks defaulted was that they misunderstood math. First, we are talking about a sample size of 339 borrowers. That is a very small sample. Second, there are a bunch of explanatory variables that are highly correlated: credit score, IQ, and math score. That makes it much harder to separate the effect of any one of those variables. Someone else using the same data might try slightly different specifications and get very different results, especially in such a small sample.
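A simulated version of that worry, with invented data: three regressors that all proxy the same latent trait, a sample of 339, and a crude linear probability model. The coefficients bounce around, and can flip sign, from one draw to the next.

```python
# Invented data: math score, IQ, and credit score all proxy one latent
# trait ("ability"), which is what actually drives default. With n=339,
# which variable "wins" is unstable across samples.
import numpy as np

rng = np.random.default_rng(2)

def one_sample(n=339):
    ability = rng.normal(size=n)
    math_score = ability + 0.5 * rng.normal(size=n)
    iq = ability + 0.5 * rng.normal(size=n)
    credit = ability + 0.5 * rng.normal(size=n)
    default = (-ability + rng.normal(size=n) > 1.0).astype(float)
    # Linear probability model via least squares (crude, but makes the point).
    X = np.column_stack([np.ones(n), math_score, iq, credit])
    beta, *_ = np.linalg.lstsq(X, default, rcond=None)
    return beta[1:]  # coefficients on math, iq, credit

print("coefficients on (math, iq, credit) across five draws of n=339:")
for _ in range(5):
    print("  " + "  ".join(f"{b:+.3f}" for b in one_sample()))
```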

Do I think that low math skills are correlated with default? Absolutely. Do I believe that there is a high marginal contribution to default of low math skills, conditional on other known factors such as credit score (and IQ, if known)? Not until this sort of study is replicated in other samples using other specifications.

Supply-Side Housing Policy

From the Center for an Urban Future.

An Accessory Dwelling Unit is a small, self-contained residential structure sharing a lot with an existing house. In Seattle, Vancouver and Santa Cruz, legislation was enacted to permit ADUs on sufficiently sized lots in one- and two-family zones. Building regulations were also relaxed to allow formerly illegal subdivisions to be safely brought to code without facing severe fines.

They argue that this could be useful in the outer boroughs of New York City. Overall, our country’s policy on housing is to raise demand (think HUD, Freddie Mac, Fannie Mae, etc.) and restrict supply (think urban zoning laws). We also do that in higher education and in health care. The result is what you would predict, given the laws of supply and demand.
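The textbook arithmetic, with invented parameters: subsidize demand while capping quantity, and the price rises by more than the subsidy while quantity falls.

```python
# Invented linear curves. Demand: P = 100 - Q ; Supply: P = 20 + Q.
# Free market: 100 - Q = 20 + Q  ->  Q = 40, P = 60.
subsidy, cap = 10, 35
# A $10 demand subsidy shifts demand to P = 110 - Q; zoning caps Q at 35.
q = min((110 - 20) / 2, cap)   # unconstrained quantity would be 45
p = 110 - q                    # price read off the subsidized demand curve
print(f"Q = {q:.0f} (vs. 40 free-market), P = {p:.0f} (vs. 60 free-market)")
```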

Manzi on the Oregon Medicaid Study

Russ Roberts draws him out. Much of the focus is on the fact that almost half of the people who won the lottery to obtain Medicaid coverage then did not apply.

What that means, Manzi suggests, is that the group of people who obtained Medicaid coverage, rather than blow it off, may have been different from the control group that lost the lottery. That is, you don’t have two groups–winners and losers. You have three groups–losers of the lottery, winners who took coverage, and winners who did not take coverage. Manzi is saying that one cannot be sure that the winners who took coverage are comparable to the losers, which makes the results difficult to interpret.

Previously, Russ interviewed Austin Frakt on the study.

Productivity Measurement Pessimism

Timothy Taylor reports on a symposium on productivity trends. He quotes Robert Gordon,

I have often posed the following set of choices. Option A is to keep everything invented up until ten years ago, including laptops, Google, Amazon, and Wikipedia, while also keeping running water and indoor toilets. Option B is to keep everything invented up until yesterday, including Facebook, iPhones, and iPads, but give up running water and indoor toilets; one must go outside to take care of one’s needs; one must carry all the water for cooking, cleaning, and bathing in buckets and pails. Often audiences laugh when confronted with the choice between A and B, because the answer seems so obvious.

I think that what this anecdote indicates is that measured productivity is bunk. Gordon’s anecdote suggests that people derive a lot of consumers’ surplus from modern water systems. But this consumers’ surplus does not show up in measures of productivity, either for one hundred years ago or for today.

I am becoming a productivity measurement pessimist. That is, I am becoming pessimistic that what we call “productivity” is anything more than a crude indicator of trends in living standards.

I can imagine coming up with an accurate measure of productivity in soybean output. However, it is difficult to imagine coming up with anything accurate for health care, where we have little idea about what generates value at the margin, or for education, where we have almost no idea at all.

Moreover, the value of many goods and services, including the Internet and modern water systems, is under-estimated because we do not measure consumers’ surplus. Going forward, suppose that researchers come up with a way to prevent or cure Alzheimer’s. The effect on consumers’ surplus would be quite large. The effect on measured productivity? To a first approximation, nil.

In assessing economic progress, productivity may be the best indicator we have. However, we take small differences in measured productivity growth rates way too seriously. On the one hand, it is correct to say that if you extrapolate a difference in productivity growth of 1 or 2 percentage points over thirty years, it accumulates to a big number. But I fear that it is quite possible that the error in measuring productivity growth can exceed 1 or 2 percentage points for thirty years or more. That is, I think it is quite possible to take two thirty-year periods and arrive at a very large estimate of the difference in the rate of growth of living standards that is entirely due to mis-measurement.
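The compounding arithmetic in that last point:

```python
# Extrapolating a 1 or 2 percentage point gap in measured productivity
# growth over thirty years:
for gap in (0.01, 0.02):
    print(f"{gap:.0%} per year for 30 years -> a {(1 + gap) ** 30:.2f}x difference")
```

A persistent measurement error of that size is enough to manufacture (or erase) a 35 to 80 percent difference in estimated living standards between two thirty-year periods.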

The Null Hypothesis in Health Insurance

is that, in the United States, better health insurance produces no difference in health outcomes. Recently, for example, Katherine Baicker, et al., found

This randomized, controlled study [in Oregon] showed that Medicaid coverage generated no significant improvements in measured physical health outcomes in the first 2 years, but it did increase use of health care services, raise rates of diabetes detection and management, lower rates of depression, and reduce financial strain.

Pointer from, well, everyone. All I can say is that this is really separating what David Brooks calls the “detached” from the “engaged.” The latter are making an all-out effort at what I call closing minds on your own side.

Somewhat detached commentary includes Tyler Cowen, Ray Fisman, and Reihan Salam.

Robin Hanson has an even stronger version of the null hypothesis. His version says that differences in health care spending produce no difference in health care outcomes. He and I disagree about how to characterize this result. Let me try to explain how we differ. Let us stipulate that:

1. Some medical procedures improve health, but not in a way that shows up in statistics. For example, if you get your broken arm fixed, you are much better off than not getting it fixed, but this will probably not show up in measured statistics of health outcomes, including longevity.

2. Some medical procedures are a waste (futile care, unwanted care, treatments of non-existent ailments, treatments that do not work, and so on).

3. Some medical procedures have an adverse effect on health.

4. Some medical procedures improve health outcomes, but only with a low probability (e.g., precautionary screening).

5. Some medical procedures definitely improve health outcomes in a measurable way.

Note also that most studies of medical spending are not controlled experiments. In observational studies, including cross-country comparisons, the results tend to be dominated by a sixth factor, namely that health outcomes are determined much more by individual genes and behavior than by medical intervention.

Robin and I agree that (5) is true. The question becomes, how does (5) wash out in the statistics on differences in spending? His view is that there has to be enough (3) to offset (5). My view is that it is mostly that (1), (2), and (4) serve to dilute (5). If I am correct, then researchers should find some quantitative differences in health outcomes, but these differences will not be statistically significant. Out of (bad) habit, they will report this as “no difference in outcomes.” This makes it sound as if they have proven the null hypothesis, when they have merely failed to reject it.

Of course, in a large study (as this was), there may not be much difference between failing to reject the null and proving it. The confidence interval around zero could be small (if someone has access to the paper, you can let me know).
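A back-of-the-envelope version of the dilution argument, with invented numbers: a real benefit confined to a small slice of the treated group produces a positive but statistically insignificant estimate at roughly the Oregon study's scale.

```python
import math

# Invented numbers: category (5) care improves an outcome by 0.4 standard
# deviations, but only 5% of the treated group gets it; categories (1),
# (2), and (4) contribute nothing measurable.
benefit_sd, helped_share = 0.4, 0.05
n_per_arm = 6000                    # roughly the Oregon study's scale

ate = benefit_sd * helped_share     # diluted average treatment effect, in SDs
se = math.sqrt(2 / n_per_arm)       # SE of a difference in means, unit variance
print(f"diluted effect = {ate:.3f} SD, SE = {se:.3f}, t = {ate / se:.1f}")
print("a real effect, reported as 'no significant difference'")
```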

Type I errors, Type II errors, and Congress

The Wall Street Journal reports,

Lawmakers of both parties questioned Sunday whether law-enforcement officials did enough to monitor the activities of suspected Boston Marathon bomber Tamerlan Tsarnaev before last week’s terrorist attack, given his apparent extremist beliefs.

The failure to stop Tsarnaev was a type I error. However, there are probably hundreds of young men in America with profiles that have at least as many “red flags” as he had, and few, if any, are likely to commit acts of terrorism. One sure bet is that for the next several years we will see a lot more type II errors, in which the FBI monitors innocent people.
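The base-rate arithmetic, with invented numbers:

```python
# Invented numbers: suppose the profile that fit Tsarnaev also fits 1,000
# other young men, of whom 2 would ever attempt an attack.
flagged, actual_threats = 1000, 2

# Monitor everyone who fits the profile (the post-attack pressure):
print(f"monitor all {flagged}: {flagged - actual_threats} innocent people "
      f"surveilled to catch at most {actual_threats}")
# Monitor no one:
print(f"monitor none: {actual_threats} potential attackers go unwatched")
```

There is no setting of the dial that avoids both kinds of error; there is only a choice about which kind to make more often.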

Speaking of type I errors, type II errors, and Congress, I will be testifying at a hearing on mortgage finance on Wednesday morning before the House Committee on Financial Services. Part of what I plan to say:

It is impossible to make mortgage decisions perfectly. Sometimes, you make a reasonable decision to approve a loan, and later the borrower defaults. Sometimes, you make a reasonable decision to deny a loan, and yet the loan would have been repaid. Beyond that, good luck with home prices can make any approval seem reasonable and bad luck with home prices can make any approval seem unreasonable. During the bubble, Congress and regulators beat up on mortgage originators to get them to be less strict. Since then, Congress and regulators have been beating up on mortgage originators to be especially strict. I expect mortgage originators to make mistakes, but the fact is that they do a better job without the “advice” that they get from you.
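A sketch of the luck problem, with invented default rates: the same underwriting standards look prudent or reckless depending on the house-price path, which the originator does not control.

```python
# Invented numbers: two underwriting standards judged under two house-price
# paths. Rising prices bail out even weak loans; falling prices make
# careful approvals look like mistakes.
base_default = {"strict": 0.02, "loose": 0.08}
for path, multiplier in [("prices rising", 0.3), ("prices falling", 3.0)]:
    rates = {k: v * multiplier for k, v in base_default.items()}
    print(f"{path}: strict {rates['strict']:.1%} default, "
          f"loose {rates['loose']:.1%} default")
```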

Here is my talk on type I and type II errors for my housing course.