Hydraulic Modeling

[Irving] Fisher designed a hydrostatic machine to illustrate the economic “‘exchanges’ of a great city” that revealed the ways that the values of individual goods were related to one another. When Fisher adjusted one of the levers, water flowed to affect the general price level of the range of goods. The device resembled a modern-day foosball table but with various cisterns of different shapes and heights representing individual consumers and producers…. A series of levers along the side of the machine altered the flow of water, thus changing the price level not only for an individual but throughout the entire economy. The machine revealed the way in which prices, supply, and consumer demand interrelated. For example, if the price of a good fell (and the level of water rose), more consumers would purchase it, and a new equilibrium would emerge.

The quote is from Fortune Tellers, a historical work by Walter A. Friedman that I received as a review copy. He tries to recover the era of economic forecasting between 1900 and 1940, before the computer and before Keynesian economics.

Social Heterogeneity in Real Wages

From my latest essay.

for middle- and upper-income parents, it is a matter of taste if one chooses to spend a substantial sum to send a child to an elite preschool, or to live in a neighborhood with an elite public school, or to send a child to an elite college. Given the child’s ability, such schooling decisions make relatively little difference at the margin.

The point of the essay is that long-term calculations of “the” real wage assume homogeneity of tastes.

A DY2PVSC Post I Wish I Had Written

From someone who prefers to blog anonymously.

Economics is a science, but it is a very politicized science. The Medicaid study, with its ambiguous results, offered justification for the policy proposals of both supporters and opponents of ACA, for example. Both sides were offering an incomplete picture of the study in this debate, but both sides were also correct in the claims they made, even if they strategically left out inconvenient findings.

Pointer from Tyler Cowen.

Read the whole thing. He is reacting to a column by Raj Chetty, and I had a similar reaction. While proclaiming the scientific virtues of economics, Chetty was sneaking in his own biases, through a selective presentation of results.

The more important point is that we all are tempted to do this, and we need to work hard to resist such temptation. One of the reasons for my occasional DY2PVSC posts (“Did you two people visit the same country?”) is to try to pair up research that supports one side with research that supports the other.

The 2013 Nobel Laureates Fama, Hansen, and Shiller

What they have in common is the “second moment.” In statistics, the first moment of a distribution is the mean, a measure of central tendency. The second moment is the variance, or spread. Politically, their views have a high second moment. If they are asked policy questions during interviews, the differences should be wide.

Shiller is known for looking at “variance bounds” for asset prices. Previously, economists had tested the efficient market hypothesis by looking at mean returns on stocks or bonds. Shiller suggested comparing the variance of stock prices with the variance of discounted dividends. Thus, the second moment. He found that the variance of stock prices was much higher than that of discounted dividends, and this led him to view stock markets as inefficient. This in turn made him a major figure in behavioral finance.
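As a rough sketch of the logic (not Shiller's actual test), one can simulate a simple present-value model and check that the rational price is less volatile than the discounted stream of realized dividends. The parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta, phi = 10_000, 0.95, 0.9

# AR(1) dividend process: d_{t+1} = phi * d_t + eps_{t+1}
eps = rng.normal(size=T)
d = np.zeros(T)
for t in range(1, T):
    d[t] = phi * d[t - 1] + eps[t]

# Rational price under constant discounting: p_t = sum_j beta^j E_t[d_{t+j}],
# which for an AR(1) collapses to d_t * beta*phi / (1 - beta*phi)
p = d * beta * phi / (1 - beta * phi)

# Ex-post "perfect foresight" price: discount the dividends actually realized
p_star = np.zeros(T)
for t in range(T - 2, -1, -1):
    p_star[t] = beta * (d[t + 1] + p_star[t + 1])

# The variance bound: the rational price should be the LESS volatile series
# (drop the end of the sample, where p_star is distorted by truncation)
print(np.var(p[: T // 2]) <= np.var(p_star[: T // 2]))
```

Shiller's finding was that actual stock prices violate the analogue of this inequality.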

Fama was the original advocate for efficient markets. However, he was an empiricist. He verified an important implication of Shiller’s work: if stock prices vary too much, stock returns should exhibit long-run “mean reversion.” Basically, when the ratio of stock prices to a smoothed path of dividends is high, you should sell. Conversely, when the ratio is low, you should buy. Mean reversion also says something about the properties of the second moment.
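A stylized simulation (with invented parameters, not Fama's data) of how a mean-reverting price/dividend ratio implies predictable returns:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 20_000

# Mean-reverting (AR(1)) log price/dividend ratio
pd_ratio = np.zeros(T)
for t in range(1, T):
    pd_ratio[t] = 0.95 * pd_ratio[t - 1] + rng.normal()

# Stylized next-period return: driven by the change in the ratio plus noise
ret = np.diff(pd_ratio) + rng.normal(scale=0.5, size=T - 1)

# A high ratio today predicts low returns ahead -- the "sell high" signal
corr = np.corrcoef(pd_ratio[:-1], ret)[0, 1]
print(corr < 0)
```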

Finally, Hansen is the developer of the “generalized method-of-moments” estimator. This is a technique that is most useful if you have a theory that has implications for more than one moment of the distribution. For example, Shiller’s work shows that the efficient markets hypothesis has implications for both the first and second moment (mean and variance) of stock market returns.
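In the exactly identified case, GMM reduces to the classical method of moments: choose the parameter values that set the sample moment conditions to zero. A minimal sketch with arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)  # simulated data

# Two moment conditions: E[x - mu] = 0 and E[(x - mu)^2 - sigma2] = 0.
# With as many moments as parameters, solving the sample versions exactly
# recovers the familiar estimators.
mu_hat = x.mean()
sigma2_hat = ((x - mu_hat) ** 2).mean()

print(mu_hat, sigma2_hat)  # should land near 2.0 and 9.0
```

Hansen's contribution covers the overidentified case, where there are more moment conditions than parameters and the estimator minimizes a weighted quadratic form in the sample moments.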

Although Tyler and Alex are posting about this Nobel, I think that John Cochrane is likely to offer the best coverage. As of now, Cochrane has written two posts about Fama.

In one post, Cochrane writes,

“efficient markets” became the organizing principle for 30 years of empirical work in financial economics. That empirical work taught us much about the world, and in turn affected the world deeply.

In another post, Cochrane quotes himself,

empirical finance is no longer really devoted to “debating efficient markets,” any more than modern biology debates evolution. We have moved on to other things. I think of most current research as exploring the amazing variety and subtle economics of risk premiums – focusing on the “joint hypothesis” rather than the “informational efficiency” part of Gene’s 1970 essay.

Cochrane’s point that efficient market theory is to finance what evolution is to ecology is worth pondering. I do not think that all economists would agree. Would Shiller?

Some personal notes about Shiller, whom I encountered a few times early in my career.

1. His variance-bounds idea was simultaneously discovered by Stephen LeRoy and Dick Porter of the Fed. The reference for their work is LeRoy and Porter (1981), “The Present-Value Relation: Tests Based on Implied Variance Bounds,” Econometrica, Vol. 49, May, pp. 555-574. Some of the initial follow-up work on the topic cited LeRoy and Porter along with Shiller, but over time their contribution has been largely forgotten.

2. When Shiller’s Journal of Political Economy paper appeared (eventually his American Economic Review paper became more famous), I sent in a criticism. I argued that his variance bound was based on actual, realized dividends (or short-term interest rates, because I think that the JPE paper was on long-term bond prices) and that in fact ex ante forecasted dividends did not have such a bound. Remember, this was about 1980, and his test was showing inefficiency of bond prices because short-term interest rates in the 1970s were far, far higher than would have been implied by long-term bond prices in the late 1960s. I thought that was a swindle.

He had the JPE reject my criticism on the grounds that all I was doing was arguing that the distribution of dividends (or short-term interest rates) is unstable, and that if you use a long enough data series, that takes care of such instability. I did not agree with his view, and I still don’t, but there was nothing I could do about it.

3. When I was at Freddie Mac, we wanted to use the Case-Shiller-Weiss repeat-sales house price index as a check against fraudulent appraisals. (The index measures house price inflation in an area by looking at the prices recorded when the same house is sold in two different years.) I contacted Shiller, who referred me to Weiss. Weiss was arrogant and unpleasant during negotiations, and we gave up and decided to create our own index using the same methodology and our loan database. Weiss was so difficult that we actually had an easier time cooperating with Fannie on pooling our data, even though they had much more data at the time because they bought more loans than we did. Eventually, our regulator took over the process of maintaining our repeat-sales price index.

4. Here is my review of Shiller’s book on the sub-prime crisis. Here is my review of Animal Spirits, which Shiller co-wrote with George Akerlof.
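The repeat-sales methodology mentioned in point 3 can be sketched as a regression of log price ratios on year dummies (the Bailey-Muth-Nourse setup; the index values and sample sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
years = 6
true_log_index = np.cumsum([0.0, 0.03, 0.05, 0.02, 0.04, 0.01])  # hypothetical

# Each observation: one house sold in year y1 and sold again in year y2
rows, lhs = [], []
for _ in range(5000):
    y1, y2 = sorted(rng.choice(years, size=2, replace=False))
    lhs.append(true_log_index[y2] - true_log_index[y1] + rng.normal(scale=0.05))
    row = np.zeros(years - 1)  # year 0 is the base period
    if y2 > 0:
        row[y2 - 1] += 1.0
    if y1 > 0:
        row[y1 - 1] -= 1.0
    rows.append(row)

X, y = np.array(rows), np.array(lhs)
est = np.linalg.lstsq(X, y, rcond=None)[0]  # recovers the log index vs. year 0
print(np.round(est, 2))
```

Because each house serves as its own control, the method avoids having to adjust for differences in quality across houses.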

Finally, note that Russ Roberts had podcasts with Fama on finance and Shiller on housing.

Falkenstein on Happiness Research

He makes three interesting points. (Pointer from Jason Collins)

I note many writers I otherwise admire, usually libertarian leaning, are quite averse to the Easterlin conclusion, thinking it will lead us to adopt luddite policies because growth would not matter in such a world

I am one of those libertarian writers who is averse to happiness research, but my aversion holds regardless of the conclusions reached. Happiness research embodies the claim that you, the researcher (I am not referring here to Falkenstein), can know more than me, the subject, about what gives me happiness. I believe that claim is false. Further, from a libertarian perspective, I believe that claim almost surely will lead you to devalue my liberty.

When an economist tells you a symmetric ovoid contains a highly significant trend via the power of statistics, don’t believe them: real effects pass the ocular test of statistical significance (ie, it should look like a pattern).

See his charts to understand his point. Putting Falkenstein’s point in more colloquial language, I would say that when the data consists of a blob of points, just because the computer can draw a line of best fit does not mean that you have demonstrated the existence of a meaningful linear relationship.
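A quick illustration of the point, using simulated noise: with enough observations, a nearly invisible slope is “statistically significant,” even though it explains essentially nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)  # a "symmetric ovoid": almost pure noise

# OLS slope, t-statistic, and R^2 by hand
slope = np.cov(x, y)[0, 1] / np.var(x)
resid = y - slope * x
t_stat = slope / np.sqrt(np.var(resid) / (n * np.var(x)))
r2 = 1 - np.var(resid) / np.var(y)

print(abs(t_stat) > 1.96)  # a "highly significant" trend...
print(r2 < 0.001)          # ...that explains almost nothing
```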

evolution favors a relative utility function as opposed to the standard absolute utility function, and the evidence for this is found in ethology, anthropology, and neurology. Economists from Adam Smith, Karl Marx, Thorstein Veblen, and even Keynes focused on status, the societal relative position, as a motivating force in individual lives

Relative Wages

Felix Salmon finds some interesting charts, from something called the National Employment Law Project.

They looked at the annual Occupational and Employment Statistics for three years — 2007, 2009 and 2012 — and created a list of wages for 785 different occupations. They then split those occupations into five quintiles, according to income; the lowest quintile made $9.49/hr, on average, last year, while the highest quintile averaged $40.23/hr.

As you go down the charts, you can see that until you get to the fourth and fifth quintiles, most jobs fall below the green lines — which means that they’re seeing their real wages fall. You can also see the commodification of low-wage jobs in the number of occupations in the bottom two quintiles: there are just 47 occupations in the bottom quintile, while there are 186 occupations in the top quintile. (Each quintile, of course, includes the same number of total workers.)

Some remarks:

1. I would have preferred that they split the quintiles in 2007, rather than 2012. That way, you reduce the likelihood of accidental correlation between levels and growth rates. But leave that aside.

2. I would like to see employment data for the various occupations. If employment also fell in the occupations where real wages fell the most, that would suggest that what we are seeing is structural change. In fact, it would suggest that real wages did not fall enough.

3. James Tobin, in a Presidential Address to the American Economic Association over forty years ago, suggested that the Phillips Curve might be explained by downward stickiness of nominal wages when relative wages are in need of adjustment. Raising aggregate demand raises prices, and that in turn helps bring down the real wages in sectors where they otherwise would be too high.

A Grandparent Effect?

The BBC covers a study that suggests that social status depends on grandparents, not just parents.

“It may work through a number of channels including the inheritance of wealth and property, and may be aided by durable social institutions such as generation-skipping trusts, residential segregation, and other demographic processes.”

Pointer from Jason Collins. He also has more.

My first thought is “mean reversion.” That is, suppose that you have two genetic types–rich and poor, call them R and P. Suppose that R and P each have children. Some of R’s children get unlucky and some of P’s children get lucky. Now the grandchildren of R still carry the R gene, so unless they are unlucky, they will revert to being rich. And conversely for the grandchildren of P. So you could observe a strong grandparent effect, based on mean-reversion and genetics alone.
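A minimal simulation of that story, with a latent type passed down and pure luck on top of it: a regression of grandchild status on parent and grandparent status picks up a positive grandparent coefficient even though the grandparent has no direct causal effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Latent type (R = +1, P = -1) passed down unchanged; status = type + luck
g = rng.choice([1.0, -1.0], size=n)
s_grand = g + rng.normal(size=n)
s_parent = g + rng.normal(size=n)
s_child = g + rng.normal(size=n)

# Regress child status on parent AND grandparent status
X = np.column_stack([np.ones(n), s_parent, s_grand])
b = np.linalg.lstsq(X, s_child, rcond=None)[0]

# Grandparent coefficient comes out solidly positive (about 1/3 with
# these variances), purely because both generations proxy the latent type
print(b[2])
```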

But I have not read the paper.

Math Tests and Mortgage Default

The story is here and at http://blogs.discovermagazine.com/d-brief/?p=1771. The claim is that mortgage borrowers with poor math skills defaulted at a much higher rate than other mortgage borrowers.

Levels of IQ and financial literacy showed no correlation with likelihood to default, but basic math skills did.

I refuse to draw the inference that the reason these folks defaulted was that they misunderstood math. First, we are talking about a sample size of 339 borrowers. That is a very small sample. Second, there are a bunch of explanatory variables that are highly correlated: credit score, IQ, and math score. That makes it much harder to separate the effect of any one of those variables. Someone else using the same data might try slightly different specifications and get very different results. Especially in such a small sample.
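The collinearity point can be illustrated by simulation: with n = 339 and highly correlated predictors (the correlation structure here is an assumption for illustration, not from the study), the estimated coefficient on the math score bounces around far more across samples than it would with uncorrelated predictors.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 339, 2000  # sample size from the study; reps are arbitrary

def math_coef(rho):
    """OLS coefficient on the math score when predictors share a common factor."""
    f = rng.normal(size=n)
    credit = rho * f + np.sqrt(1 - rho**2) * rng.normal(size=n)
    iq = rho * f + np.sqrt(1 - rho**2) * rng.normal(size=n)
    math_score = rho * f + np.sqrt(1 - rho**2) * rng.normal(size=n)
    outcome = 0.3 * credit + 0.3 * iq + 0.3 * math_score + rng.normal(size=n)
    X = np.column_stack([np.ones(n), credit, iq, math_score])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][3]

spread_corr = np.std([math_coef(0.95) for _ in range(reps)])
spread_uncorr = np.std([math_coef(0.0) for _ in range(reps)])
print(spread_corr > 2 * spread_uncorr)  # collinearity inflates the spread
```

This is why slightly different specifications on the same small sample can produce very different estimated effects.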

Do I think that low math skills are correlated with default? Absolutely. Do I believe that there is a high marginal contribution to default of low math skills, conditional on other known factors such as credit score (and IQ, if known)? Not until this sort of study is replicated in other samples using other specifications.

Manzi on the Oregon Medicaid Study

Russ Roberts draws him out. Much of the focus is on the fact that almost half of the people who won the lottery to obtain Medicaid coverage then did not apply.

What that means, Manzi suggests, is that the group of people who obtained Medicaid coverage, rather than blow it off, may have been different from the control group that lost the lottery. That is, you don’t have two groups–winners and losers. You have three groups–losers of the lottery, winners who took coverage, and winners who did not take coverage. Manzi is saying that one cannot be sure that the winners who took coverage are comparable to the losers, which makes the results difficult to interpret.

Previously, Russ interviewed Austin Frakt on the study.

The Null Hypothesis in Health Insurance

is that, in the United States, better health insurance produces no difference in health outcomes. Recently, for example, Katherine Baicker et al. found

This randomized, controlled study [in Oregon] showed that Medicaid coverage generated no significant improvements in measured physical health outcomes in the first 2 years, but it did increase use of health care services, raise rates of diabetes detection and management, lower rates of depression, and reduce financial strain.

Pointer from, well, everyone. All I can say is that this is really separating what David Brooks calls the “detached” from the “engaged.” The latter are making an all-out effort at what I call trying to close minds on your own side.

Somewhat detached commentary includes Tyler Cowen, Ray Fisman, and Reihan Salam.

Robin Hanson has an even stronger version of the null hypothesis. His version says that differences in health care spending produce no difference in health care outcomes. He and I disagree about how to characterize this result. Let me try to explain how we differ. Let us stipulate that:

1. Some medical procedures improve health, but not in a way that shows up in statistics. For example, if you get your broken arm fixed, you are much better off than not getting it fixed, but this will probably not show up in measured statistics of health outcomes, including longevity.

2. Some medical procedures are a waste (futile care, unwanted care, treatments of non-existent ailments, treatments that do not work, and so on).

3. Some medical procedures have an adverse effect on health.

4. Some medical procedures improve health outcomes, but only with a low probability (e.g., precautionary screening).

5. Some medical procedures definitely improve health outcomes in a measurable way.

Note also that most studies of medical spending are not controlled experiments. In observational studies, including cross-country comparisons, the results tend to be dominated by a sixth factor, namely that health outcomes are determined much more by individual genes and behavior than by medical intervention.

Robin and I agree that (5) is true. The question becomes, how does (5) wash out in the statistics on differences in spending? His view is that there has to be enough (3) to offset the (5). My view is that it is mostly that (1), (2), and (4) serve to dilute (5). If I am correct, then researchers should find some quantitative differences in health outcomes, but these differences will not be statistically significant. Out of (bad) habit, they will report this as “no difference in outcomes.” This makes it sound as if they have proven the null hypothesis, when they have merely failed to reject it.
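The distinction matters because a diluted effect mostly produces underpowered tests. A simulation with a small but real true benefit (all parameters invented for illustration) shows that most studies of this design would report “no significant difference”:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, true_effect = 1000, 4000, 0.05  # small real benefit, noisy outcome

rejections = 0
for _ in range(reps):
    control = rng.normal(size=n)
    treated = rng.normal(size=n) + true_effect
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    rejections += abs(diff / se) > 1.96

power = rejections / reps
print(power)  # well under 0.5: most such studies fail to reject the null
```

In every replication the true effect is positive, yet the typical study concludes “no difference” because it merely failed to reject zero.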

Of course, in a large study (as this was), there may not be much difference between failing to reject the null and proving it. The confidence interval around zero could be small (if someone has access to the paper, you can let me know).