Annual Physicals vs. Evidence

Ezekiel Emanuel writes,

Those who preach the gospel of the routine physical have to produce the data to show why these physician visits are beneficial. If they cannot, join me and make a new resolution: My medical routine won’t include an annual exam.

He cites controlled experiments showing that the Null Hypothesis is true for the routine physical exam.

Not surprising, really. Ask Robin Hanson.

Pointer from Jason Collins.

Babies and Marriage: One Pattern, Two Explanations

The WSJ reports

For every 1,000 unmarried U.S. women ages 15 to 44 in 2013, there were 44.3 births, down 2% from 2012 and 7% from 2010, CDC data show.

In contrast to unmarried women, birth rates for married women increased 1% in 2013 from 2012 to 86.9 births. In fact, they’re up 3% since 2010, after declining 5% between 2007 and 2010. (The absolute number of births among married women in 2013, 2.34 million, remained slightly below 2010’s 2.37 million.)

That piece, and this one, view this as a change in behavior, as if a constant group of married women decided to have more children, and a constant group of unmarried women decided to have fewer.

However, there is another possible explanation. Suppose that the two constant groups are “planners,” meaning women who only have children once they are married, and “non-planners,” meaning women who are willing to have children while unmarried. Also, suppose that among planners the rate of child-bearing is highest in years 3 through 10 of marriage. What happens if the marriage rate declines among planners because of a weak economy? Because more planners are unmarried and will not have children, the birth rate among unmarried women will fall. Because new marriages (where couples are not yet ready to have children) make up a smaller share of all marriages, the birth rate among married women will tick up a bit. No fundamental change in behavior, just a decline in marriage rates among planners due to the recession.
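To make the composition story concrete, here is a toy calculation in Python. Every share and birth rate below is invented for illustration; nothing is taken from the CDC data.

def observed_rates(planners_married_share, newly_married_share):
    # Stylized population: 1,000 "planners" and 1,000 "non-planners."
    planners, non_planners = 1000, 1000
    planner_rate = 0.15         # births per established-married planner
    non_planner_rate = 0.06     # births per non-planner, married or not
    non_planners_married = 300  # non-planner marital mix held fixed

    married_planners = planners * planners_married_share
    established = married_planners * (1 - newly_married_share)
    unmarried_planners = planners - married_planners

    married = married_planners + non_planners_married
    unmarried = unmarried_planners + (non_planners - non_planners_married)

    # Only established-married planners and non-planners bear children.
    married_births = (established * planner_rate
                      + non_planners_married * non_planner_rate)
    unmarried_births = (non_planners - non_planners_married) * non_planner_rate

    # Births per 1,000 married and unmarried women, respectively.
    return 1000 * married_births / married, 1000 * unmarried_births / unmarried

print(observed_rates(0.80, 0.30))  # pre-recession: roughly (92.7, 46.7)
print(observed_rates(0.75, 0.20))  # recession:     roughly (102.9, 44.2)

With no change in anyone's child-bearing behavior, the measured birth rate rises for married women and falls for unmarried women, purely because the mix of women in each category shifts.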

I am not claiming that this is the explanation. But I need to see better quantitative analysis to rule it out.

Something’s Rotten in Happiness Research

From a book review:

Those sky-high happiness surveys, it turns out, are mostly bunk. Asking people “Are you happy?” means different things in different cultures. In Japan, for instance, answering “yes” seems like boasting, Booth points out. Whereas in Denmark, it’s considered “shameful to be unhappy,” newspaper editor Anne Knudsen says in the book.

When you ask me to report my happiness, what do I report?

1. How I feel compared to one minute ago.
2. How I feel compared to yesterday.
3. How I have been feeling on average this week compared with how I remember feeling some time in the past.
4. How I feel about my life as a whole compared to other people’s lives.
5. How I think other people think I am feeling.
6. How I think other people expect me to feel.

The one thing I know about my happiness is that it is reduced when people produce charts that are derived from data that lacks reliability. It is hard to get less reliable than a survey that asks a question that does not have a precise interpretation.

Pete Boettke on Ideology and Economics

He writes,

Market fundamentalism is far from the mainstream of economic thought. The mainstream folks consider their work non-ideological and merely technical because they all share the same tacit presuppositions of political economy. It would be healthy if they looked through a different window, and spent some time reading those Nobel economists I mentioned above, or the Nobel worthy economists I mentioned as well.

Read the whole thing. I had a hard time choosing an excerpt. It also could use more fleshing out, in my view.

What Boettke is wrestling with is an asymmetry between mainstream economists and those of us with a free-market bent.

Here is how I would describe the asymmetry. I think that the free-market types understand the main arguments of mainstream economists, but I think that mainstream economists only seem to deal with a straw-man version of free-market economics. Keep in mind, however, the Law of Asymmetric Insight: when two people disagree, each one tends to think that he understands his opponent better than the opponent understands himself.

I think that we on the free-market side understand behavioral economics. We understand asymmetric information. We understand market failure. Thus, we differ from the straw-man version of us that mainstream economists dismiss.

On the other hand, mainstream economists appear to me not to appreciate the two most important arguments that we have. One is the socialist calculation argument. My sense is that mainstream economists either do not believe that the socialist calculation problem is real, or they believe that it only applies to socialist dictatorships. In fact, any government program to spend, tax, or regulate will encounter the socialist calculation problem. That is, government planners face a fundamental information problem themselves. Knowledge is dispersed. What planners do not know is important, and indeed it can be more important than what they claim to know about market failure.

The second argument is the public choice argument. This is often over-simplified as “government officials act based on self-interest.” The deeper issue, which Boettke mentions in his post, is that markets and government should be looked at in parallel as institutions. The market process has certain strengths and weaknesses. Government has other strengths and weaknesses. The mainstream approach simply assumes away all weaknesses of the political process. Once an economist identifies a market failure and a policy to treat it, the next step is to play fantasy despot and recommend the policy.

Finally, I have to say that this is not mere abstract philosophy. The socialist calculation problem is real. It affects financial regulators, who in the period leading up to the financial crisis used crude “risk buckets” to alter the incentives of banks. That approach was woefully information-poor, and it created huge incentives for banks to do exactly what they did with risky mortgages. See Not What They Had in Mind. The socialist calculation problem affects every agency of the government, from the FCC to the FDA to the panel of experts that is supposed to determine which medical procedures to allow.

The institutional weaknesses of government are real. Read Peter Schuck’s book. You can get the flavor of it from his talk and my comments.

My Review of Colander and Kupers

I write,

the authors seek to dethrone neoclassical economics. In terms of a metaphor that Colander articulated at a conference, neoclassical economics represents a high mountain peak in terms of insights into social phenomena. However, there is a higher peak to be found, and to reach that summit economists must first climb down from neoclassical economics and scale the peak of complexity economics.

My review attacks the authors for “their failure to stick to a single concept of government.”

Teaching is Not About Teaching

Eric Loken and Andrew Gelman wrote,

Being empirical about teaching is hard. Lack of incentives aside, we feel like we move from case study to case study as college instructors and that our teaching is a multifaceted craft difficult to decompose into discrete malleable elements.

More recommended excerpts here. Pointer from Jason Collins.

They refer to statistical quality control. Deming would describe what educators do as “tampering.” By that, he means making changes without evaluating the effect of those changes.
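Here is a minimal simulation of what tampering does to a stable process, loosely patterned on Deming's funnel experiment; the target and noise level are invented. Reacting to every noisy outcome as if it were signal makes outcomes more variable, not less.

import random

random.seed(0)
target, sigma, n = 0.0, 1.0, 10_000

# Hands-off: outcomes are just noise around a fixed setting.
hands_off = [target + random.gauss(0, sigma) for _ in range(n)]

# Tampering: after each outcome, shift the setting to "correct" the miss.
setting, tampered = target, []
for _ in range(n):
    outcome = setting + random.gauss(0, sigma)
    tampered.append(outcome)
    setting -= outcome - target  # treats noise as if it were signal

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Tampering roughly doubles the variance of the hands-off process.
print(variance(hands_off), variance(tampered))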

I think that there are two obstacles to using statistical techniques to improve teaching. One obstacle is causal density. It is not easy to run a controlled experiment, because there are so many factors that are difficult to hold constant.

But the more important obstacle may be the Null Hypothesis, which is that you are likely to find very discouraging evidence. Sometimes, I think that what the various consumers of teaching (administrators, parents, students) want is not so much evidence that your teaching methods work. What they want is a sense that you are trying. Teaching is not about teaching. It is about seeming to care about teaching.

Of course, if student motivation matters, and if students are motivated by believing that you care, then seeming to care can be an effective teaching method. I recall reading a story a few years ago about Indian children attempting distance learning: a computer guided the substance of their learning, supplemented by elderly women who acted as surrogate grandmothers, knowing nothing about the subject matter but giving the students a sense that someone cared about their learning.

Who is an Influential Economist?

Tyler Cowen writes,

Let me just note that for all the talk of wonk this, data that, and Generalized Method of Moments this that and the other, every now and then the best algorithm is simply Asking Tyler Cowen.

I certainly disagree with quantitative rankings that are based on mentions in social media, a methodology that picks up controversy and obsession with Fed officials.

Let me define influence as “effect on young minds.” I think that Paul Samuelson still has the most influence. Most economics textbooks are descendants of his. Milton Friedman has great influence. Most free-market rhetoric is derivative of his.

Living economists?

Steve Levitt. Not my cup of tea, but I have encountered a number of young women who are ardent admirers, which is something I cannot say about any other economist.

Daniel Kahneman. I know many economists and non-economists who have read Thinking, Fast and Slow. Not just bought it because it was famous and stopped reading after a few pages, but got through the whole book.

Paul Krugman, for better or worse. If you look in the blogosphere and op-edsphere at the ratio of uncharitable to charitable treatment of those who disagree, then you have a measure of the ratio of his influence relative to mine.

Stan Fischer, for better or worse. The Genghis Khan of macroeconomics, as I put it.

Tyler Cowen. Where would the economics blogosphere be without him?

More Never-Married Women than Men?

Pew’s George Gao writes,

the share of American adults who have never been married is at an historic high. In 2012, one-in-five adults ages 25 and older had never been married. Men are more likely than women to have never been married. And this gender gap has widened since 1960.

This puzzles me a bit. Suppose you have a population with 100 men and 100 women, and that all marriages are heterosexual. The data say that 23 men have never been married and 17 women have never been married. How can that happen? At any given time, the same number of men and women must be married. The only way I can make the arithmetic work is to assume that some of the now-married women are married to men who are on at least their second marriage (each such marriage removes a woman from the never-married total without removing any additional man), and that some formerly-married women are now single, and hence are not counted in the never-married total. Apparently, men have an easier time re-marrying.
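A back-of-the-envelope accounting check, using the stylized 100-men, 100-women population above (the counts are mine, not Pew's):

men, women = 100, 100
never_married_men, never_married_women = 23, 17

ever_married_men = men - never_married_men        # 77 distinct men
ever_married_women = women - never_married_women  # 83 distinct women

# Every woman's first marriage needs a male partner, but the same man can
# supply more than one marriage. At least this many women's first marriages
# must have been re-marriages for the man:
print(ever_married_women - ever_married_men)  # 6

The flow counts always balance: at any moment, currently-married men equal currently-married women. The gap shows up only in the stocks of ever-married people, which is exactly where re-marrying men and formerly-married single women can create it.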

More interesting data points on a variety of topics at the link.

Another puzzling one:

Brazil and Mexico, which now have a younger population than the U.S., will potentially have an older one than the U.S. by the middle of this century.

I guess that this is a result of our baby boom generation? As we Boomers die off, the average age of our population will rise more slowly than that of countries that did not have such large baby booms. Is that the story?

What I’m Reading

Vintage Bill James.

Given an option to do so, all men prefer to reject information. We start out in life bombarded by a confusing, unfathomable deluge of signals, and we continue until our deaths to huddle under that deluge, never learning to make sense of more than a tiny fraction of it. We get in an elevator and we punch a button and the elevator starts making a noise, and we have no idea in the world of why it makes that noise or how it lifts us up into the air, and so we learn in time to pay no attention to it.

As we prefer to reject any information that complicates our understanding of the world, we especially prefer to reject information about things that happen outside of our own view. If you simply decide that [data that you lack the energy to process] are meaningless, then you don’t have to worry about trying to figure out what they mean. The world is that much simpler.

Bill James is, of course, a famous baseball quant. He was not really the first; I would give that honor to Earnshaw Cook. But James was a dogged empiricist, always questioning and refining his own methods. Instead of manipulating data to support his opinion, he manipulated data in order to arrive at reliable answers. In that respect, I think he sets a great example for economists, one that too few emulate.

But the reason I am reading vintage James is that the man could write. There are now many baseball quants, and some of them may have even more baseball-statistics knowledge than James, but they are not worth reading for pleasure.

The quoted passage is from the Bill James Baseball Abstract for 1985.

David Beckworth on Productivity Measurement

He writes,

Has productivity growth in consumption really been flat since the early 1970s? No meaningful gains at all? This does not pass the smell test, yet this is one of the best TFP measures. This suggest there are big measurement problems in consumption production. And I suspect they can be traced to the service sector. I suspect if these measurement problems were fixed there would be less support for secular stagnation (and maybe for the Great Stagnation view too).

Actually, he wrote that some time ago, and he then quoted himself.

Put it this way. We do not have reliable measures of real GDP. Tell me how to measure output in health care, education, financial services, etc.

We do not have reliable measures of labor input. Tell me how to measure human capital. Tell me how to distinguish labor used in production from labor used to build organizational capital.

Labor productivity is the ratio of these two unmeasurables. Labor productivity growth is the percent change in that ratio, which is approximately the growth rate of the one unmeasurable minus the growth rate of the other. Economist John Fernald runs statistical algorithms on the moving average of this percent change in order to arrive at “breaks” in productivity trends. He makes the point (like Beckworth, I attended this conference) that measurement error ought to behave smoothly, so that the broken trends that he fits to the data should be indicating real change.

But at that same conference, Steve Oliner showed that measured productivity in the computer industry declines because the government statisticians were using an approach to tracking prices that may have been accurate in 2001 but greatly under-estimated price declines (and hence under-estimated productivity) by the end of that decade. Steve argued for humility in any claim to measure or forecast productivity trends, and I think that is an important take-away from the conference.
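To illustrate the deflator point with invented numbers (this is my stylized arithmetic, not Oliner's estimates): real output growth is nominal output growth minus the measured price change, and labor productivity growth is real output growth minus labor input growth.

nominal_output_growth = 0.02   # nominal industry output growth
official_price_change = -0.05  # official deflator: prices fall 5% per year
true_price_change = -0.15      # quality-adjusted prices fall 15% per year
labor_input_growth = 0.01      # measured labor input growth

measured = (nominal_output_growth - official_price_change) - labor_input_growth
corrected = (nominal_output_growth - true_price_change) - labor_input_growth

# Roughly 0.06 versus 0.16: a deflator that understates price declines
# understates productivity growth by the full ten-point gap.
print(measured, corrected)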