Babies and Marriage: One Pattern, Two Explanations

The WSJ reports

For every 1,000 unmarried U.S. women ages 15 to 44 in 2013, there were 44.3 births, down 2% from 2012 and 7% from 2010, CDC data show.

In contrast to unmarried women, birth rates for married women increased 1% in 2013 from 2012 to 86.9 births. In fact, they’re up 3% since 2010, after declining 5% between 2007 and 2010. (The absolute number of births among married women in 2013, 2.34 million, remained slightly below 2010’s 2.37 million.)

That piece, and this one, view this as a change in behavior, as if a constant group of married women decided to have more children, and a constant group of unmarried women decided to have fewer.

However, there is another possible explanation. Suppose that there are two constant groups: “planners,” women who have children only once they are married, and “non-planners,” women who are willing to have children while unmarried. Also, suppose that among planners the rate of child-bearing is highest between years 3 and 10 of marriage. What happens if the marriage rate declines among planners because of a weak economy? The pool of unmarried women now contains more planners, who do not have children while unmarried, so the unmarried birth rate falls. Meanwhile, the proportion of new marriages (where couples are not yet ready to have children) drops, so the married birth rate ticks up. No fundamental change in behavior, just a decline in marriage rates among planners due to the recession.
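A toy calculation makes the composition effect concrete. All of the numbers below are invented for illustration; the point is only that both measured rates can move in opposite directions while no individual woman changes her behavior.

```python
# Hypothetical illustration of the composition story. Planners bear children
# only when married, mostly in years 3-10 of marriage; non-planners have a
# constant fertility rate while unmarried. Every number here is invented.

def birth_rates(new_marriages):
    """Return (unmarried rate, married rate) in births per 1,000 women."""
    # Non-planners: all unmarried in this sketch, constant fertility.
    nonplanners = 300          # women
    nonplanner_births = 0.060  # births per woman per year

    # Planners: fertile mainly in years 3-10 of marriage.
    established = 400          # planners in years 3-10 of marriage
    established_births = 0.110
    newlywed_births = 0.030    # years 0-2: couples not yet ready
    total_planners = 700
    unmarried_planners = total_planners - established - new_marriages

    unmarried = nonplanners + unmarried_planners
    unmarried_rate = 1000 * (nonplanners * nonplanner_births) / unmarried

    married = established + new_marriages
    married_rate = 1000 * (established * established_births
                           + new_marriages * newlywed_births) / married
    return round(unmarried_rate, 1), round(married_rate, 1)

print(birth_rates(new_marriages=200))  # strong economy: planners marry
print(birth_rates(new_marriages=100))  # weak economy: planners delay marriage
```

When planners delay marriage, the unmarried pool is diluted with zero-fertility planners (its rate falls) and the married pool tilts toward established, higher-fertility marriages (its rate rises).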

I am not claiming that this is the explanation. But I need to see better quantitative analysis to rule it out.

Something’s Rotten in Happiness Research

From a book review:

Those sky-high happiness surveys, it turns out, are mostly bunk. Asking people “Are you happy?” means different things in different cultures. In Japan, for instance, answering “yes” seems like boasting, Booth points out. Whereas in Denmark, it’s considered “shameful to be unhappy,” newspaper editor Anne Knudsen says in the book.

When you ask me to report my happiness, what do I report?

1. How I feel compared to one minute ago.
2. How I feel compared to yesterday.
3. How I have been feeling on average this week compared with how I remember feeling some time in the past.
4. How I feel about my life as a whole compared to other people’s lives.
5. How I think other people think I am feeling.
6. How I think other people expect me to feel.

The one thing I know about my happiness is that it is reduced when people produce charts that are derived from data that lacks reliability. It is hard to get less reliable than a survey that asks a question that does not have a precise interpretation.

Teaching is Not About Teaching

Eric Loken and Andrew Gelman wrote,

Being empirical about teaching is hard. Lack of incentives aside, we feel like we move from case study to case study as college instructors and that our teaching is a multifaceted craft difficult to decompose into discrete malleable elements.

More recommended excerpts here. Pointer from Jason Collins.

They refer to statistical quality control. Deming would describe what educators do as “tampering.” By that, he means making changes without evaluating the effect of those changes.

I think that there are two obstacles to using statistical techniques to improve teaching. One obstacle is causal density. It is not easy to run a controlled experiment, because there are so many factors that are difficult to hold constant.

But the more important obstacle may be the Null Hypothesis, which is that you are likely to find very discouraging evidence. Sometimes, I think that what the various consumers of teaching (administrators, parents, students) want is not so much evidence that your teaching methods work. What they want is a sense that you are trying. Teaching is not about teaching. It is about seeming to care about teaching.

Of course, if student motivation matters, and if students are motivated by believing that you care, then seeming to care can be an effective teaching method. I recall a few years ago reading a story of Indian children attempting distance learning, with the computer guiding the substance of their learning supplemented by elderly women acting as surrogate grandmothers, knowing nothing about the subject matter but giving students a sense that someone cared about their learning.

More Never-Married Women than Men?

Pew’s George Gao writes,

the share of American adults who have never been married is at an historic high. In 2012, one-in-five adults ages 25 and older had never been married. Men are more likely than women to have never been married. And this gender gap has widened since 1960.

This puzzles me a bit. Suppose you have a population with 100 men and 100 women, and that all marriages are heterosexual. The data say that 23 men have never been married and 17 women have never been married. How can that happen? At any given time, the same number of men and women must be married. The only way I can make the arithmetic work is to assume that some of the now-married women are married to men who are on their second marriage (those men left the never-married total long ago, so these marriages remove women, but no additional men, from it), and that some of the formerly-married women are now single, and hence are also excluded from the never-married total. Apparently, men have an easier time re-marrying.
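The accounting can be checked with a toy census. The 23 and 17 come from the example in the text; the count of current marriages is an assumption, and the gap of interest turns out not to depend on it.

```python
# Toy accounting for the 100-men, 100-women example. The never-married
# counts (23 and 17) are from the text; `currently_married` is assumed.
men = women = 100
never_married_men, never_married_women = 23, 17
currently_married = 70  # identical for both sexes: marriages come in pairs

ever_married_men = men - never_married_men        # 77
ever_married_women = women - never_married_women  # 83

# Ever-married people who are not married right now (divorced or widowed):
single_again_men = ever_married_men - currently_married
single_again_women = ever_married_women - currently_married

print(single_again_men, single_again_women)
```

Whatever value `currently_married` takes, there are six more formerly-married single women than formerly-married single men: men who divorce or are widowed re-enter marriage (often with never-married women) more readily.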

More interesting data points on a variety of topics at the link.

Another puzzling one:

Brazil and Mexico, which now have a younger population than the U.S., will potentially have an older one than the U.S. by the middle of this century.

I guess that this is a result of our baby boom generation? As we Boomers die off, the age of our population will grow more slowly than that of countries that did not have such large baby booms. Is that the story?

What I’m Reading

Vintage Bill James.

Given an option to do so, all men prefer to reject information. We start out in life bombarded by a confusing, unfathomable deluge of signals, and we continue until our deaths to huddle under that deluge, never learning to make sense of more than a tiny fraction of it. We get in an elevator and we punch a button and the elevator starts making a noise, and we have no idea in the world of why it makes that noise or how it lifts us up into the air, and so we learn in time to pay no attention to it.

As we prefer to reject any information that complicates our understanding of the world, we especially prefer to reject information about things that happen outside of our own view. If you simply decide that [data that you lack the energy to process] are meaningless, then you don’t have to worry about trying to figure out what they mean. The world is that much simpler.

Bill James is, of course, a famous baseball quant. He was not really the first; I would give that honor to Earnshaw Cook. But James was a dogged empiricist, always questioning and refining his own methods. Instead of manipulating data to support his opinion, he manipulated data in order to arrive at reliable answers. In that respect, I think he sets a great example for economists, which too few emulate.

But the reason I am reading vintage James is because the man could write. There are now many baseball quants, and some of them may have even more baseball-statistics knowledge than James, but they are not worth reading for pleasure.

The quoted passage is from the Bill James Baseball Abstract for 1985.

David Beckworth on Productivity Measurement

He writes,

Has productivity growth in consumption really been flat since the early 1970s? No meaningful gains at all? This does not pass the smell test, yet this is one of the best TFP measures. This suggest there are big measurement problems in consumption production. And I suspect they can be traced to the service sector. I suspect if these measurement problems were fixed there would be less support for secular stagnation (and maybe for the Great Stagnation view too).

Actually, he wrote that some time ago, and he then quoted himself.

Put it this way. We do not have reliable measures of real GDP. Tell me how to measure output in health care, education, financial services, etc.

We do not have reliable measures of labor input. Tell me how to measure human capital. Tell me how to distinguish labor used in production from labor used to build organizational capital.

Labor productivity is the ratio of these two unmeasurables. Labor productivity growth is the percent change in that ratio. Economist John Fernald runs statistical algorithms on a moving average of this percent change in order to identify “breaks” in productivity trends. He makes the point (like Beckworth, I attended this conference) that measurement error ought to behave smoothly, so that the broken trends he fits to the data should indicate real change. But at that same conference, Steve Oliner showed that a measure of productivity in the computer industry shows a decline because government statisticians were using an approach to tracking prices that may have been accurate in 2001 but greatly under-estimated price declines (and hence under-estimated productivity) by the end of that decade. Steve argued for humility in any claim to measure or forecast productivity trends, and I think that is an important take-away from the conference.
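A back-of-the-envelope sketch of Oliner's point, with invented growth rates: measured labor-productivity growth is (nominal output growth minus deflator growth) minus labor-input growth, so any price declines the deflator misses are subtracted straight out of measured productivity.

```python
# Illustrative only: all growth rates below are invented.

def productivity_growth(nominal_output, deflator, labor):
    """Labor-productivity growth: real output growth minus labor-input growth.
    All arguments are annual growth rates (0.02 means 2% per year)."""
    real_output = nominal_output - deflator  # deflate nominal output growth
    return real_output - labor

# Suppose quality-adjusted computer prices truly fall 10% a year, but the
# statistical agency's pricing method records only a 4% decline.
true_growth = productivity_growth(nominal_output=0.02, deflator=-0.10, labor=0.01)
measured = productivity_growth(nominal_output=0.02, deflator=-0.04, labor=0.01)

# The 6-point deflator error shows up one-for-one as "lost" productivity.
print(round(true_growth, 3), round(measured, 3))
```

The arithmetic is trivial, which is the point: the entire gap between true and measured productivity here is a price-measurement artifact, not a change in how anything is produced.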

Macroeconomics is Infinitely Confirmable

John Cochrane writes,

Keynesians, and Krugman especially, said the sequester would cause a new recession and even air traffic control snafus. Instead, the sequester, though sharply reducing government spending, along with the end of 99 week unemployment insurance, coincided with increased growth and a big surprise decline in unemployment.

Sometimes, I think that there are macroeconomists (Krugman is not the only one) for whom there is no path of economic variables that could ever contradict their point of view. They remind me of the climate scientists who tell us that Buffalo’s Snowvember came from global warming.

Macroeconomics is infinitely confirmable because of its high causal density and lack of controlled experiments. The macroeconomist has enough interpretative degrees of freedom to twist any pattern of economic activity to fit his or her priors.

In theory, you could ask macroeconomists to place bets on their predictions. However, that, too, would run afoul of causal density. If you make unconditional predictions, then an oil shock or other event could make you right or wrong more or less by accident. And the conditional forecasting space gets very complicated very quickly.

Undebunkable

Jesse Rothstein writes,

Like all quasi-experiments, this one relies on an assumption that the treatment – here, teacher switching – is as good as random. I find that it is not: Teacher switching is correlated with changes in students’ prior-year scores

Pointer from Tyler Cowen.

Thus, Rothstein hopes to debunk a famous paper by Raj Chetty and others which claimed to show that great teachers add a lot of value. My guess is that Rothstein will fail. It reminds me of when Bill Wascher and others debunked the Krueger-Card paper claiming to show that higher minimum wage laws do not reduce employment. Once a result is put into the literature by a high-status economist and the result supports progressive policy preferences, it becomes undebunkable.

And you’re right, I’m not being charitable to those who disagree.

On Science and Policy

Pascal-Emmanuel Gobry writes,

Because people don’t understand that science is built on experimentation, they don’t understand that studies in fields like psychology almost never prove anything, since only replicated experiment proves something and, humans being a very diverse lot, it is very hard to replicate any psychological experiment. This is how you get articles with headlines saying “Study Proves X” one day and “Study Proves the Opposite of X” the next day, each illustrated with stock photography of someone in a lab coat. That gets a lot of people to think that “science” isn’t all that it’s cracked up to be, since so many studies seem to contradict each other.

This is how you get people asserting that “science” commands this or that public policy decision, even though with very few exceptions, almost none of the policy options we as a polity have have been tested through experiment (or can be).

I agree with this. I think it applies to macroeconomics and also to climate “science.”

Note that the origins of the progressive movement were based on the exact opposite view, which is that public policy could and should be based on something called social science.

Read the whole thing, so that you can reach these sentences:

the reason it took us so long to invent it and the reason we still haven’t quite understood what it is 500 years later is it is very hard to be scientific. Not because science is “expensive” but because it requires a fundamental epistemic humility, and humility is the hardest thing to wring out of the bombastic animals we are.

What We Know About Health Care Waste Isn’t True?

Louise Sheiner writes,

geographic variation in health spending does not provide a useful way to examine the inefficiencies of our health system. States where Medicare spending is high are very different in multiple dimensions from states where Medicare spending is low, and thus it is difficult to isolate the effects of differences in health spending intensity from the effects of the differences in the underlying state characteristics. I show, for example, that previous findings about the relationships between health spending, the share of physicians who are general practitioners, and quality, are likely the result of omitted factors rather than the result of causal relationships

Russ Roberts often asks whether any empirical work in economics changes one’s mind. I would say that the Dartmouth studies changed my mind about health care spending in the U.S., convincing me that much of it is “wasted” (I prefer “spent on procedures with high costs and low benefits”). However, there have always been those who doubted the validity of those studies, and this appears to be a particularly strong critique.

On the other hand, see Austin Frakt’s overview of the literature.