The decline of labor’s share

Germán Gutiérrez and Sophie Piton write,

non-housing labor shares have remained broadly stable since 1970 for all advanced economies but the US. This is our main result. . . housing explains all of the decline in European total economy labor shares. The US NFC labor share is largely unaffected by housing or self-employment, so it still exhibits a sharp decline particularly after 2000

Owen Zidar and others write,

Private business profit falls by three-quarters after owner retirement or premature death. Classifying three-quarters of private business profit as human capital income, we find that most top earners are working rich: they derive most of their income from human capital, not physical or financial capital. The human capital income of private business owners exceeds top wage income and top public equity income. Growth in private business profit is explained by both rising productivity and a rising share of value added accruing to owners.

Pointers from Tyler Cowen and David Henderson, respectively.

The attempt to divide all income between labor and capital is a fool’s errand. As I put it,

economists still inhabit the world of the 19th century, in which hordes of interchangeable workers in stark factories toil in the service of the owners of capital

Intangible factors matter more and more in today’s economy. You can choose to label the income that is derived from intangible factors “capital income,” in which case the “labor share” of income is declining. Or you can try to “correct” this by justifying labeling some of the intangible income as “labor” income. But what you really should be doing is abandoning the project of trying to view a modern economy through the lens of an aggregate production function f(K,L). It’s a really popular pastime, but it’s a crock.
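To make the labeling point concrete, here is a toy accounting exercise (all numbers invented):

```python
# A toy income accounting (all numbers invented): total income splits
# into wages, a conventional return on physical capital, and a residual
# earned by intangible factors.
wages = 60.0
physical_capital_return = 15.0
intangible_income = 25.0
total = wages + physical_capital_return + intangible_income

# Label the intangible residual "capital income" and the labor share
# looks low, and it falls as intangibles grow...
labor_share_as_capital = wages / total

# ...relabel it "labor income" (founders' human capital, say) and the
# decline disappears. Same economy, different bookkeeping.
labor_share_as_labor = (wages + intangible_income) / total

print(f"intangibles counted as capital: labor share = {labor_share_as_capital:.0%}")
print(f"intangibles counted as labor:   labor share = {labor_share_as_labor:.0%}")
```

Nothing in the data picks one bookkeeping convention over the other, which is the point.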

Road to Sociology watch

Dylan Matthews writes that tomorrow belongs to Raj Chetty.

Chetty has made his name as an empirical economist, working with a small army of colleagues and research assistants to try to get real-world findings with relevance to major political questions. And he’s focused on the roots and consequences of economic and racial inequality. He used huge amounts of IRS tax data to map inequality of opportunity in the US down to the neighborhood, and to show that black boys in particular enjoy less upward mobility than white boys.

Ec 1152 is an introduction to that kind of economics.

Pointer from Tyler Cowen.

I have been saying for quite a while that economics is on the road to sociology. I first made that case two years ago.

Economists will need to see economic decisions as embedded in cultural circumstances. In order to understand economic phenomena, we will have to pay attention to the role of beliefs and social norms.

. . .There is a very real possibility that over the next 20 years academic economics will congeal into a discipline, like sociology today, which is definitively shaped by an ideologically driven point of view.

I have mixed feelings about this new approach to economic education. The pluses, which were alluded to in my essay, include:

1. Recognition of the importance of cultural factors.

2. Getting away from thinking in terms of optimization problems.

3. In empirical work, recognizing the problems created by what Edward Leamer called specification searches.

The minuses include:

1. Traditional economics emphasizes that outcomes do not come from intentions. The supply-and-demand model is not one in which one individual or group controls the outcome. Students are taught to think in systemic terms, rather than personal terms. That is useful (a) because it provides valuable insights into the economy and (b) because it is good for people to practice thinking in abstract, systemic terms rather than only in concrete terms. I think that not giving students the systemic perspective is a loss.

2. The research can be, and often is, oriented toward filling in the oppressor-oppressed framework. That is the ideological trap that concerned me in my essay. We also need to be able to step outside of the oppressor-oppressed framework and examine it critically, and I fear that this examination will not take place.

3. The newer research methods are not without their own weaknesses. They are subject to replication failures and narrow applicability. Data can be of questionable validity. Interpretations of results can be misleading.

I think that economic education can arrive at something better than neoclassical economics. But the road to sociology may not be the way to get there.

The genes that did not matter

For predicting depression. The authors of this study report,

The implication of our study, therefore, is that previous positive main effect or interaction effect findings for these 18 candidate genes with respect to depression were false positives. Our results mirror those of well-powered investigations of candidate gene hypotheses for other complex traits, including those of schizophrenia and white matter microstructure.

Read Scott Alexander’s narrative about their findings.

As I understand it, a bunch of old studies looked at one gene at a time in moderate samples and found significant effects. This study looks at many genes at the same time in very large samples and finds that no one gene has significant effects.

The results are not reported in a way that I can clearly see what is happening, so the following is speculative:

1. It is possible that the prior reports of a significant association of a particular gene with greater incidence of depression are due to specification searches (trying out different “control” variables until you find a set that produces “significant” results).

2. It is possible that publication bias was at work: many attempts by other researchers to find “significant” results failed, and those efforts were never reported.

3. These authors use a different, larger data sample, and perhaps in that sample the incidence of depression could be measured with greater error than in the smaller samples used by previous investigators. Having a larger data sample increases your chance of finding “significant” results, but measurement error reduces your chances of finding “significant” results. The authors are aware of the measurement-error issue and they conduct an exercise intended to show that this could not be the main source of their failure to replicate other studies.

4. If I understand it correctly, previous studies each tended to focus on a small number of genes, perhaps just one. This study includes many genes at once. If my understanding is correct, then in this new study the authors are now controlling for many more factors.

Think of it this way. Suppose you do a study of cancer incidence, and you find that growing up in a poor neighborhood is associated with a higher cancer death rate. Then somebody comes along and does a study that includes all of the factors that could affect cancer incidence. This study finds that growing up in a poor neighborhood has no effect. A reason that this could happen is that once you control for, say, propensity to smoke, the neighborhood effect disappears.

In the case of depression, suppose that the true causal process is for 100 genes to influence depression together. A polygenic score explains, say, 20 percent of the variation in the incidence of depression across a population. Now you go back to an old study that just looks at one gene that happens to be relatively highly correlated with the polygenic score.

In finance, we say that a stock whose movements are highly correlated with those of the overall market is a high-beta stock. The fact that XYZ corporation’s share price is highly correlated with the S&P 500 does not mean that XYZ’s shares are what is causing the S&P to move. Similarly, a “high-beta” gene for depression would not signify causality, if instead a broad index of genes is what contributes to the underlying causal process.
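Here is a minimal simulation of that story (everything in it is invented for illustration: the 100 genes, the effect sizes, and a shared latent factor standing in for linkage). A “tag-along” gene with no direct effect looks significant on its own but washes out once the truly causal genes are included.

```python
# Sketch of point (4): an outcome driven jointly by many genes, plus a
# "high-beta" gene that is correlated with the polygenic background but
# has zero direct effect. (Illustrative numbers only.)
import numpy as np

rng = np.random.default_rng(0)
n_people, n_genes = 100_000, 100

# Correlated genotypes: a shared latent factor makes genes co-vary,
# the way linkage and population structure do in real data.
latent = rng.normal(size=(n_people, 1))
genes = 0.5 * latent + rng.normal(size=(n_people, n_genes))

# Depression liability: a small true effect from each of the 100 genes.
true_effects = np.full(n_genes, 0.05)
liability = genes @ true_effects + rng.normal(size=n_people)

# The tag-along gene: correlated with the same latent factor, zero
# direct effect on the outcome.
tag_gene = 0.5 * latent[:, 0] + rng.normal(size=n_people)

# One-gene regression: slope of liability on the tag gene alone.
b_solo = np.cov(tag_gene, liability)[0, 1] / np.var(tag_gene, ddof=1)

# Joint regression: the tag gene alongside the 100 causal genes.
X = np.column_stack([tag_gene, genes])
b_joint = np.linalg.lstsq(X, liability, rcond=None)[0][0]

print(f"tag gene, regressed alone:       {b_solo:.3f}")  # clearly nonzero
print(f"tag gene, with the causal genes: {b_joint:.3f}")  # roughly zero
```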

Further comments:

(1) and (2) are fairly standard explanations for a failure to replicate. But Alexander points out that in this case it is not just one or two studies that fail to replicate, but hundreds. That would make this a very, very sobering example.

If (3) is the explanation (i.e., more measurement error in the new study), then the older studies may have merit. It is the new study that is misleading.
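A quick simulation of the mechanism behind (3), again with invented numbers: noise in the measured outcome leaves the estimated effect roughly unbiased but erodes the t-statistic, so a real association can fail to reach “significance.”

```python
# Sketch of explanation (3): the same true gene effect, in the same-
# sized sample, stops looking "significant" as measurement noise in
# the outcome grows. (Illustrative numbers only.)
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
gene = rng.normal(size=n)
true_liability = 0.05 * gene + rng.normal(size=n)  # real, small effect

for noise_sd in (0.0, 2.0, 5.0):
    measured = true_liability + noise_sd * rng.normal(size=n)
    r = np.corrcoef(gene, measured)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))  # t-stat for the correlation
    print(f"outcome noise sd={noise_sd}: correlation {r:.4f}, t-statistic {t:.2f}")
```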

If (4) is the explanation, then the “true” model of genes and depression is closer to a polygenic model. The single-gene results reflect correlation with other genes that influence the incidence of depression rather than direct causal effects.

If (4) is correct, then the “new” approach to genetic research, using large samples and looking at many genes at once, should be able to yield better predictions of the incidence of depression than the “old” single-gene, small-sample approach. But neither approach will yield useful information for treatment. The old approach gets you correlation without causation. The new approach results in a causal model that is too complex to be useful for treatment, because too many genes are involved and no one gene suggests any target for intervention.

I thank Russ Roberts for a discussion last week over lunch, without implicating him in any errors in my analysis.

Russ Roberts on non-stagnation

He has a 7-minute video lesson and a companion essay.

What the snapshots show is that the rich today are richer than the rich of yesterday. If the rich people are the same people as yesterday, then one’s class determines one’s fate. But if they are not the same people, the snapshots tell you that the dispersion of income has increased. That may or may not bother you, but it doesn’t necessarily mean that there is a distinct group called “the rich” who are capturing all the gains while the rest of us tread water.

The mis-reading of snapshots is one of my pet peeves. A snapshot means looking at, say, the average income of someone in the 90th percentile in 1980 and comparing it with someone in the 90th percentile in 2010. The mis-reading of snapshots is to treat the two as if they were the same person.

If you follow actual people from 1980 to 2010, the average increase in income for people in the bottom 20 percent in 1980 is actually quite high. The thing is, many of those people no longer show up in the bottom 20 percent! Instead, the bottom 20 percent in 2010 is occupied by a new set of people, including young families, retired people no longer earning incomes, new immigrants, and people who have recently lost jobs. The snapshots can show stagnation at the bottom 20 percent, even though real people in the bottom 20 percent in 1980 did not stagnate.
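A toy panel makes the distinction concrete (all numbers invented): give everyone income growth, fastest at the bottom, then add new entrants at low incomes and compare the snapshot averages with the tracked households.

```python
# Snapshot vs. following actual people (a made-up panel, not real data).
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
income_1980 = rng.lognormal(mean=10.0, sigma=0.7, size=n)
bottom_1980 = income_1980 < np.quantile(income_1980, 0.2)

# Follow the same people: everyone grows, the 1980 bottom fastest.
income_2010_same_people = income_1980 * np.where(bottom_1980, 2.5, 1.5)

# The 2010 snapshot also contains new entrants at low incomes:
# young families, retirees, recent immigrants, the newly unemployed.
entrants = 0.5 * rng.lognormal(mean=10.0, sigma=0.7, size=2_500)
income_2010_snapshot = np.concatenate([income_2010_same_people, entrants])
bottom_2010 = income_2010_snapshot < np.quantile(income_2010_snapshot, 0.2)

print(f"snapshot bottom 20% average, 1980: {income_1980[bottom_1980].mean():,.0f}")
print(f"snapshot bottom 20% average, 2010: {income_2010_snapshot[bottom_2010].mean():,.0f}")
print(f"the 1980 bottom 20%, 30 years on:  {income_2010_same_people[bottom_1980].mean():,.0f}")
# The snapshot average rises only modestly; the tracked households'
# incomes rise 150 percent, because the quintile's membership turned over.
```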

I would like to see a high-profile debate on what the data show about trends in income distribution. Otherwise, I fear that those of us with a powerful case against the conventional wisdom will be ignored.

Polygenic score for obesity. . .and?

Coverage of a recent study.

The adults with the highest risk scores weighed on average 13 kilograms more than those with the lowest scores, and they were 25 times as likely to be severely obese, or more than 45 kilograms overweight. “What’s striking is not just the weight,” says Sekar Kathiresan, a cardiologist and geneticist at Massachusetts General Hospital in Boston and the Broad Institute in Cambridge, Massachusetts, who led the study. “If you have a high risk score for obesity, you’re at high risk for heart attack, stroke, diabetes, hypertension, heart failure, and blood clots in the legs.”

And what else? The polygenic score is a result of a statistical fishing expedition. We do not know whether the genes in the score govern physical characteristics, such as metabolism and food preferences, or whether they affect psychological traits, such as conscientiousness. I would be willing to bet that a lot of it is the latter.

If my intuition is correct, then the “obesity score” would predict a lot of other behavioral traits as well: propensity for getting into financial difficulty, grades in school, etc.

Question from a reader

He writes,

I have not been able to find a causal account as to why information failures (particularly with regards to quality) lead to market failures

In textbook economics, a market failure occurs when private incentives lead to either too little or too much of a good being produced.

In terms of information, consumers make purchases based on what they can observe. If what they observe is highly correlated with quality, they should do well. But not necessarily otherwise.

Consider a high school student making a college visit. The appearance of the facilities can be observed relatively accurately, but it is not very highly correlated with quality. The quality of classroom instruction cannot be observed so accurately, because the high school student will not sit in on very many classes. But suppose that the quality of classroom instruction is highly correlated with the value that the student gets out of college.

We can predict that colleges will over-spend on the appearance of facilities, because that factors heavily into the decision of the high school student. We can predict that colleges will under-spend on classroom instruction. Market failure.
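A toy version of this, with invented functional forms: students observe facilities fully but instruction only partially, so the enrollment-maximizing budget split diverges from the value-maximizing one.

```python
# Over-spending on what applicants can see (functional forms invented).
import numpy as np

budget = 100.0

def true_value(facilities):
    # Value to the student: mostly instruction, a little facilities.
    return 0.2 * np.sqrt(facilities) + 0.8 * np.sqrt(budget - facilities)

def perceived_value(facilities, visibility=0.25):
    # What a campus visit reveals: all of facilities, only a quarter
    # of instruction quality.
    return 0.2 * np.sqrt(facilities) + 0.8 * visibility * np.sqrt(budget - facilities)

splits = range(1, 100)  # dollars spent on facilities
best_for_students = max(splits, key=true_value)
best_for_enrollment = max(splits, key=perceived_value)

print(f"value-maximizing spending on facilities:      {best_for_students}")   # ~6
print(f"enrollment-maximizing spending on facilities: {best_for_enrollment}")  # 50
```

The gap between those two splits is the market failure.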

The public policy response should be to tax college facilities and/or subsidize quality classroom instruction.

I am not offering this as a realistic picture of a market failure in the market for higher education. My point is to answer the reader’s question about connecting information failure to textbook market failure.

Philosophy and economics

Diane Coyle writes,

Yesterday, an undergraduate emailed me to ask for book recommendations about the overlap between economics and philosophy. I recommended:

Amartya Sen The Idea of Justice
Michael Sandel What Money Can’t Buy: The Moral Limits of Markets
Agnar Sandmo Economics Evolving
and
D M Hausman and M S McPherson and D Satz Economic analysis, moral philosophy, and public policy
Then I asked Twitter, and here is the resulting, much longer, list. [snipped]

Pointer from Tyler Cowen.

I have not read any of these. I have read some on the longer list. Thinking of the most lively reads, and trying to include left, right, and center, I would recommend:

The Worldly Philosophers, by Robert Heilbroner.
Radicals for Capitalism, by Brian Doherty.
Capitalism and the Jews, by Jerry Muller.

If I were teaching an undergraduate course in philosophy and economics, I would include as articles:

Hayek’s “The Pretense of Knowledge”
McCloskey’s “Why I am no longer a Positivist”
Leamer’s “Let’s take the Con out of Econometrics”
my own “How Effective is Economic Theory?”

In my view, there are two issues at the center of the overlap between economics and philosophy.

1. What methods best serve economics? In particular, what are the pros and cons of treating economics as a science?

2. How do markets fit into the moral universe? What problems do they address? What problems do they cause?

The essays on my list deal primarily with the epistemological issue. The books on my list deal mostly with the moral issue.

Supposedly clever statistical analysis

Russ Roberts writes,

It would be tempting to say that this is just a working paper. Perhaps it will get no traction. But I doubt it. The Becker-Friedman Institute will spread it around — I only knew about the study because the Institute sent me an email. The media will be eager to repeat the finding because people have strong feelings about Uber and Lyft: “U of Chicago Study Finds Ridesharing Kills 1000 People Each Year.” Taxicab owners and their supporters will cite it.

The fact is that economists are almost always doing observational studies, not experiments. At the very least, economists should make more use of the Hill Criteria.

  • Strength (effect size): A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal.
  • Consistency (reproducibility): Consistent findings observed by different persons in different places with different samples strengthens the likelihood of an effect.
  • Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship.
  • Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay).
  • Biological [or economic] gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence.
  • Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge).
  • Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that “… lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations”.
  • Experiment: “Occasionally it is possible to appeal to experimental evidence”.
  • Analogy: The effect of similar factors may be considered.
  • Some authors also consider reversibility: if the cause is deleted, then the effect should disappear as well.

Many of Russ’ criticisms of the paper can be mapped back to some of these criteria.

Question from a commenter: carbon tax

A couple of weeks ago, he wrote,

I have a question on the carbon tax issue.

Assume for the moment that the tax imposed accurately reflected the social cost. By the standard theory of Pigouvian taxes, do we actually care whether emissions go down? As long as everyone incorporates the social costs into their decisions, we’ve internalized the externality, yes?

If demand isn’t very elastic in the range of the tax, then no reduction in emissions (or too low to measure) is the “correct” result, right?

1. I suspect that the main reason a carbon tax tends to have no effect in practice is that it takes a lot of political will and bureaucratic effort to make it really bite. It’s difficult to avoid grandfathering and other concessions.

2. If you drink the climate-change Kool-Aid, then the social cost of carbon emissions is ginormous. If you also believe that demand is inelastic, then you either have to implement quantity rationing (there is a classic paper by Martin Weitzman on “prices vs. quantities” that would justify this) or go for a very high tax.

3. If you believe that demand is inelastic for a very wide range of price+tax, then it suggests that the social cost of reducing carbon emissions is ginormous. Then the policy issue becomes a kind of irresistible force vs. immovable object situation.
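To put numbers on the commenter’s point, here is a constant-elasticity demand sketch (parameters invented): the same Pigouvian tax barely moves quantity when demand is inelastic, and that is the “correct” outcome, not evidence that the tax failed.

```python
# Pigouvian tax under constant-elasticity demand, Q = scale * P^(-e).
# All parameters are invented for illustration.
price = 100.0  # pre-tax price of a unit of carbon-intensive energy
tax = 40.0     # tax assumed equal to the social cost of the emissions

def quantity(p, elasticity, scale=1_000.0):
    return scale * p ** (-elasticity)

for elasticity in (0.1, 0.5, 1.5):
    q_before = quantity(price, elasticity)
    q_after = quantity(price + tax, elasticity)
    print(f"elasticity {elasticity}: emissions fall {1 - q_after / q_before:.1%}")
# elasticity 0.1: ~3%  -- inelastic demand, almost no measurable drop
# elasticity 1.5: ~40% -- elastic demand, the tax visibly bites
```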

A sex survey: what’s not to love?

The story is behind a WaPo paywall.

1. It is not a story about sexual frequency. It is about the incidence of people who have not had sex with a partner in the past year. Call these folks abstainers. Sorry, Tyler, but I disagree with Christopher Ingraham that it is amazing that there are more abstainers in the 18-30 age bracket than among fifty-somethings. My guess is that the proportion of married people is quite a bit higher among fifty-somethings, and if you’re looking for abstainers, you are more likely to find them among people who are not married. To be blunt, the survey does not say that older folks are having more sex. It just says that fewer of them are abstaining for a year.

2. Robin Hanson also could not resist commenting.

it won’t at all do to point to effects that are constant in time, such as people not always telling the truth in polls, or men having lower standards for sex partners. It also won’t do to point to changes over this time period that effected [sic] all ages and genders similarly, such as obesity, porn, video games, social media, dating apps, and wariness re harassment claims. They might be part of an answer, but can’t explain all by themselves. To explain an unusual burst over the last decade, it is also problematic to point to factors (e.g., computing power) that changed over the last decade, but changed just as much over prior decades.

3. Here’s a way to simplify the data in one of the graphs on Robin’s post, which looks at people in the 18-30 age bracket. Suppose we had 100 heterosexual men and 100 heterosexual women. Ten years ago, there were 10 abstainers of each gender. Among the more recent cohort, there are 28 male abstainers and 18 female abstainers.

4. Here’s a way to think about this. Ten years ago, there were 10 female abstainers, each with a “partner” who abstained also. In the more recent cohort, the number of abstainer “partnerships” increased by 8. Some of that could be a decrease in the marriage rate, but how much could the marriage rate have fallen in the last decade?

5. Another interesting development is that there are now 10 male abstainers who don’t have a “partner.” To put it another way, there are now 82 women who did not abstain and only 72 men who did not abstain. (Of course, ten years ago, there were 90 non-abstainers of each gender, so definitely don’t think of this as women getting friskier.) Who did these extra ten women find? Older men? Men who had already non-abstained with someone else?

6. Robin writes,

it seems that. . .the latest age cohort has switched to a new sex culture wherein the less desirable half of young men are now seen as even less desirable by young women than previous cohorts would have seen them. And within this culture it is seen as more acceptable for young women to share the more desirable half of young men

I agree that this is likely the basic story, but I would not overstate it. It could be that we should be talking about the less desirable quarter of the male population. And the number of women who are ok with sharing desirable men may still be very small. My arithmetic exercise suggests that the proportion of women who are sharing (in the sense that they have a partner who in the past year has had additional partners) is 10 percent, and a lot of that may not be sharing by choice.
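For the record, here is the arithmetic from points 3 through 6 spelled out (the 100-and-100 population and the abstainer counts are my stylized numbers, not values from the survey):

```python
# Stylized abstainer arithmetic (invented round numbers, not survey data).
men = women = 100
abstainers_then = {"men": 10, "women": 10}  # ten years ago
abstainers_now = {"men": 28, "women": 18}   # recent cohort

# Point 4: each female abstainer is matched with a male abstainer, so
# the rise in female abstainers is the rise in abstainer "partnerships."
new_partnerships = abstainers_now["women"] - abstainers_then["women"]  # 8

# Point 5: male abstainers beyond the female count have no "partner."
unpaired_male_abstainers = abstainers_now["men"] - abstainers_now["women"]  # 10

# Point 6: non-abstaining women exceed non-abstaining men by the same 10,
# so those women must be sharing partners (or finding older men).
sharing_women = (women - abstainers_now["women"]) - (men - abstainers_now["men"])

print(f"new abstainer partnerships: {new_partnerships}")
print(f"unpaired male abstainers:   {unpaired_male_abstainers}")
print(f"women sharing or looking elsewhere: {sharing_women} "
      f"({sharing_women / women:.0%} of all women)")
```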