Genes and cognitive ability

Nicholas W. Papageorge and Kevin Thom write,

we utilize a polygenic score (a weighted sum of individual genetic markers) constructed with the results from Okbay et al. (2016) to predict educational attainment. The markers most heavily weighted in this index are implicated in neuronal development and other biological processes that affect brain tissue. We interpret the polygenic score as a measure of one type of endowed ability.
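To make “a weighted sum of individual genetic markers” concrete, here is a minimal sketch in Python. The weights and allele counts are hypothetical placeholders, not the actual Okbay et al. (2016) estimates:

```python
def polygenic_score(allele_counts, weights):
    """A polygenic score is a weighted sum: sum_i w_i * x_i, where x_i is
    the allele count (0, 1, or 2) at marker i and w_i is that marker's
    estimated effect size from a genome-wide association study."""
    return sum(w * x for w, x in zip(weights, allele_counts))

# Hypothetical effect sizes and genotypes -- not real GWAS estimates.
weights = [0.02, -0.01, 0.015]
person = [2, 1, 0]  # allele counts at three markers
score = polygenic_score(person, weights)
```

Real scores sum over hundreds of thousands of markers, but the arithmetic is exactly this.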

Perhaps a newer version of the paper is here.

The paper finds that gene-environment interaction matters. But I think it is important that we now have a genetic score that can serve as a proxy for IQ. Also, this genetic score affects economic outcomes even when educational attainment is controlled for.

By the way, Robert Plomin’s forthcoming book is on my radar. This review points out the obvious, which is that the book will not be well received.

And also, Tyler Cowen points to this paper, which says that it is liberals who attribute outcomes more to genetic factors.

I can only imagine genetic effects being powerful if you hold constant the cultural context. Suppose it were possible to create reliable polygenic scores for the Big Five personality traits, plus cognitive ability. I can imagine that these scores would be useful in predicting outcomes among a group of American teenagers. But if you were to take a random sample of teenagers around the world and use nothing but these scores to predict long-term outcomes, I cannot imagine that this would work. To carry the thought experiment even further, think in terms of plopping people with identical polygenic scores into different centuries.

Notes on Nordhaus and Romer

They are the newest Nobel laureates in economics.

1. They had very different career trajectories. Nordhaus, who is 13 years older, started out as a mediocre empirical macroeconomist, known for working on “the investment function.” His creativity emerged much later in his career. Those of us who are on the heterodox right tend to praise most his papers showing the tremendous drop in the cost of light over the centuries and the low percentage of the value of innovation captured by innovators. But he will end up best known for his combination macro-econometric/climate model, which to me is multiplying two instances of faux science together.

Romer produced his most important research much earlier in his career. He detoured into creating Aplia, one of the first computer-based tools for economics teaching. He also detoured into a charter cities project, which fell apart amidst what I call corporate soap opera. He did a brief stint (although longer than I would have predicted) as chief economist for the World Bank.

2. David Warsh, in Knowledge and the Wealth of Nations, focused on Romer and Krugman. Warsh saw them as likely Nobel laureates, and he has now been proven correct.

3. Nick Schulz interviewed Romer for our book From Poverty to Prosperity (re-issued as Invisible Wealth). It was one of the best of the interviews.

4. I have never encountered Nordhaus. To me, Romer comes across as prickly, if not outright bitter. He and I have clashed in writing a few times in recent years. Just a week ago, I disagreed with him. I think of him as sharing Krugman’s tendency to impugn the motives of those with whom he disagrees.

Gene Epstein on Joseph Stiglitz

He writes,

Other Stiglitz critics see stubbornness as a key flaw. “There are many things wrong with Stiglitz as a policy economist,” says economist​ Jagdish Bhagwati, also a University Professor at Columbia. “One is that he doesn’t learn from his mistakes. A New York Times story once quoted me as saying his ‘Initiative for Policy Dialogue’ should more accurately be called ‘Initiative for Policy Monologue.’ ” An even harsher judgment comes from another Columbia colleague, who spoke on condition of anonymity: “Joe’s career tragically demonstrates that if one combines legitimate credentials as a clever and creative theorist with extreme left-wing bias and a colossal ignorance of history, one can accomplish a great deal of harm in the world.”

The article includes all of the major examples where Stiglitz went wrong that I know of, except that it omits discussing Occupy Wall Street. Stiglitz wanted to position himself as some sort of chief economist for that movement.

It is easy to look for similarities between Stiglitz and Krugman. But I see their personalities as different, in fact nearly opposite. One senses that beneath Krugman’s relentless attacks on others he has deep needs for reassurance. In contrast, Stiglitz comes across as having no inner self-doubts. In that sense, he reminds me of David Halberstam’s description of Walt Rostow as feeling unthreatened by Vietnam War critics, because Rostow was so confident that we were winning.

Relative to Krugman, Stiglitz was more prolific and important as an economic researcher. I admire a lot of Stiglitz’s work. With Krugman, there is his early work on economies of scale and trade, but not much else. He gets a lot of support for his liquidity-trap stuff, but not from me.

One may hope that, years from now, Stiglitz’s role as a public intellectual will be forgotten. But with Krugman, that is by far the most significant aspect of his career.

Scott Alexander on causal density

He calls it the omnigenic model.

the sciences where progress is hard are the ones that have what seem like an unfair number of tiny interacting causes that determine everything. We should go from trying to discover “the” cause, to trying to find which factors we need to create the best polycausal model. And we should go from seeking a flash of genius that helps sweep away the complexity, to figuring out how to manage complexity that cannot be swept away.

I prefer the term “causal density,” which James Manzi introduced in Uncontrolled. Many economic phenomena are characterized by causal density. Unfortunately, the mainstream approach is to “sweep away the complexity” by coming up with the simplest possible model that might explain some phenomenon.

Adult marshmallow-test winners do better

William H. Hampton, Nima Asadi, and Ingrid R. Olson write,

Participants engaged in a delay discounting task adapted from O’Brien et al. (2011). In the task, participants were asked to make choices between a smaller sum of money offered now versus a larger sum of money (always $1,000) offered at five different delays.

They then use this variable along with other variables to predict the person’s income.
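For illustration, here is a minimal sketch of how one standard model, hyperbolic discounting, turns such choices into a single discount rate. The indifference points below are made up, and the paper’s own scoring of the task may differ:

```python
def hyperbolic_k(indifference_value, delayed_amount, delay_days):
    """Solve the hyperbolic discounting model V = A / (1 + k*D) for k.
    A steeper discounter accepts a smaller V now, which yields a larger k."""
    return (delayed_amount / indifference_value - 1) / delay_days

# Hypothetical indifference points: "$V now feels equal to $1,000 in D days."
points = [(950, 7), (800, 30), (600, 90), (400, 180), (250, 365)]
k = sum(hyperbolic_k(v, 1000, d) for v, d in points) / len(points)
```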

The results of each model were quite consistent, with occupation and education paramount in each case. On average, the next most important factors were zip code group and gender. While zip code group was highly associated with income, it is worth noting that our data do not adjudicate directionality. Logically, a person’s income is more likely a determinant of where they live than vice versa. Nonetheless, zip codes are a useful proxy for socioeconomic status, which is also related to income (Winkleby et al., 1992). As our zip codes were binned by average income, the association between zip code and income is not surprising, but does suggest that the individuals in our sample had incomes roughly representative of the incomes from their respective zip code group. Regarding gender, we found that males earned more money than females, a result consistent with a corpus of research on the gender wage gap (Nadler et al., 2016). The fifth most important variable was delay discounting, a factor closely related, but distinct from impulsivity. Although previous research had indicated that discounting was related to income (Green et al., 1996), it was unclear to what extent, relative to other factors, this variable mattered. Interestingly, delay discounting was more predictive than age, race, ethnicity, and height

Pointer from Tyler Cowen.

Oy. It would be nice to be able to cite their comment that “delay discounting was more predictive than age, race, ethnicity, and height.” But the flaws I perceive in the study are just too fatal to allow me to do that.

1. Most of the variables that they use to “predict” income are not plausibly exogenous to income. For that matter, it is possible that your level of income helps determine your willingness to delay receiving money, so even their key delay-discounting variable is plausibly endogenous.

2. When you compare the strength of different predictors (hardly ever a valid exercise), measurement error is everything. A variable that is measured unambiguously will do much better than a variable that is measured subject to errors, even if the latter variable has more influence in reality. So gender has the advantage of being unambiguous*, while self-reported ethnicity can be ambiguous.

*all right, some people insist that gender is ambiguous, but I don’t think those people find their way to this blog.
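A small simulation illustrates the attenuation point: take the same predictor twice, once measured cleanly and once with noise, and the noisy copy looks less predictive even though the underlying influence is identical. The numbers are made up:

```python
import random

def slope(xs, ys):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(1)
n = 20000
true_x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * x + random.gauss(0, 1) for x in true_x]   # true coefficient is 2.0
noisy_x = [x + random.gauss(0, 1) for x in true_x]   # same x, measured with error

s_clean = slope(true_x, y)    # near 2.0
s_noisy = slope(noisy_x, y)   # attenuated by var(x) / (var(x) + var(noise))
```

With equal signal and noise variances, the estimated slope on the noisy variable shrinks by about half, even though the real influence has not changed at all.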

Influential books

A reader asks,

I would love to see your personal list of the top most influential books of the past 10 years (or so).

I have to approach this by working backwards: How has my thinking changed in the past ten years or so? Who influenced those changes? What books did they write?

The most important change is that I think of economics as embedded in culture. I note that culture evolves rapidly, at least in comparison with biological evolution. Economics really ought to be tied in with sociology, except that sociologists are so fixated on the oppression story.

People who have influenced me along these lines include Joseph Henrich, Deirdre McCloskey, Joel Mokyr, Douglass North, Kevin Laland, Matt Ridley, and others. Henrich’s The Secret of Our Success struck me the most. Kevin Laland’s Darwin’s Unfinished Symphony deserves mention. I am currently reading Pascal Boyer’s Minds Make Societies, which might end up deserving to be listed here. Ridley’s The Evolution of Everything fits in.

I am captivated by the sociological history spawned in David Hackett Fischer’s Albion’s Seed, which is a masterpiece. For contemporary sociology/politics, I continue to recommend Martin Gurri’s Revolt of the Public. I often cite Charles Murray’s Coming Apart and Robert Putnam’s Our Kids on the socioeconomic divide that is now clearly visible.

For political economy, I have come to believe that liberal democracy is not an easy equilibrium to achieve. I was very much influenced by North, Wallis, and Weingast (Violence and Social Orders). I also was persuaded by Mark Weiner’s Rule of the Clan.

Another important change is that I have come to see economic modeling in the MIT style as a crippled way of dealing with the complexity of the real world. Influence has come from McCloskey, James Manzi, Edward Leamer, and others. Manzi’s discussion of “causal density” in Uncontrolled deepened my already-existing skepticism of regression modeling.

I got pulled back into macroeconomics by the episode of 2008 and beyond. I was drawn to heterodox views. Maybe Leamer’s Macroeconomic Patterns and Stories is the book that stands out the most. I came to better appreciate Hyman Minsky’s thinking by reading Randall Wray’s Why Minsky Matters.

Somewhat related, I have come to see American economics as “born bad.” Thomas Leonard’s Illiberal Reformers was the eye-opener there.

I have come to view political economy in terms of “This is your brain on politics,” with a lot of tribalism built in. Various anthropologists and psychologists contributed to this view. Also Robin Hanson. Jonathan Haidt’s The Righteous Mind was an early influence.

I have come to view specialization and trade as the core of economics. No one book stands out (Adam Smith clearly falls outside “the last ten years or so”). As much as my views fit with the Austrian school, neither the classics of that tradition nor any modern works are directly responsible. I did enjoy Erwin Dekker’s The Viennese Students of Civilization, which probably counts as one of the books that nudged me to view economics as connected with sociology.

What I’m reading

I was sent a review copy of A Crisis of Beliefs, by Nicola Gennaioli and Andrei Shleifer (henceforth GS). They say that the financial crisis of 2008 illustrates a theory of expectations formation in which market participants both place too much weight on recent news and in some circumstances ignore tail risk.

We know from Tetlock, whose name does not appear in the index, that a good forecaster puts a lot of weight on baseline information–characteristics that are more universal and permanent. Inefficient forecasters instead tend to focus on information that is more recent and local. GS argue that financial market participants are inefficient forecasters.

So far, what I like about the book:

1. The writing is clear.

2. Years ago, I contrasted two classes of theories of the 2008 financial crisis. One I called “moral failure” and the other I called “cognitive failure.” The theory that GS build falls within the latter class, which is the one on which I would place more weight.

3. GS take seriously data that comes from surveys of the expectations of market participants. They are not afraid to find fault with the rational expectations hypothesis.

What I don’t like:

GS use standard economic modeling methodology, as opposed to Bookstaber’s agent-based modeling. See my review of The End of Theory. In particular, I think that institutional details are important, and Bookstaber’s rich depiction of different classes of market participants is better than a standard mathematical model. Also, I don’t like the idea of collapsing divergent expectations into a single representative agent. Getting away from the representative-agent model is a point in favor of Frydman and Goldberg. Note that Bookstaber, Frydman, and Goldberg do not appear in the index, either.

Mike Munger on non-ownership

You can watch the podcast at Cato (I watched it live yesterday, so the link may be different). The book is Tomorrow 3.0: Transaction Costs and the Sharing Economy. It can be summarized by a remark from one of my commenters.

The commenter writes,

Ownership is a form of market failure:

– Your car being parked 23hrs a day just to ensure that it’s there when you need it.
– Transaction costs of selling/buying your house tying you down and decreasing efficiency of your human capital.

This reminds me of my line to my high school students that “Do It Yourself is market failure.” I had an economist friend who built a deck for his house to “save money.” I pointed out that if he could get paid his economist’s wage rate while working more hours and then paid someone to build the deck, then that would have saved a lot more money. His inability to get paid for marginally more hours worked as an economist was the market failure.

Transaction costs and agency costs related to land are fundamentally important. In theory, the best way for me to own land is to include a well-diversified mutual fund that invests in real estate as part of my portfolio. In practice, transaction costs make me want to stay in a particular dwelling much longer than might otherwise be optimal, and agency costs make it more likely that a property will be well cared for by an owner than by a renter. Overcome those sources of market failure and you make it feasible to own a diversified real estate portfolio instead of being stuck with one home.

Pessimistic meta-induction

Charles Chu explains what it means.

Much of what we believe today is doomed to join other infamous dead theories like Lamarckism (“Giraffes have long necks because they used them a lot.”), bloodletting (“Let me put a leech on your forehead. It’ll cure your allergies. I promise.”), and phrenology (“I’m better than you because I have a bigger head.”).

Philosophers have a name for this concept. To help make it memorable for undergraduates, they kindly titled it the “Pessimistic Meta-Induction from the History of Science”.

The essay makes the case for intellectual humility and for challenging yourself to take the ideological Turing test.

Question from a reader

May I recommend an explanation of what economists mean by “Bayesian”? See it everywhere but, even though I’ve googled the term looking for some simple, understandable definition, I just cannot grasp it.

1. I don’t use that term much, if at all. So maybe someone else should answer it.

2. A Bayesian as opposed to what? In statistics, the opposite is a Frequentist. The difference is one of interpretation, and it shows up, for example, in the interpretation of a confidence interval. Suppose we poll a sample of voters and find that 55 percent support policy X, with a margin of error of plus or minus 3 percent at a 90 percent confidence level. A Bayesian statistician would be comfortable saying that these results indicate that there is a 90 percent chance that the true proportion of supporters in the overall population is between 52 and 58 percent. The frequentist philosophy is that the proportion of supporters in the overall population is what it is. You cannot make probability statements about it. What you can say about your confidence interval of 52 to 58 is that if the true proportion of supporters were outside of that interval, the probability that your poll would have found 55 percent supporters is less than 10 percent.
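A quick simulation illustrates the frequentist reading: the interval, not the true proportion, is the random object, and about 90 percent of such intervals will cover the fixed truth. The poll size and trial count here are arbitrary:

```python
import math
import random

def poll_interval(true_p, n, z=1.645, rng=random):
    """Simulate one poll of n voters; return a 90 percent confidence interval
    built from the normal approximation to the sampling distribution."""
    p_hat = sum(rng.random() < true_p for _ in range(n)) / n
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

random.seed(0)
true_p = 0.55   # fixed, but unknown to the pollster
trials = 2000
covered = sum(lo <= true_p <= hi
              for lo, hi in (poll_interval(true_p, 800) for _ in range(trials)))
coverage = covered / trials   # close to 0.90
```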

3. By analogy, I would guess that economists use the term Bayesian to describe someone who is willing to make probability statements that describe their degree of belief in a proposition that in practice has to be either true or false. When a weather forecaster says that there is a 20 percent chance of measurable precipitation tomorrow, that sounds like a Bayesian forecast. In the end, we will either have measurable precipitation or we won’t. The “20 percent chance” formulation says something like “I don’t expect rain, but I could turn out to be wrong.”

4. “Bayesian” also refers to a process of updating predictions. As new information comes in, the forecaster may say, “Now I think that there is a 40 percent chance of measurable precipitation tomorrow.”

5. Similarly, a statement like “The Democrats will nominate an avowed socialist in 2020” is either going to turn out to be true or false. But a Bayesian would be willing to say something like “I give it a 10 percent chance” and then revise that probability up or down as new information develops.

In this case, the opposite of a Bayesian would be someone with firm beliefs that are not responsive to new information.
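For what it’s worth, the updating in points 4 and 5 is mechanical once you write down Bayes’ rule. A minimal sketch with made-up numbers:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numer = p_e_given_h * prior
    return numer / (numer + p_e_given_not_h * (1 - prior))

# Hypothetical numbers: a 10 percent prior, then a piece of evidence
# that is twice as likely if the hypothesis is true as if it is false.
p = bayes_update(0.10, 0.40, 0.20)   # rises to about 0.18
```

Evidence that is twice as likely under the hypothesis roughly doubles the odds, which is why the probability moves from 0.10 toward 0.18 rather than jumping to certainty.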

Again, I don’t apply the Bayesian label myself, so I am not sure that I am the best person to articulate the intent of those who do use it.