The Solow Model: is it the GOAT?

It is probably too late to write a biography of Robert Solow. He has outlived his teachers, his peers, and even some of his students, which makes it difficult to gather material. I do think that biographies of economists can be very insightful, and I would encourage any young economist or economic journalist to search for an interesting subject. Of the living Nobel Laureates, the ones I would most like to read about are Solow, Vernon Smith, George Akerlof, and Robert Merton. Probably also Paul Romer, although there is some discussion of him in David Warsh’s Knowledge and the Wealth of Nations. From the business world, Hal Varian and Bob Litterman come to mind, although I am no doubt forgetting a number of interesting business economists.

Here, I will sketch my experiences with Solow and my impressions of him.

Adversity and SAT scores

The WSJ had an article in the print edition on November 27 that I cannot find online (their search function is not helpful). The print article was called ‘Adversity’ Has Big Effect on SAT Scores. What I can find online instead is this: What Happens if SAT Scores Consider Adversity? Find Your School.

Anyway, the WSJ uses a Georgetown education researcher’s regression equation relating SAT scores to “adversity scores” to make inferences such as

Top public magnet schools performed exceptionally well in adjusted SAT scores, meaning their scores jump when adversity is accounted for.

To see why this is not a valid inference, suppose that there were two students of identical backgrounds but different ability levels. Presumably, the magnet school would select the student with higher ability, leaving the other student to attend a regular school. The more able student would get a higher SAT score, but that would say nothing about the magnet school’s “performance.”
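The selection argument can be made concrete with a small simulation. This is a minimal sketch under assumed parameters (the ability and adversity weights, the 10 percent admission cutoff, and the simple additive score model are all my inventions, not anything from the WSJ analysis): schools have exactly zero causal effect on scores, yet the magnet school shows a large "adversity-adjusted" advantage purely because it selects on ability.

```python
import random
random.seed(0)

N = 100_000
students = []
for _ in range(N):
    ability = random.gauss(0, 1)
    adversity = random.gauss(0, 1)
    # SAT depends only on ability and adversity; by construction,
    # NO school has any causal effect on any student's score.
    sat = 1000 + 100 * ability - 50 * adversity + random.gauss(0, 50)
    students.append((ability, adversity, sat))

# The magnet school admits the top 10% by ability, regardless of adversity.
cutoff = sorted(s[0] for s in students)[int(0.9 * N)]
magnet = [s for s in students if s[0] >= cutoff]
regular = [s for s in students if s[0] < cutoff]

def adjusted_mean(group):
    # "Adversity-adjusted" score: add back the adversity penalty,
    # as a regression adjustment would (true slope assumed known here).
    return sum(sat + 50 * adv for _, adv, sat in group) / len(group)

gap = adjusted_mean(magnet) - adjusted_mean(regular)
print(round(gap))  # a large adjusted-score gap, despite zero school effect
```

The gap is entirely the ability difference between admitted and non-admitted students; interpreting it as the magnet school "performing exceptionally well" is exactly the invalid inference described above.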

I sent a letter to the editor of the WSJ about this, but they did not print it. I hope that someone there gets the message that this was statistical malpractice.

Doubts about teacher value added

Marianne Bitler and others write,

Using administrative data from New York City, we find estimated teacher “effects” on height that are comparable in magnitude to actual teacher effects on math and ELA achievement: 0.22σ, compared to 0.29σ and 0.26σ, respectively. On its face, such results raise concerns about the validity of these models.

. . . our results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings. They point to the possibility of the misidentification of sizable teacher “effects” where none exist. These effects may be due in part to spurious variation driven by the typically small samples of children used to estimate a teacher’s individual effect.

VAMs = value-added measures. Pointer from a reader. I note that some recent NBER working papers are now free downloads. Others are not. This one is.

Lest you miss the point, this paper shows that the same methods that purport to show an effect of teachers on student achievement also show an effect of teachers on student height. But the effect of teachers on height is almost surely spurious. So the effect of teachers on achievement may also be spurious.
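The small-sample mechanism the authors point to is easy to illustrate. In this sketch (the class size of 25, the number of teachers, and the use of a raw class-mean estimator are my assumptions, a much cruder setup than the paper's actual VAM), the outcome is pure student-level noise, think standardized height, with a true teacher effect of exactly zero; yet the spread of estimated "teacher effects" comes out around 0.2σ, comparable in size to reported achievement effects.

```python
import random
import statistics
random.seed(1)

TEACHERS, CLASS_SIZE = 500, 25

# Outcome is pure student-level noise (standard normal);
# the true effect of every teacher is exactly zero.
teacher_effects = []
for _ in range(TEACHERS):
    scores = [random.gauss(0, 1) for _ in range(CLASS_SIZE)]
    # Naive "value-added" estimate: the class mean.
    teacher_effects.append(statistics.mean(scores))

# The spread of class means looks like a sizable teacher effect,
# but it is entirely sampling noise: roughly 1/sqrt(CLASS_SIZE).
spread = statistics.stdev(teacher_effects)
print(round(spread, 2))
```

With 25 students per class, noise alone generates an apparent teacher-effect standard deviation of about 1/√25 = 0.2σ, which is why finding "effects" of that magnitude on height should make one suspicious of effects of similar magnitude on achievement.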

1. This provides vindication for Jerry Muller’s The Tyranny of Metrics.

2. It provides support for the Null Hypothesis.

3. The research that seemed to show a big effect of teachers (e.g., Raj Chetty on kindergarten teachers) got a lot of play in the press. But that had social desirability bias going for it. I would be surprised if this paper receives similar notice.