Andrew Gelman is Too Glib

Not in general, but in this post, where he writes,

I’d like to flip it around and say: If we see something statistically significant (in a non-preregistered study), we can’t say much, because garden of forking paths. But if a comparison is not statistically significant, we’ve learned that the noise is too large to distinguish any signal, and that can be important.

Pointer from Mark Thoma. My thoughts:

1. Just as an aside, economists are sometimes (often?) guilty of treating absence of evidence as evidence of absence. For example, if you fail to reject the efficient markets hypothesis, can you treat that as evidence in favor of the EMH? Many have. Similarly, when Bob Hall could not reject the random walk model of consumer spending, he said that this was evidence in favor of rational expectations and consumption smoothing.

2. I think that a simpler way to make Gelman’s point would be to say that passing a statistical significance test is a necessary but not a sufficient condition for declaring the evidence to be persuasive. In particular, one must also address the “selection bias” problem, which is that results that pass significance tests are more likely to be written up and published than results that fail to do so.
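
To make the selection-bias point concrete, here is a minimal simulation sketch of my own (the effect size, noise level, and significance threshold are all assumed for illustration): every study estimates the same small true effect, but only the studies that clear the 5 percent bar get “published,” and the published estimates overstate the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1   # small true effect (assumed for illustration)
se = 0.2            # standard error of each study's estimate (assumed)
n_studies = 10_000

# Each "study" produces one noisy estimate of the true effect.
estimates = rng.normal(true_effect, se, n_studies)

# A result is "statistically significant" if |estimate| > 1.96 * se,
# and only significant results get written up and published.
published = estimates[np.abs(estimates) > 1.96 * se]

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean of all estimates:       {estimates.mean():.2f}")
print(f"Mean of published estimates: {published.mean():.2f}")
print(f"Share of studies published:  {len(published) / n_studies:.1%}")
```

The published estimates cluster far above the true effect, which is exactly why clearing the significance bar is necessary but not sufficient.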

Apparently, Gregory Clark is Not Most Economists

Gillian B. White writes,

According to the Fed study, about 60 percent of black children whose parents had income that fell into the top 50 percent of the distribution saw their own income fall into the bottom half during adulthood. This type of downward slide was common for only 36 percent of white children.

…Still, most economists lack a clear, definitive explanation for why, after reaching the middle class, many black American families quickly lose that status as their children fall behind.

Pointer from Mark Thoma.

Obviously, she did not read my review of Gregory Clark’s latest book.

Clark suggests that the underlying means for these ethnic groups may differ, and that the higher propensity of middle-income blacks and Hispanics to have their children’s income fall to the bottom third might be due to regression toward a lower mean.

Suppose that you have two populations of men with different height-producing genetic characteristics. The mean height in group A is 5 feet, 9 inches, and the mean height in group B is 5 feet, 7 inches. There is substantial variation within each group.

Now, out of the current generation of men, you select men from each group who happen to be 5 feet, 8 inches. Track the height of their sons. It seems reasonable to predict that, starting with men who are 5 feet 8 inches, the sons of men from group A are likely to be taller than the sons of men from group B. This does not result from social prejudice against men from group B. It is the result of laws of probability.
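
A back-of-the-envelope sketch makes the mechanism explicit (the group means match the example above; the regression slope is an assumed illustrative value, not an estimate of actual heritability):

```python
# Group means in inches: 69" = 5'9" (group A), 67" = 5'7" (group B).
mean_a, mean_b = 69.0, 67.0
slope = 0.5          # assumed father-to-son regression slope, for illustration

def expected_son_height(father_height, group_mean):
    """Sons regress partway from the father's height toward the group mean."""
    return group_mean + slope * (father_height - group_mean)

father = 68.0        # a 5'8" father, the same height in either group
print(f"Expected son, group A father: {expected_son_height(father, mean_a):.1f} inches")  # ~68.5
print(f"Expected son, group B father: {expected_son_height(father, mean_b):.1f} inches")  # ~67.5
```

Starting from identical fathers, the predicted sons differ by a full inch, purely because each group regresses toward its own mean.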

The Returns to College Going Forward

Nick Bunker writes,

Intuitively, then, increasing the supply of educated workers should reduce inequality as it would increase wages among a broader supply of more educated workers. But that assumes the demand for educated workers will continue to rise. Problem is, recent research finds that the demand for skilled labor appears to be on the decline.

Pointer from Mark Thoma.

Let us think about the “race between education and technology” idea. The Goldin-Katz story is that the high school movement helped produce a work force that could earn decent incomes in the industrial era. This is a nice just-so story, but note that in the late 19th and early 20th centuries the just-so story was that industrialization was reducing the demand for skills, replacing the craftsman with the assembly-line worker.

But let us suppose that more education is needed to enable the typical worker to keep pace with changes in technology. That is, suppose we buy that there is a race between education and technology. In that case, I am pretty sure that education has to lose that race.

Change in technology is being led by Moore’s Law. The core components of computers get twice as good every couple of years. Maybe that is slowing down a bit. But even so, it is much faster than the rate of improvement in steam engines in the 19th century or electric motors in the 20th century.

As an indicator of faster technological change, look at how much more quickly smart phones achieved mass adoption in comparison with personal computers.

As an indicator of how hard it is for humans to keep up, look at computers and chess. Twenty years ago, the world’s biggest computer could not have beaten the human world champion. Now, you could do it with a laptop. Maybe even a smart phone.

The metaphor of a “race” suggests that the two participants are capable of moving at the same speed. But if you compare Moore’s Law with the highest feasible rate at which we might increase educational attainment, you realize that the two speeds are hopelessly different. Either we come up with some radical, paradigm-shifting way of improving human learning capacity (genetic engineering? implants? Diamond Age primers?) or the machines are certain to win.
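
A crude compounding exercise shows why I say the speeds are hopelessly different (both growth rates below are my own rough assumptions, chosen only for scale):

```python
# Rough orders of magnitude only; both rates are assumptions.
years = 20

# Moore's Law: computing capability doubles roughly every two years.
compute_growth = 2 ** (years / 2)

# Education: suppose average educational attainment rises 1% a year (generous).
education_growth = 1.01 ** years

print(f"Over {years} years:")
print(f"  computing capability grows about {compute_growth:,.0f}x")
print(f"  educational attainment grows about {education_growth:.2f}x")
```

A thousand-fold improvement against a 20 percent improvement is not a race; it is a rout.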

Raj Chetty on Empiricism Without Theory

The talk is here. Pointer from Tyler Cowen.

Broadly speaking, Chetty makes two points. One is that behavioral economics has inspired empirical analysis that can be useful for policy. The other is that we do not have to care about theory. Although theory might guide us to try certain empirical studies and might explain why a policy will work, all we need is the empiricism to know that a policy will work.

I found this view at best shallow and at worst not persuasive. Take one of his examples, in which a study guided by the theory of Loss Aversion found that classroom results improve more when incentives are framed in terms of what teachers will lose if results are bad than in terms of bonuses they will earn if results are good. My thoughts:

1. Given that the Null Hypothesis tends to be more robust than studies that purport to show significant effects from educational interventions, one ought to be pretty cautious about this.

2. A non-empirical a priorist of a Misesian bent, without knowing any behavioral economics, would recommend a market-provided school over a government-provided school. Among other advantages, the market will tend to punish poor performance, as markets tend to do. So in this example, without doing any empirical work at all, one can arrive at a recommendation that would be at least as effective as anything that Chetty might propose. While I am no Misesian, I do find the generic arguments against central planning more compelling than just about any empirical finding suggesting nudging opportunities.

3. I think that those who advocate behavioral economics would do well to also acknowledge behavioral politics. If Chetty understood how teachers unions operate where I live, he might feel a bit less excited about the opportunities for reform using his findings.

Related: Noah Smith on how the profession has been moving left.

Economics has become much more empirical, and that has made it much harder to wave away the possibility of market inefficiencies.

Pointer from Mark Thoma.

Narrative is the New Baloney Sandwich

Brad DeLong writes,

When it became clear in late 2008 that the orgy of deregulation coupled with global imbalances was confronting the global economy with a shock at least as dangerous as the Great Crash that had initiated the Great Depression. . .

Pointer from Mark Thoma.

Noting that on this year’s Washington Post list, “narrative” is in the “out” column, to be replaced this year by “facts,” I resolve in 2015 to use the phrase “I call Narrative” where I would have said “I call Baloney Sandwich.”

The phrase “orgy of deregulation” is a much-used narrative/baloney sandwich. Others have used it. Interestingly, in the version of the essay that appears on Project Syndicate, DeLong does not use it.

1. The facts are that one can just as easily blame the financial crash on an attempted tightening of regulation. That is, in the process of trying to rein in bank risk-taking by adopting risk-based capital regulations, regulators gave preference to highly-rated mortgage-backed securities, which in turn led to the manufacturing of such securities out of sub-prime loans. (A back-of-the-envelope sketch of the capital arithmetic follows this list.)

2. The global imbalances that many of us thought were a bigger risk factor than the housing bubble did not in fact blow up the way that we thought that they would. The housing bubble blew up instead.

3. I call narrative whenever someone talks about the causes of the financial crisis without making any reference to looser mortgage lending standards and/or without mentioning that government policies were hostile not to those institutions that dropped rigorous lending standards but to those that attempted to maintain them.
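
On point 1, here is the back-of-the-envelope sketch of the capital arithmetic (the 8 percent requirement and the 50 and 20 percent risk weights are the figures commonly cited for whole mortgages and highly-rated securities; treat them as illustrative):

```python
# Illustrative Basel-style capital arithmetic; the weights are assumptions.
capital_requirement = 0.08       # capital as a share of risk-weighted assets
risk_weight_whole_loan = 0.50    # whole residential mortgages
risk_weight_rated_mbs = 0.20     # highly-rated mortgage-backed securities

def required_capital(assets, risk_weight):
    return assets * risk_weight * capital_requirement

print(f"Capital to hold $100 of whole mortgages:  ${required_capital(100, risk_weight_whole_loan):.2f}")
print(f"Capital to hold $100 of highly-rated MBS: ${required_capital(100, risk_weight_rated_mbs):.2f}")
```

Holding the rated security instead of the underlying loans cuts the capital requirement by more than half, which is the regulatory preference I have in mind.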

Chris Dillow on Complexity Economics

No, it’s not another review of Colander and Kupers (but I wonder what he would think of it). He writes,

One feature of complexity economics is that recessions can be caused not merely by shocks but rather by interactions between companies. Tens of thousands of firms fail every year. Mostly, these failures don’t have macroeconomic significance. But sometimes – as with the Fukushima nuclear power plant or Lehman Brothers – they do. Why the difference? A big part of the answer lies in networks. If a firm is a hub in a tight network, its collapse will cause a fall in output elsewhere. If, however, the network is loose, this will not happen; the loss of the firm is not so critical. Daron Acemoglu has formalized this in an important paper, and there are some good surveys of network economics in the latest JEP.

Read the whole thing. Pointer from Mark Thoma. My thoughts:

1. From a PSST perspective, the importance of a highly-connected firm makes sense. The more connected a firm is, the more patterns of specialization and trade depend on that firm. Also, this may help to explain why shocks in the economy do not average out. A shock that suddenly destroys a highly-connected firm is not going to simultaneously create an equally highly-connected firm somewhere else. My guess is that dense networks of connection are both difficult to create and difficult to destroy, but they can be destroyed more rapidly than they can be created. (A toy sketch of this point follows the list.)

2. Note that complexity economics attracts some attention from heterodox economists on both the left and the right.

3. Dillow thinks that complexity economics deserves more attention. I agree that one reason it tends to be overlooked is that it does not provide the clarity of prediction and tidiness of results that are sought by mainstream economists.

4. Mainstream economists and complexity economists would agree that the world is complex and that models are simplifications. Mainstream economists emphasize the virtues of simple models, while heterodox economists emphasize the vices.
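
Here is the toy sketch promised in point 1 (my own construction, not Acemoglu’s model; every number is assumed): the output lost when a firm fails depends on how many other firms rely on it as a supplier.

```python
# Toy supply network: each firm produces 1 unit of output and loses half
# of it if one of its suppliers fails. All numbers are assumed.
suppliers = {
    "hub": [],                   # supplies four other firms
    "peripheral": [],            # supplies one other firm
    "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"],
    "e": ["peripheral"],
}

def total_output(failed=None):
    output = 0.0
    for firm, deps in suppliers.items():
        if firm == failed:
            continue                         # the failed firm produces nothing
        disrupted = failed in deps
        output += 0.5 if disrupted else 1.0  # disrupted customers lose half their output
    return output

print(f"Baseline output:               {total_output():.1f}")
print(f"After the hub fails:           {total_output('hub'):.1f}")
print(f"After a peripheral firm fails: {total_output('peripheral'):.1f}")
```

A failure of the same size costs much more when the failed firm sits at the center of a tight network, which is the PSST intuition that patterns of trade depend on a few key nodes.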

Brad DeLong Starts a Labor Market Chartfight

He writes,

If the US economy were operating at its productive potential, the share of 25 to 54-year-olds who are employed ought to be what it was at the start of 2000. Back then there were few visible pressures leading to rising inflation in the economy.

Does anybody disagree with that?

Read the whole thing. Pointer from Mark Thoma.

When Bill James was writing his annual Baseball Abstract, you could say that his main goal was to focus attention away from meaningless baseball statistics and toward better metrics. In that spirit, I think that looking at prime-age employment-population ratios, broken down by gender, is a valuable approach. That is what Brad has done with this chart. However, I want to offer a different way of looking at it (and this way could be helpful or it could be misleading).

As I read his chart, between 2000 and 2006, the employment-population rates for males and females each dropped by about 2 percentage points. Most recently, they were 5 percentage points and 4.5 percentage points below 2000 levels. So, one way to look at the chart is that if you drew a trend line from 2000 to 2006, and then extended that trend line to 2014, the line would hit pretty close to the current numbers.
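
The “trend line trick” is just a linear extrapolation. With the rough numbers I read off the chart (approximate, by eye):

```python
# Approximate figures read off DeLong's chart, by eye.
drop_2000_to_2006 = 2.0              # percentage-point decline over six years
annual_trend = drop_2000_to_2006 / 6

# Extend the 2000-2006 trend out to 2014 (fourteen years after 2000).
predicted_drop_by_2014 = annual_trend * 14

print(f"Trend-line prediction for 2014: {predicted_drop_by_2014:.1f} points below 2000")
print("Actual readings:                roughly 5.0 (men) and 4.5 (women) points below 2000")
```

The extrapolated decline of roughly 4.7 points lands close to the observed 4.5-to-5-point gap, which is all the “trick” amounts to.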

I do not mean to dispute in any way DeLong’s larger point, which is that the Fed is nuts to be more worried about inflation than labor market slack at the moment. But as you know, I don’t much care what the Fed worries about or does not worry about. I am inclined to see structural forces at work in the data, and the “trend line trick” is sufficient to fit the data to my priors.

We are Re-living 2003

Describing the latest Fed pronouncement, David Andolfatto writes,

how new are these buzzwords? They’re not new at all. Consider this from the December 09, 2003 FOMC statement

I have said before that the economy resembles 2003. Output has recovered more strongly than employment. Long-term bond rates are puzzlingly low. House prices have been rising (quite rapidly near us in suburban Maryland). Policy makers are trying to loosen mortgage credit.

UPDATE: See also Mark Thoma/Tim Duy.

Seeing the Cloud in the Silver Lining

Carl Benedikt Frey writes,

The problem is that most industries formed since 2000—electronic auctions, Internet news publishers, social-networking sites, and video- and audio-streaming services, all of which appeared in official industry classifications for the first time in 2010—employ far fewer people than earlier computer-based industries. Whereas in 2013 IBM and Dell employed 431,212 and 108,800 workers, respectively, Facebook employed only 8,348 as of last September.

The reason these businesses spin off so few jobs is that they require so little capital to get started. According to a recent survey of 96 mobile app developers, for example, the average cost to develop an app was $6,453. Instant-messaging software firm WhatsApp started with a relatively meager $250,000; it employed just 55 workers at the time Facebook announced it was buying the company for $19 billion. All of which explains why new technologies throughout the 2000s have brought forth so few new jobs.

Pointer from Mark Thoma.

My thoughts:

1. Either IBM and Dell produced much more output than Facebook, or Facebook exhibits much higher productivity. Of course, valuing the output of IBM and Dell is difficult, and valuing the output of Facebook is impossible.

2. Without saying so, Frey is complaining about high productivity growth.

3. Frey does not point out that the official productivity statistics do not show high productivity growth. Not that I am a proponent of the official productivity statistics.

4. Frey does not point out that most economists view high productivity growth as a good thing.

5. I think that most non-economists (and maybe even some economists) do not realize that Thiel-Cowen stagnation is incompatible with Summers stagnation. The former is a story of disappointingly low productivity growth, and the latter is a story of “excess” productivity growth. I personally do not buy either stagnation story.

6. If you think that the media likes bad news, then they are bound to like either stagnation story (or both simultaneously, even though they contradict one another). The media deck is stacked against optimists. I would say that it is even stacked against realists.

Budget Uncertainty

CBO Director Elmendorf writes,

in some situations, legislators might want to adopt policies with a smaller variance of budgetary effects in order to reduce the risk of large fiscal problems. For example, understanding the extent of uncertainty about future federal spending that arises from uncertainty about lifespans might affect whether policymakers want to index eligibility ages for certain programs to lifespans.

Pointer from Mark Thoma.

A couple of years back, when some CBO staff invited me to discuss their work, I offered two complaints. One was that their macroeconomic forecasting was treated as “scoring” by the public, and I thought that they needed to make clear the tenuous nature of macro modeling. The other was that I thought that they should be augmenting point forecasting of the budget with scenario analysis.

I think that reporting a number like “forecast variance” would not be helpful to politicians. However, reporting what would happen under particular scenarios, such as increased longevity, higher interest rates, or lower house prices, could be very useful.
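
A minimal sketch of what scenario reporting might look like (the model and every number in it are invented for illustration; this is not how CBO models the budget):

```python
# Toy budget projection under alternative scenarios; all figures are invented.
baseline = {"spending_growth": 0.04, "interest_rate": 0.03}

scenarios = {
    "baseline": {},
    "increased longevity": {"spending_growth": 0.05},   # more years of benefits
    "higher interest rates": {"interest_rate": 0.05},
}

def outlays_in_ten_years(spending_growth, interest_rate,
                         spending=100.0, debt=500.0):
    """Program spending plus interest on the debt, ten years out (index numbers)."""
    projected_spending = spending * (1 + spending_growth) ** 10
    interest_cost = debt * interest_rate
    return projected_spending + interest_cost

for name, overrides in scenarios.items():
    total = outlays_in_ten_years(**{**baseline, **overrides})
    print(f"{name:>22}: projected outlays = {total:.0f}")
```

Reporting the spread across scenarios like these tells a politician something that a single point forecast, or a variance number, does not.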