A Recession as a Mood Affiliation

David Tuckett and many others write,

The prototype creates what is called a relative sentiment shift (RSS) time series. This calculates
changes, in any text database across time, in the number of words related to the category of excitement relative
to the number of words in the category of anxiety, adjusting for the number of words in the articles, etc.
Results suggest that this form of analysis has strong potential for improving our understanding of what is happening in the economy and where policy action might be required.

One digital data source is the Reuters News Archive, which spans 1996 to 2013 and contains over 14 million text documents. Figure 2 shows a relative sentiment shift time series generated from all texts originating in the United States (dashed curve) plotted against US GDP (solid curve). The sharp drop in GDP in the recession of 2008-9 is evident. It is equally evident that the relative sentiment series begins to fall well in advance of the decline in GDP.
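As I read the description, the core of the calculation could be sketched roughly as follows. This is a minimal sketch, not the prototype's actual method; the word lists and function names here are my own inventions.

```python
from collections import Counter

# Hypothetical word lists; the actual categories would be far larger.
EXCITEMENT = {"boom", "surge", "optimism", "confidence"}
ANXIETY = {"fear", "worry", "crisis", "uncertainty"}

def relative_sentiment(text):
    """(excitement words - anxiety words) / total words, for one document."""
    words = text.lower().split()
    counts = Counter(words)
    excite = sum(counts[w] for w in EXCITEMENT)
    anxious = sum(counts[w] for w in ANXIETY)
    return (excite - anxious) / max(len(words), 1)

def rss_series(docs_by_period):
    """Average relative sentiment per period.

    docs_by_period maps a time period (e.g. a quarter) to a list of texts;
    the result is a period -> score mapping, i.e. the time series.
    """
    return {period: sum(relative_sentiment(d) for d in docs) / len(docs)
            for period, docs in docs_by_period.items()}
```

The division by total words corresponds to the "adjusting for the number of words in the articles" step in the quoted description.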

You might want to read the whole article, which treats a number of methodological issues, not always in ways with which I agree. My thoughts on this particular example:

1. For some reason, I am reminded of the way that animals can sense bad weather coming.

2. Perhaps it is not surprising that the participants in the economy sense that things are bad before the economic statistics reflect that. And note that policy makers receive economic statistics with a lag.

3. This sort of analysis does not tell us anything about what is causing anxiety to rise relative to excitement, or what to do about it.

Behavioral Non-Science

Slavisa Tasic and Zeljka Buturovic write,

While it is difficult to gauge the cost of various decision-making errors with any precision, it may be worth contrasting them against the costs of mistakes that clearly have nothing to do with cognitive biases: the cost of choosing a profession one ends up hating, the cost of not finding a suitable mate, the cost of having children too early in life or too late, the cost of moving to a place one ends up disliking, the cost of adopting a pet or sending children to a private school, and so on. These types of decisions–i.e., actual, important decisions in which errors are genuinely costly–are not typically studied in depth. . .Faced with difficulties in assessing the accuracy of the outcome of social judgments in the real world, the field [behavioral economics] has produced various norms of judgment against which to judge human performance, but only in highly artificial settings.

My view:

1. Economics is non-experimental. Instead, we work with interpretive frameworks that cannot be falsified empirically. This means that economic models do not have the epistemic status of models in the physical sciences, which can be falsified through experiments. All of our interpretive frameworks have some degree of plausibility but also are challenged by real-world anomalies. Economists can differ in their willingness to tolerate anomalies in their preferred interpretive frameworks.

2. Behavioral economics is experimental, but the experiments test people making minor decisions in peculiar, isolated settings.

3. Therefore, I go back to (1).

Questions for Garett Jones

These questions come after a quick reading of Hive Mind. The core issue is what he calls the paradox of IQ. That is, among individuals, the correlation between IQ and income is modest. However, among nations, the correlation between average IQ and average income is strong.

How does your high IQ raise my income? I can think of four possible explanations for this paradox.

1. Statistical artifact.
2. Proximity effect–I earn more income by living close to people with high IQ’s.
3. Cultural effect–people with high IQ’s transmit good cultural traits to me.
4. Political effect–having people with high IQ in my jurisdiction leads to me enjoying better government.

Can we rule out statistical artifact? Put it this way. Suppose we chose 1000 people at random. Then we create 50 groups of them. Group 1 has the 20 lowest IQ scores. Group 2 has the next 20 lowest IQ scores, etc. Then we run a regression of group average income on group average IQ for this sample of 50 groups. My prediction is that the correlation would be much higher than you would get if you just took the original sample of 1000 and computed the correlation of IQ and income. I think that this is because grouping the data filters out noise. Perhaps the stronger correlation among national averages is just a result of using (crudely) grouped data.
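The thought experiment can be checked with a quick simulation. All the numbers here (the IQ-income slope, the noise level) are made up purely for illustration:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

random.seed(42)
n = 1000
# Simulated individuals: income depends only weakly on IQ,
# swamped by large idiosyncratic noise.
iq = [random.gauss(100, 15) for _ in range(n)]
income = [500 * q + random.gauss(0, 40000) for q in iq]

r_individual = corr(iq, income)

# Sort by IQ, form 50 groups of 20, and average within each group.
pairs = sorted(zip(iq, income))
group_iq = [statistics.mean(p[0] for p in pairs[i:i + 20]) for i in range(0, n, 20)]
group_income = [statistics.mean(p[1] for p in pairs[i:i + 20]) for i in range(0, n, 20)]

r_grouped = corr(group_iq, group_income)
# Averaging within IQ-sorted groups cancels much of the individual noise,
# so r_grouped comes out substantially higher than r_individual.
```

The design choice that does the work is the sorting before grouping: group averages of the noise shrink with the group size, while the spread in group-average IQ is preserved.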

Can we sort out between proximity effects, cultural effects, and political effects? Perhaps a natural experiment involving people from different cultures moving to different jurisdictions, or people living close to one another but having different cultures?

The most parsimonious proximity effect could be capital per worker. Assume that people tend to invest close to home (Jones calls this the Feldstein-Horioka effect when it applies across countries). Then if high-IQ people invest more wisely, then I will have better capital to work with if I live close to high-IQ people. Or if high-IQ people invest more (because, as Jones points out, they are more patient), then I will have more capital to work with if I live close to high-IQ people. How well does capital per worker serve as a channel for transmitting someone else’s IQ to my income?

Another proximity effect would be strong complementarity in team production (what Jones, following Kremer, calls the O-Ring effect). If the value of my output depends on the value of others in a team, then I will be better off living close to people with high IQ’s.

What happens when you divide the U.S. into fifty states and put each state into the database with other countries? My guess is that Mississippi will look really good on average income relative to average IQ when you compare it with Denmark. If so, is that because of high capital per worker in Mississippi? A higher-trust culture? Or better overall governance than Denmark?

Online Education Do’s and Don’ts

From Peter Navarro.

DO break each of the presentations up into short modules. A good guideline is that such modules should be 3 to 7 minutes, and never exceed that limit.

…Do NOT wing it. I always write scripts for everything I record either on camera or as voiceover—umms, ahs, and awkward pauses or rambling threads just don’t cut it with today’s discerning students.

The latter is a problem for me. On the one hand, I um and ah a lot, which argues for scripting. On the other hand, reading a scripted presentation is too boring for me.

In fact, if you ever encounter me giving a live event, you will find the prepared presentation disappointing and the Q&A to be the best part.

All in all, although I have done a lot of instructional YouTube videos, I do not believe that making them is my comparative advantage.

Navarro adds this:

After cascades of student complaints, Coursera decided to experiment with the on-demand format, with me as one of their first guinea pigs. With this approach, eager beavers can now “binge” their way through my Coursera courses as fast as Netflix users have gone through a season of House of Cards. At the other end of this on-demand spectrum, slow pokes can turn a normal ten-week race into a six-month marathon—and thereby better avoid contributing to the high drop-out rate symptomatic of MOOCs

I really think that the issue goes beyond this. The idea of a “course” may be an unnatural construct in the on-line world. As a student, you tend to be more interested in the parts than in the whole. When you want to get into a large body of material, a book may be the best format.

Ecologists and Engineers

Don Boudreaux writes,

When a biologist encounters in a living organism a physical or behavioral trait that is unusual or unfamiliar, and that does not contribute to survival in any way that is immediately obvious, the biologist’s professional instinct is to think hard about that trait in order to identify its likely genetic benefit to its possessor. The biologist, upon encountering such a trait, does not leap to the conclusion that he or she has encountered an instance of “nature failure.” The biologist, of course, recognizes that nature and natural selection are never perfect; sometimes living creatures are indeed saddled with traits that do indeed reduce their genes’ chances of survival. But this possibility of “nature failure” is not the competent biologist’s first go-to explanation whenever he or she cannot grasp the reason why natural selection might have created in the organism this unusual or unfamiliar trait.

In his new book, Foolproof, Greg Ip suggests that there are two types of economists: ecologists and engineers.

An engineer thinks about how to design a machine. An ecologist thinks about how to understand and protect an evolving system.

In The Book of Arnold, I suggest that after the Second World War, the MIT economics department, fed by funding from the Department of Defense, promoted the engineering mindset. That mindset then took over the ecosystem of academic economics, and those of us with the ecological mindset struggle against it.

Oops, Maybe You Should Not Annuitize

Felix Reichling and Kent Smetters write (gated–ungated version here),

But the presence of stochastic mortality probabilities also introduces a correlated risk. After a negative shock to health that reduces a household’s life expectancy, the present value of the annuity stream falls. At the same time, a negative health shock produces potential losses, including lost wage income not replaced by disability insurance, out-of-pocket medical costs, and uninsured nursing care expenses, that may increase a household’s marginal utility. Since the value of non-annuitized wealth is not affected by one’s health state, the optimal level of annuitization falls below 100 percent.
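The mechanism in the quoted passage, that a negative health shock lowers the expected present value of the annuity stream, can be illustrated with a toy calculation. The payment, survival probabilities, and discount rate here are all made-up numbers, not from the paper:

```python
def annuity_pv(payment, survival_probs, rate=0.03):
    """Expected present value of a life annuity paying `payment` per year,
    given the probability of surviving each successive year."""
    pv, alive = 0.0, 1.0
    for t, p in enumerate(survival_probs, start=1):
        alive *= p                              # probability of reaching year t
        pv += payment * alive / (1 + rate) ** t
    return pv

healthy = [0.99] * 20       # hypothetical annual survival probabilities
after_shock = [0.90] * 20   # a negative health shock lowers them

# annuity_pv(10000, after_shock) < annuity_pv(10000, healthy):
# the annuity loses value in exactly the state where medical and
# nursing-care expenses may spike, while non-annuitized wealth does not.
```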

I once wrote,

An annuity is risk-reducing if the only risk you face is additional longevity. In fact, other risks may be more serious. You could easily find yourself needing to take out a loan if your savings are tied up in an annuity and your spouse requires a home health aide.

Economists have been preaching for 50 years that the low usage of annuities illustrates a market failure. In fact, what it may illustrate is that economists who relied on a mathematical model left out some important considerations. We need a term for this. I propose model failure.

Proper Critiques of Economics

Noah Smith writes,

some econ literatures are still crammed with mutually contradictory models for which the scope conditions are neither known nor specified. And the stock of existing theories is still enormous. In some areas, especially in macro, economists really do have theories that make almost any prediction, with no real way to choose between them except priors and politics. And many economists still have very little problem using modeling assumptions that have already been taken to the data with discouraging results.

Pointer from Mark Thoma. In his post, Smith tries to “score” various criticisms of economists. His post made me want to recycle a quote from Herbert Stein:

1. Economists do not know very much.

2. Other people, including politicians who make economic policy, know even less about economics than economists do.

[typo corrected]
Non-economists are responsible for many of the critiques of economists to which Smith gives a low score.

I have come to believe that economics is epistemologically difficult. That is, it is difficult to answer the question, “How do you know that?” Non-economists do not have much insight into this issue. Unfortunately, many economists lack insight as well.

The appeal of the mathematical approach is that it provides rigorous connections between assumptions and conclusions. The weakness of the mathematical approach is that it places tremendous pressure on one’s choice of assumptions. And, as Smith has pointed out, these choices are more arbitrary than they are in the hard sciences.

Economists can almost never directly test their assumptions. Milton Friedman famously suggested not worrying about direct testing. Instead, he proposed the indirect approach of testing predictions. In practice, however, this does not work, or at least it does not work cleanly.

One problem is that you can have two interpretive frameworks that both “predict” one observed phenomenon yet have different predictions about other phenomena about which we do not have precise observations. Consider the vast array of candidate explanations for the financial crisis, with widely varying implications about how one might try to prevent a recurrence.

Another problem is that when an anomalous observation appears to confound an interpretive framework, this fails to result in a decisive rejection of that framework. Instead, the framework is tweaked in order to accommodate the observation. So, when the huge fiscal contraction in the United States at the end of World War II did not lead to another Great Depression, the explanation might be “pent-up consumer demand.” When the inflation rate failed to obey the Phillips Curve in the 1970s, the explanation might be “supply shocks” and/or “higher expectations of inflation.”

If assumptions cannot be tested directly, and Friedman’s proposal to test predictions does not work, how will assumptions be chosen? The answer, all too often, is a combination of mathematical tractability and faddism. Economists will jump all over a model because it is fun to play with, regardless of how silly or irrelevant the set of assumptions may be. The overlapping-generations model of money would be a prime example.

My main concerns with mainstream economics include:

1. A bias toward “engineers” rather than “ecologists.” That distinction comes from Greg Ip’s new book, Foolproof. The engineer is like Adam Smith’s man of system, who ignores evolution, both as a factor that may permit markets to overcome their own failures and as a factor that may cause government “solutions” to become obsolete.

2. A bias toward simplifying the phenomenon of specialization. Macroeconomists live in a world with one producer and one consumer (the “representative agent”). Microeconomists live in a 2x2x2 world, with two factors of production, two goods, and two countries. They miss important differences between those worlds and the real world of millions of tasks being performed to lead to a final product.

Angus Deaton vs. The Representative Agent

His Nobel citation says,

The insights provided by Deaton’s work on consumption and income have had a lasting influence on modern macroeconomic research. Previous researchers in macroeconomics, from Keynes onwards, had relied only on aggregate data. Even if their purpose is to understand relationships at a macro level, today’s researchers usually start at the individual level and then, with great caution, add together individual behaviors to compute numbers for the entire economy.

What Else Would be True?

Chris Dillow writes,

we should remember the Big Facts. For example, one of the Big Facts in finance is that active equity fund managers rarely beat the market for very long, at least after fees. This, as much as Campbell Harvey’s statistical work, reminds us to be wary of the hundreds of papers claiming to find factors that beat the market.

Pointer from Mark Thoma.

This is a good example of asking, “What else would be true?” When you are inclined to believe that a study shows X, consider all of the implications of X. In the example above, Dillow is asking: if some factor allows one to earn above-market returns, how do we reconcile that with the fact that we do not observe active fund managers earning above-market returns?

Recall that I raised a similar question about the purported finding that in the United States worker earnings have gone nowhere as productivity increased. If that were true, it should greatly increase the demand for labor. It should greatly increase international competitiveness, turning us into an export powerhouse. Since I do not see either of those taking place, and since many economists have pointed to flaws in the construction of the comparison of earnings and productivity, I think the purported finding is highly suspect.

In contrast, consider the view that assortative mating has increased and plays an important role in inequality. I have not seen anyone say, “IF that were true, then we would expect to observe Y, and Y has not happened.”

I think that this is the way to evaluate interpretive frameworks in economics. Consider many possible implications of an interpretive framework. Relative to those implications, do we observe anomalies? When you have several anomalies, you may choose to overlook them or to explain them away, but you should at least treat the anomalies as caution flags. If instead you keep finding other phenomena that are consistent with the interpretive framework, then that should make you more comfortable with using that framework.

Poor Replication in Economics

Andrew C. Chang and Phillip Li write,

we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable.

Pointer from Mark Thoma.

As an undergraduate at Swarthmore, I took Bernie Saffran’s econometrics course. The assignment was to find a paper, replicate the findings, and then try some alternative specifications. The paper I chose to replicate was a classic article by Marc Nerlove, using adaptive expectations. The data he used were from a Department of Agriculture publication. There was a copy of that publication at the University of Pennsylvania, so I went to their library and photocopied the relevant pages. I typed the data, put it into the computer at Swarthmore, and got results that were nowhere close to Nerlove’s.