My conversation with Eric Weinstein

I talked about one area where we disagree and one area where we agree.

Let’s start with where we disagree. I take the conventional economic view in favor of international trade, and you differ.

Let me see if I can steel-man your argument. You say that American workers, as citizens of this country, should have a right to access job opportunities that give them a decent way of life. If we are willing to have their family members go off to fight wars in the name of protecting the rest of us from terrorists, then we certainly owe them protection from having their jobs taken away by outsourcing to Chinese factories.

My counter will be that international trade is isomorphic with other economic actions that you are more likely to approve of. Outsourcing to a factory in China and taking away a factory worker’s job is not very different from developing Uber and taking away a taxi driver’s job or a rental-car agent’s job.

Our prosperity comes from breaking production down into steps. When you break the process down into steps, you get more efficiency. This goes back to Adam Smith’s pin factory. Breaking a process into steps can involve what most economists call capital, but the Austrian economists use the term “roundabout production,” which I like. When a farmer uses a tractor instead of a horse to pull a plow, this is roundabout production–manufacturing the tractor becomes a step in the farming process.

International trade is another form of roundabout production. As David Friedman put it, one way to manufacture an automobile is to grow wheat, put it on a ship to Japan, and have the ship turn around carrying an automobile.

The process of breaking production down into steps is mind-boggling in its complexity. There are so many conceivable ways to break down a production process into different steps. How are we to know which is best? The answer is that the price system co-ordinates the process. Prices inform entrepreneurs about the costs of alternative patterns of specialization.

The profit system directs the evolution of the process. As new ideas are tried, the most efficient ones prove sustainable, as indicated by profitability. Less efficient patterns of specialization and trade are weeded out by losses.

Thus, progress proceeds by creative destruction. Ways of life that are tied to a particular step in the production process are bound to be undermined if a new production process emerges that is more efficient. A society cannot enjoy the benefits of economic progress without incurring the cost of job destruction. The market treats work as a bug, not a feature, and it tries to get rid of it.

Back to the comparison of outsourcing to a factory in China or developing Uber. You might be tempted to say that when Uber changes the process of providing people with car rides, at least it doesn’t use Chinese labor in the process. But is that really the case? For Uber to work, somebody has to take the step of adding computer and communications capacity, and that probably uses components imported from China. Consumers need smart phones in order to hail rides, and those phones are partially manufactured in China. And even if there were no Chinese workers involved in the steps to create Uber rides, would that be any consolation to the taxi drivers and rental-car agents who lose their jobs?

If you want to suggest policies for making economic progress less painful for people whose jobs are displaced, that would be very constructive. But insinuating that economists are engaged in a conspiracy to hide the truth about international trade isn’t constructive–it’s just scapegoating.

On the area where we agree, I said,

I’m more in agreement with you on what you call the DISC, which I believe stands for Distributed Information Suppression Complex. Although once again, it sounds a bit too conspiratorial for my taste, and I prefer to think of it in terms of an emergent phenomenon.

Think of life in academic research as consisting of two games. If you play Game One, you pose important questions within your field and try to answer them. If you play Game Two, you try to climb the ladder of prestige by participating in the latest fads and fashions and by ingratiating yourself with people who are in a position to help you get jobs and publication acceptances. Let me use the Game One, Game Two model to offer my take on the DISC.

1. I can imagine a world in which the strategies for playing Game One and Game Two are basically the same. When that sort of Divine Coincidence exists, you will see a very vibrant academic discipline.

2. I don’t think that anyone ever consciously chooses between playing Game One and Game Two. We just go with our instincts. When I was in grad school in the late 1970s, my instinct just happened to be to play Game One. But by that time, the economics profession was selecting away from Game One types and in favor of particularly ruthless Game Two types.

[Note: As John Cochrane wrote recently,

Self-interest, for people to preserve hard-won human capital, and for institutions to support research that keeps them going, is a powerful explanatory force. Even if individuals do not respond to this incentive, and are all pure in their pursuit of ideas, selection is a powerful explanatory force. Economics is a good way to explain economics!

]

The Game Twoers of my era wrote dissertations on Rational Expectations Macroeconomics, which I thought was a dead end. Nothing that has happened since has changed my mind about that.

When I was on the job market, an assistant professor from Amherst came to MIT to interview all of us on the market that year. I gave him a copy of my job market paper, and I talked about it with him. He never offered me an opportunity to audition for a job at Amherst. But he did subsequently publish my exact idea, including a new term that I introduced, called “reputation price,” meaning the price that consumers would expect to see at a store based on their last purchase there. He published it in the Quarterly Journal of Economics, which has typically been a top-five journal, although at that time it was more in the 6-10 tier. No attribution to me of course. I was lucky just to get a version of my dissertation published in Economic Inquiry, a much lower-tier journal.

Why didn’t I go after the guy? My dissertation supervisor, Robert Solow, advised me not to. Even though I am still bitter about the Amherst guy (who got tenure), and bitter about Solow’s nonchalance about it, I have to admit that going after the guy would have done nothing to improve my life, which has turned out pretty well, if I may say so.

Anyway, such was my introduction to Game Two.

3. I think that in the last half of the twentieth century, Game Two economics produced little gain from a Game One perspective, and arguably a net loss.

4. I agree very much with your view that academic economists have been slow to come to terms with the fact that the Internet enables businesses to deliver content to consumers at essentially zero marginal cost, but with some fixed costs. One of my lines is that “Information wants to be free, but people need to get paid.” If you want to say that this implies widespread market failure in a textbook sense, I could agree to that. But widespread market failure in no way ensures widespread government success.
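
To illustrate the point in item 4 with some made-up numbers (the fixed cost and user counts below are purely hypothetical), here is a minimal sketch of the textbook problem: with a fixed cost and essentially zero marginal cost, average cost falls toward zero, but marginal-cost pricing can never recover the fixed cost.

```python
# Minimal sketch with hypothetical numbers: a content business with a fixed
# cost of production and essentially zero marginal cost of serving one more
# user over the Internet.
FIXED_COST = 1_000_000    # hypothetical cost of creating the content
MARGINAL_COST = 0.0       # delivering one more copy costs roughly nothing

for users in (10_000, 100_000, 1_000_000, 10_000_000):
    average_cost = (FIXED_COST + MARGINAL_COST * users) / users
    print(f"{users:>10,} users: average cost per user = ${average_cost:,.2f}")

# Textbook efficiency calls for price equal to marginal cost, which is zero.
# But at a price of zero, revenue never covers the fixed cost; that is the
# textbook sense in which "information wants to be free, but people need to
# get paid" implies market failure.
```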

Note that there is no link, because this conversation only took place in my imagination.

Differences in suicide concentration

Scott Alexander writes,

While genetics or culture may matter a little, overall I am just going to end with a blanket recommendation to avoid being part of any small circumpolar ethnic group that has just discovered alcohol.

That is at the end of a long and typically careful analysis of parts of the world that have high suicide rates.

Because suicide is a rare event, it is very difficult to make inferences from data. Scott, as usual, does a good job of being careful. One note that I would add is that Case and Deaton observed that in the U.S., suicide rates are higher in states with low population density. I don’t know whether this is just coincidence or whether there is in fact something about high population density that protects against suicide.

Me vs. the DISC

1. One of Eric Weinstein’s catch-phrases is the DISC, which I think stands for the Distributed Information Suppression Complex.

2. Recently, I was asked if I wanted to contribute some sections to a guide to first-year economics for college students. In looking at the guide, I was reminded of my frustrations with mainstream economics. The GDP factory. The failure to appreciate intangible factors. The failure to incorporate the business problems posed by the Internet into mainstream courses. My seemingly hopeless moonshot to overthrow neoclassical economics. My attempt with Specialization and Trade, which fell with a thud. Etc.

3. One idea that I extracted from Jeffrey Friedman’s turgid prose is that the economics profession probably selects for those who believe in and desire technocratic power. That seems to me what drives the DISC in economics. It leads to things like Raj Chetty’s project.

A central part of Opportunity Insights’ mission is to train the next generation of researchers and policy leaders on methods to study and improve economic opportunity and related social problems. This page provides lecture materials and videos for a course entitled “Using Big Data to Solve Economic and Social Problems,” taught by Raj Chetty and Greg Bruich at Harvard University.

Gosh, if you were to just link data from tax returns, credit bureaus, and Google searches, imagine how well “seeing like a state” could work. Ugh.

4. Unfortunately, I am Bill. Let me tell you the story of Bill. In 1990, I was promoted to a low-level management position in charge of five people inside Financial Research at Freddie Mac. One of the staff I inherited was Bill. Bill was a very bright guy, the sort who is called a “computer genius” by people who are intimidated by computers, and even by some who are not intimidated. He was older, in his fifties, with the title of “economist” but doing the work of a glorified research assistant. Bill had bounced around different departments at Freddie Mac, as one supervisor would unload him for his performance issues and another would pick him up for his potential and background.

Bill was very popular with the other staff. When they had a gnarly problem in SAS or with installing new software on a PC (this was a challenge in those days), he would help. Unfortunately, he found these problems so interesting that he would gladly drop whatever assignment you gave him in order to work on the tech issues. So if he was supposed to run a report that I needed for a meeting with top management the next day, I could not count on him to do it. He was very distractable.

One day, he distractedly wandered through the tape library for Freddie Mac’s mainframe computers. I have no idea why. He pulled down a tape and, lo and behold, he found data that had been missing for years. It was data from loans that were originated in the late 1970s and early 1980s. The data was no longer needed for processing the loans, but it was priceless for research purposes. We could now correlate default rates to data from loan applications, such as the original loan-to-value ratio.

I soon hired another research assistant, Sudha. She was far from brilliant, and her computer skills were weak, but she was meticulous and organized. The other staff, who loved Bill, resented Sudha, especially because Bill always ended up doing the work for Sudha’s memos. But when I left my position, my replacement soon said to me, “Now I understand what you were doing. You needed Sudha in order to get Bill’s projects done.”

So I am Bill. I am distractable. That is who I am. That is where I live. Being distractable perhaps enables me to discover insights. But it also is a weakness. If I were like Bryan Caplan, I would spend several years delving deeply into a topic and come out with a compelling book. Maybe somebody needs to find a Sudha to pair with me.

How to reduce the racial gap in reading scores

According to this study, the problem is worse in progressive cities.

Progressive cities, on average, have achievement gaps in math and reading that are 15 and 13 percentage points higher than in conservative cities, respectively

Pointer from Stephen Green, who sees it as an argument for cities to start to vote Republican.

The study compared test scores in the 12 most progressive cities (according to an independent measure) and the 12 most conservative cities. They report the results in tables. I saw a red flag in that they focused on the achievement gap, rather than black achievement scores per se.

From a Null Hypothesis perspective, one way to reduce the racial gap is to start with dumber white students. Then when differences in schooling have no effect, you wind up with a smaller racial gap.

Using their tables, I got that for reading, the median score in the conservative cities for blacks was 24.5, and in the progressive cities it was 20.5. The median score in the conservative cities for whites was 61.5 and in progressive cities it was 69. Since much of the difference in the gap seems to come from lower test scores for whites, I am inclined to go with the Null Hypothesis interpretation.
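
To make the arithmetic explicit, here is a small sketch using only the median scores quoted above (the decomposition is mine, not the study’s):

```python
# Median reading scores as quoted above, taken from the study's tables.
conservative = {"black": 24.5, "white": 61.5}
progressive = {"black": 20.5, "white": 69.0}

gap_conservative = conservative["white"] - conservative["black"]  # 37.0
gap_progressive = progressive["white"] - progressive["black"]     # 48.5
print(f"Gap in conservative cities: {gap_conservative}")
print(f"Gap in progressive cities:  {gap_progressive}")
print(f"Difference in gaps:         {gap_progressive - gap_conservative}")  # 11.5

# Decompose the 11.5-point difference in gaps: most of it comes from white
# scores being lower in conservative cities (7.5 points) rather than from
# black scores being lower in progressive cities (4.0 points).
from_white_scores = progressive["white"] - conservative["white"]   # 7.5
from_black_scores = conservative["black"] - progressive["black"]   # 4.0
print(f"From white scores: {from_white_scores}")
print(f"From black scores: {from_black_scores}")
```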

Dalton Conley on polygenic scores

At the AEI, Dalton Conley commented on Charles Murray’s new book. At minute 30, Conley starts to discuss polygenic scores. At around minute 35, he points out that the polygenic score for height, which seems to do much better than polygenic scores for other traits, still does a terrible job. The score, which has been based primarily on data from Europeans, under-predicts heights of Africans by 6 inches.

As you know, I am a skeptic on polygenic scores. The exercise reminds me too much of macroeconomic modeling. Economic history did not design the types of experiments that we need in order to gauge the effect of fiscal and monetary policy. What we want are lots of time periods in which very little changed other than fiscal and monetary policy. But we don’t have that. And as you increase the sample size by, say, going back in time and adding older decades to your data set, you add all sorts of new potential causal variables. Go back 70 years and fluctuations are centered in steel and automobiles. Go back 150 years and they are centered in the farm sector.

Similarly, evolution did not design the types of experiments that we need in order to gauge the effect of genes on traits. That is, it didn’t take random samples of people from different geographic locations and different cultures and assign them the same genetic variation, so that a statistician could neatly separate the effect of genes from that of location or culture.

If I understand Conley correctly, he suggests looking at genetic variation within families. I am not sure that whatever advantage this has is not outweighed by the disadvantage of reducing the range of genetic combinations that you can observe.

Road to sociology watch

Dani Rodrik writes,

The new face of the discipline was on display when the AEA convened for its annual meetings in San Diego in early January. There were plenty of panels of the usual type on topics such as monetary policy, regulation, and economic growth. But there was an unmistakably different flavor to the proceedings this year. The sessions that put their mark on the proceedings and attracted the greatest attention were those that pushed the profession in new directions. There were more than a dozen sessions focusing on gender and diversity, including the headline Richard T. Ely lecture delivered by the University of Chicago’s Marianne Bertrand.

Woody Allen once worried about what you would get if you combine the head of a crab with the body of a social worker. I worry about what you get if you combine the scientific hubris of an economist with the ideology of a sociologist. Maybe this:

The AEA meetings took place against the backdrop of the publication of Anne Case and Angus Deaton’s remarkable and poignant book Deaths of Despair, which was presented during a special panel. Case and Deaton’s research shows how a particular set of economic ideas privileging the “free market,” along with an obsession with material indicators such as aggregate productivity and GDP, have fueled an epidemic of suicide, drug overdose, and alcoholism among America’s working class. Capitalism is no longer delivering, and economics is, at the very least, complicit.

Actually, the book has a publication date of March 17, but I guess it is now fair game to discuss the review copy I received. I think that their analysis is flawed in important respects. I’ll link to my review when it appears.

What is the true margin of error?

Alex Tabarrok writes,

The logic of random sampling implies that you only need a small sample to learn a lot about a big population and if the population is much bigger you only need a slightly larger sample. For example, you only need a slightly larger random sample to learn about the Chinese population than about the US population. When the sample is biased, however, then not only do you need a much larger sample you need it to be large relative to the total population.

I am curious what Tabarrok means in the first sentence by “need a slightly larger sample.” I thought that with random sampling, the margin of error for a sample of 1,000 is the same whether you are sampling from a population of 10 million or 50 million.

But the issue at hand is how a small bias in a sample can affect the margin of error. We frequently see election results that are outside the stated margin of error of exit polls. As I recall, in 2004 conspiracy theorists who believed the polls claimed that there was cheating in the counting of actual votes. But what is more likely is that polling fails to obtain a true random sample. This greatly magnifies the margin of error.

In real-world statistical work, obtaining unbiased samples is very difficult. That means that the true margin of error is often much higher than what gets reported.
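
As a rough sketch of these two points (the bias figure is illustrative, not from any actual poll), the textbook margin of error for a simple random sample depends on the sample size, not the population size, and even a modest systematic bias can swamp it:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# For a candidate polling near 50%, a random sample of 1,000 gives roughly
# +/- 3.1 percentage points, whether the population is 10 million or
# 50 million; population size does not appear in the formula.
print(f"n = 1,000: +/- {100 * margin_of_error(0.5, 1_000):.1f} points")

# Now suppose the sampling procedure is slightly biased, e.g. one candidate's
# supporters are a bit less likely to answer an exit poll, shifting the sample
# by 3 points (an illustrative number). That systematic error does not shrink
# as n grows, so the true error is the bias plus the sampling noise.
BIAS = 0.03
for n in (1_000, 10_000, 100_000):
    reported = 100 * margin_of_error(0.5, n)
    plausible = 100 * (BIAS + margin_of_error(0.5, n))
    print(f"n = {n:>7,}: reported MOE +/- {reported:.1f}, "
          f"plausible total error ~ {plausible:.1f} points")
```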

Null Hypothesis watch

In 1987, Peter Rossi wrote,

The Iron Law of Evaluation: “The expected value of any net impact assessment of any large scale social program is zero.”

The Iron Law arises from the experience that few impact assessments of large scale social programs have found that the programs in question had any net impact. The law also means that, based on the evaluation efforts of the last twenty years, the best a priori estimate of the net impact assessment of any program is zero, i.e., that the program will have no effect.

The Stainless Steel Law of Evaluation: “The better designed the impact assessment of a social program, the more likely is the resulting estimate of net impact to be zero.”

This law means that the more technically rigorous the net impact assessment, the more likely are its results to be zero—or no effect. Specifically, this law implies that estimating net impacts through randomized controlled experiments, the avowedly best approach to estimating net impacts, is more likely to show zero effects than other less rigorous approaches.

The Brass Law of Evaluation: “The more social programs are designed to change individuals, the more likely the net impact of the program will be zero.”

This law means that social programs designed to rehabilitate individuals by changing them in some way or another are more likely to fail. The Brass Law may appear to be redundant since all programs, including those designed to deal with individuals, are covered by the Iron Law. This redundancy is intended to emphasize the especially difficult task in designing and implementing effective programs that are designed to rehabilitate individuals.

I arrived at this by following Tyler Cowen’s recommendation to check out Gwern and starting to read the latter’s essay on why correlation is so frequent and causation is so rare.

My comments on the Rossi article.

1. James Manzi had very similar thoughts in Uncontrolled. Is that correlation or causation? Concerning the “brass law,” Manzi said that you are more likely to effect change by taking people’s nature as given and changing their incentives.

2. Imagine how much more often we would see these sorts of results if it were not for social desirability bias in reporting on interventions.

Stagnation in research

Les Coleman offers evidence and proposes solutions. Note this:

An excellent example is provided by American Economic Review, which is its discipline’s premier journal. AER celebrated its centenary in 2011 by commissioning a distinguished panel of researchers to choose the top 20 most “admirable and important articles” that the journal had published. This presumably would list economics’ most innovative and influential thinking. So it is staggering that the most recent of these articles dates to 1981: the leading economists of our day think it is almost 30 years since the pre-eminent AER published an important idea!

For solutions, Coleman suggests that

Funders should develop a detailed code of research practice; and—without affecting researchers’ independence—prioritise research that solves puzzles in paradigms and enables forecasts of important phenomena.

. . .universities should restrict funding to publishers with robust integrity programs, and preference journals which promote research quality. Publishers should centralise integrity checks of all submissions and send quality papers for review by qualified peers chosen at random. Journals should require self-replication of research so that innovative findings are confirmed in a totally independent setting. They should also adopt a multidisciplinary perspective, and encourage review articles which critically evaluate the prevailing paradigm with summaries including strengths and weaknesses and provide informed commentary on developments in research and its strategy.

I think that we should instead regard the peer-reviewed journal as an institution that is beyond reform. A whole new institution needs to be developed in the Internet age.