Category Archives: Economic education and methods

A couple of people were interested in having me try this experiment. So I did it. It took more effort than I expected. The intended audience would be young people interested in teaching themselves economics.
Tyler Cowen and Paul Krugman
Forget your priors and lower your guard. This is about as good a conversation as you will get among economists. Early in the discussion Tyler suggests that this sort of conversation could be more educational than a conventional economic lecture, and I think that is true. I think if I were teaching from it, I would pause every few minutes to explain to students what is going on, and also perhaps to explain my own point of view where it is not expressed by either of the speakers.
About 28 or 29 minutes in, Krugman makes the point that some really major industries do not conform to textbook economic models, and he raises the question of why we then rely so much on the textbook model. I strongly agree, and in fact I have drafted a long essay about the implications. I wish they had spent more time on this topic, but perhaps they exhausted everything they could say.
Scott Alexander on the Representative Agent model
Suppose that one-third of patients have some gene that makes them respond to Prozac with an effect size of 1.0 (very large and impressive), and nobody else responds. In a randomized controlled trial of Prozac, the average effect size will show up as 0.33 (one-third of patients get effect size of 1, two-thirds get effect size of 0).
Economists instinctively fall back on the “representative agent” model, in which you average the population results of whatever study you do. So an economist would say that the effect size is 0.33. But the point is that there is not one parameter that represents the whole population. One needs to take into account differences.
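As a concrete check on the arithmetic, here is a minimal simulation of Alexander's hypothetical (the gene, the one-third share, and the effect sizes are all his assumed numbers, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-third of patients carry the (hypothetical) responder gene and get an
# individual effect of 1.0; everyone else gets 0.0.
n = 30_000
responder = rng.random(n) < 1 / 3
effect = np.where(responder, 1.0, 0.0)

# The representative-agent summary: one parameter for the whole population.
print(f"average effect size: {effect.mean():.2f}")         # ~0.33

# The heterogeneity that the single number hides.
print(f"responders:     {effect[responder].mean():.2f}")   # 1.00
print(f"non-responders: {effect[~responder].mean():.2f}")  # 0.00
```

The 0.33 is a perfectly accurate average and a perfectly misleading description of any individual patient.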
Where this bothers me the most is in the realm of expectations. Someone will take a survey of, say, consumer expectations for home price increases. The results will diverge across consumers. But the economist will report a single number for consumer expectations.
Robert Plomin talks his book
In the WSJ, Robert Plomin writes,
DNA is the major systematic influence making us who we are as individuals. Environmental influences are important too, but what look like systematic effects of the environment are often genetic effects in disguise: Parents respond to their children’s genetically driven traits, and children seek, modify and even create experiences correlated with their genetic propensities.
His book is Blueprint, which I just finished. His thesis:
DNA is the only thing that makes a substantial systematic difference, accounting for 50 percent of the variance in psychological traits. The rest comes down to chance environmental experiences that do not have long-term effects.
What he calls “chance environmental experiences” could be measurement error. Measurement error always attenuates correlation, which raises the possibility that traits measured with error are more heritable than they appear. For example, Gregory Clark found that social status is much more heritable across many generations than parent-child heritability estimates would predict. I explained that this is likely due to error in measuring social status: noise attenuates the observed parent-child correlation, but it does not affect the rate at which correlations decay across generations, so long-run persistence ends up higher than the parent-child estimate implies.
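A small simulation makes that mechanism concrete. This is a minimal sketch under assumed numbers (true persistence of 0.75, a noisy proxy for status), not Clark's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent status follows an AR(1) across generations with persistence b;
# we only observe a noisy proxy of it. All numbers are assumptions.
b, noise_sd, n = 0.75, 0.6, 200_000

latent = rng.standard_normal(n)
observed = [latent + noise_sd * rng.standard_normal(n)]
for _ in range(3):  # three more generations
    latent = b * latent + np.sqrt(1 - b**2) * rng.standard_normal(n)
    observed.append(latent + noise_sd * rng.standard_normal(n))

corrs = [np.corrcoef(observed[0], gen)[0, 1] for gen in observed[1:]]
print("observed correlations at gaps 1-3:", [f"{c:.2f}" for c in corrs])

# Measurement error shrinks every observed correlation by the same factor,
# so the parent-child correlation understates b, while the ratio of
# successive correlations still recovers the true persistence.
print(f"decay ratio (gap 2 / gap 1): {corrs[1] / corrs[0]:.2f} vs. b = {b}")
```

The parent-child correlation comes out well below 0.75, but the generation-to-generation decay still reveals it.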
Educational interventions are apparent environmental influences that wear off over time. You raise a test score but do not fundamentally alter ability. That is an element of what I call the Null Hypothesis, which Plomin strongly endorses, although of course he does not use that term. Related: Scott Alexander on pre-school.
This is one of the most important books of the year. Coincidentally, the NYT has an article on economists’ use of polygenic scores. Tyler and Alex both linked to it.
But you should know that I came away from Plomin’s book less than impressed with polygenic scoring. So much data mining. So little predictive value. Also, there is serious criticism of his view that environmental factors exhibit no systematic influence, but he does not confront it. I did a search inside the Kindle edition for “Flynn” and found no results.
Tyler Cowen, NSF, and DARPA
Let’s start with some possible institutional failures in mainstream philanthropy. Many foundations have large staffs, and so a proposal must go through several layers of approval before it can receive support or even reach the desk of the final decision-maker. Too many vetoes are possible, which means relatively conservative, consensus-oriented proposals emerge at the end of the process. Furthermore, each layer of approval is enmeshed in an agency game, further cementing the conservatism. It is not usually career-enhancing to advance a risky or controversial proposal to one’s superiors.
This also describes the National Science Foundation. You can see how an institution like this would be biased toward funding mainstream incumbents rather than innovative, heterodox projects. It’s fine to have a lot of research money go through this model, but you also want some alternative funding mechanisms in order to have a healthy ecosystem.
Think of DARPA in its heyday. The approval process had fewer layers. Choices were more idiosyncratic.
I think where DARPA succeeded was when it had two other elements. One was a vision, in particular Licklider’s vision for computing. The other was a network of creative people. Licklider knew where to find the groups that could move his vision forward.
In his Emergent Ventures initiative, Cowen does not seem to be relying on his network. And I don’t see a guiding vision. It is more scattershot. That may be a valid model. But I prefer the DARPA model.
If I had the money to dole out, I would do so based on overall vision. One vision is for “rules and norms for competitive governance.” The idea would be to develop the legal framework that would allow people living side by side, in existing locations (not seasteads or charter cities), to have more choice in government services and policies. The widely-unread Unchecked and Unbalanced includes more of my thoughts about that. Of course, some of you are thinking, “Go back to the founding fathers,” but it’s not as simple as that. The founding fathers did not provide for a society in which the preponderance of people, and an even bigger preponderance of economic activity, could be found in large cities.
The other vision I have concerns economic research. I would promote an agenda that I call disaggregating the economy.
But for neither of these visions do I have anything resembling a network.
Selection effects
A few weeks ago, Handle wrote,
The major problem with any mechanism that lets good people evade government control for good reasons, is that it lets bad people evade government control for bad reasons.
I have been thinking recently about which economic concepts are over-rated and which are under-rated. In general, I think that over-rated concepts fit with our intuition of small-scale society, and under-rated concepts deal with large-scale society. Thus, one under-rated concept is “selection effect.”
In a small scale society, you don’t have to worry about selection effects. You know everybody and you have repeated interactions with everybody.
In a large scale society, you don’t know the people with whom you transact. You apply rules, and those rules will, for better or worse, be attractive to people who like those rules when compared with other rules.
So, going back to the context of the comment, if you offer people the ability to make large financial transfers without being monitored by any government agency, you will attract people who don’t want that monitoring. Some of those people will be good people who are just annoyed by monitoring, but you are going to draw all the people whose motives for avoiding monitoring are not so good. You are going to select for criminals.
As another example, take mortgage origination rules that require the applicant to document income, employment, and assets. The rules are in some respects pretty inefficient. For any given set of mortgage applicants, the documents themselves add essentially no information for predicting default risk.
But when you change the rules to allow “no-doc” loans, you draw in a different pool of potential borrowers. You get the applicants who are not so conscientious and reliable. You get loans from mortgage brokers who do not have a problem coaching applicants to over-state what they earn or what they have in the bank. So even though their credit scores look ok, you are going to select for borrowers who are less conscientious from a pipeline of mortgage brokers who are less honest.
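A toy simulation makes the point explicit. All of the numbers here are invented; the only thing doing the work is that the rule changes who applies:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented default rates: conscientious borrowers default at 2%,
# less conscientious borrowers at 10%.
def applicant_pool(no_doc: bool, n: int = 100_000) -> np.ndarray:
    # The documentation requirement deters the less conscientious; dropping
    # it changes who shows up, not how any given applicant behaves.
    p_less = 0.40 if no_doc else 0.10
    less = rng.random(n) < p_less
    return np.where(less, 0.10, 0.02)

for rule in (False, True):
    pool = applicant_pool(no_doc=rule)
    print(f"no-doc={rule}: expected default rate {pool.mean():.1%}")
# Within either pool, the documents add no predictive information;
# the rule matters because it selects the pool.
```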
To take a more provocative example, consider the “____ studies” fields in academia. Even if they don’t explicitly require professors to have left-wing ideas, they select for such professors by making uncomfortable anyone with a different point of view. In other fields, this is less the case. But I fear that in those other fields, any lack of diversity along gender or racial lines will be used as a wedge to make them come up with selection criteria that have the effect of pulling in people with a left-wing viewpoint. In economics, I call this the “road to sociology watch.”
Revisiting the Hidden Tribes poll
Several commenters did not like the poll, and a reader suggested that I try the Hidden Tribes quiz. Ugh! What a terrible survey instrument.
I would like to believe that there is a large portion of the population that is tired of hyper-partisanship. But if there is such a majority out there, this poll is not a credible way to find it.
I would trust a survey based on my three-axes model more than I would trust the Hidden Tribes report. If the general public is more centrist or nuanced, that would show up as a lot of people not consistently aligning with any one axis.
The Diss Card Pile
The Economist (warning: their site has lots of scripts* and is likely to crash your browser) writes,
Harvard’s lawyers hired David Card, a prominent labour economist at the University of California, Berkeley. His model includes factors like the quality of a candidate’s high school, parents’ occupations and the disputed personal rating. Under these controls, Mr Card claims that Asian-American applicants are not disadvantaged compared with whites. But given that these factors are themselves correlated with race, Mr Card’s argument is statistically rather like saying that once you correct for racial bias, Harvard is not racially biased.
Pointer from Tyler Cowen. The previous day, Tyler simply said Card is wrong.
I know of three works by Card. One is his paper, with Krueger, claiming that a higher minimum wage raised employment in an area. The criticisms of that paper are persuasive. The second is a paper claiming that college attendance helps people from poor families, controlling for ability. As I wrote in this paper (see the appendix), what he claimed was an instrumental variable (meaning it should have no correlation with the dependent variable except through the variable it instruments for) was anything but. The third is this latest piece of arrogantly-expressed unpersuasive analysis.
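For readers who have not seen why a bad instrument matters, here is a minimal sketch with invented coefficients, in the spirit of the schooling example but not Card's actual data or specification. The would-be instrument is contaminated by unobserved ability, and the IV estimate overshoots the true effect:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Invented coefficients. Unobserved ability drives both schooling and wages,
# and it also contaminates the would-be instrument z.
ability = rng.standard_normal(n)
z = ability + rng.standard_normal(n)                     # invalid instrument
school = 0.5 * z + 0.5 * ability + rng.standard_normal(n)
wage = 1.0 * school + 1.0 * ability + rng.standard_normal(n)  # true effect: 1.0

# Wald/IV estimator: cov(wage, z) / cov(school, z)
iv = np.cov(wage, z)[0, 1] / np.cov(school, z)[0, 1]
print(f"IV estimate with an invalid instrument: {iv:.2f} (true effect 1.00)")
# Comes out around 1.67: the exclusion restriction fails, so the
# "instrumented" estimate is no cleaner than the naive one.
```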
Card was awarded the Clark Medal, which is on par with a Nobel Prize. His body of work is enormous, and perhaps I have encountered the only three times he has been wrong. Perhaps he is only untrustworthy when he wades into a politically sensitive topic. But if you are looking for an economist’s work to examine to see how well it replicates, I have a name for you.
*All media sites do this, but The Economist really goes over the top. Just once I would like to see a major media site that does not invite you to “get notifications” and such. They are all apparently listening to the same Internet consultant, who is an idiot. If they want to listen to someone, they should listen to me. I proposed a better model almost twenty years ago. I knew they would resist it for a while, but I never thought it would be for this long.
Will population growth rebound?
Jason Collins and Lionel Page write,
The United Nations produces forecasts of fertility and world population every two years. As part of these forecasts, they model fertility levels in post-demographic transition countries as tending toward a long-term mean, leading to forecasts of flat or declining population in these countries. We substitute this assumption of constant long-term fertility with a dynamic model, theoretically founded in evolutionary biology, with heritable fertility. Rather than stabilizing around a long-term level for post-demographic transition countries, fertility tends to increase as children from larger families represent a larger share of the population and partly share their parents’ trait of having more offspring. Our results suggest that world population will grow larger in the future than currently anticipated.
Collins is humble about the ability of any model to project fertility, given the importance of cultural evolution. I have not seen the paper, but I would like to know whether they tested their model against actual data in any way. For example, you could “backcast” the model and see how well it “predicts” population in, say, 1980 or 1950.
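The mechanism described in the abstract is easy to see in a toy model. Here is a minimal sketch with invented numbers: two fertility types, with children inheriting the parent's type imperfectly:

```python
# Two fertility types; children inherit the parent's type with probability h.
# All numbers are invented, purely to show the direction of the effect.
fert_low, fert_high = 1.5, 2.5   # children per person, per generation
share_high = 0.20                # initial share of the high-fertility type
h = 0.90                         # heritability of the fertility trait

for gen in range(6):
    mean_fert = share_high * fert_high + (1 - share_high) * fert_low
    print(f"generation {gen}: high-fertility share {share_high:.2f}, "
          f"mean fertility {mean_fert:.2f}")
    births_high = share_high * fert_high
    births_low = (1 - share_high) * fert_low
    # Children keep the parent's type with probability h, switch otherwise.
    share_high = (h * births_high + (1 - h) * births_low) / (births_high + births_low)
```

Because the high-fertility type contributes a growing share of each new generation, mean fertility drifts upward rather than settling at a constant long-run level.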
P(Bayesian) = ?
I asked readers to estimate their probability that Judge Kavanaugh was guilty of sexually assaulting Dr. Ford. I got 2,350 responses (thank you, you are great). Here was the overall distribution of probabilities.
1. A classical statistician would have refused to answer this question. In classical statistics, he is either guilty or he is not. A probability statement is nonsense. For a Bayesian, it represents a “degree of belief” or something like that (a minimal numerical sketch of such an update appears after this list). Everyone who answered the poll (I did not even see it, so I did not answer) either is a Bayesian or consented to act like one.
2. A classical statistician could say something like, “If he is innocent, then the probability that all of the data would have come in as we observed it is low, therefore I believe he is guilty.”
3. For me, the most telling data is that he came out early and emphatically with his denial. This risked having someone corroborate the accusation, which would have irreparably ruined his career. If he did it, it was much safer to own it than to attempt to get away with lying about it. If he lied, chances are he would be caught–at some point, someone would corroborate her story. The fact that he took that risk, along with the fact that there was no corroboration, even from her friend, suggests to me that he is innocent.
4. But that could very well be motivated reasoning on my part, because I was in favor of his confirmation in the first place. By far, the biggest determinant of whether you believe he is guilty is whether you wanted to see him confirmed before the accusation became public. See Alexander’s third chart, which shows that Republicans overwhelmingly place a high probability on his innocence and Democrats overwhelmingly place a high probability on his guilt. That is consistent with other polls, and we should find it quite significant, and also depressing.
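As promised above, here is the textbook Bayesian update behind point 1, with purely hypothetical numbers for a generic hypothesis H:

```python
# Purely hypothetical numbers for a generic hypothesis H.
prior = 0.50                # degree of belief in H before the evidence
p_data_if_true = 0.10       # P(evidence | H true)
p_data_if_false = 0.30      # P(evidence | H false)

# Bayes' rule: P(H | evidence) = P(evidence | H) P(H) / P(evidence)
posterior = (p_data_if_true * prior) / (
    p_data_if_true * prior + p_data_if_false * (1 - prior)
)
print(f"posterior degree of belief: {posterior:.2f}")  # 0.25
```

The “degree of belief” is just the prior reweighted by how much more likely the observed evidence is under one hypothesis than the other.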