I must address myself to the underlying fundamental developmental deficits that impede the ability of African Americans to compete. If, instead of doing so, I use preferential selection criteria to cover for the consequences of the historical failure to develop African American performance fully, then I will have fake equality. I will have headcount equality. I will have my-ass-is-covered-if-I’m-the-institution equality. But I won’t have real equality.
I recommend the entire interview.
Meanwhile, Lilah Burke reports,
In 2013, the University of Texas at Austin’s computer science department began using a machine-learning system called GRADE to help make decisions about who gets into its Ph.D. program — and who doesn’t. This year, the department abandoned it.
Before the announcement, which the department released in the form of a tweet reply, few had even heard of the program. Now, its critics — concerned about diversity, equity and fairness in admissions — say it should never have been used in the first place.
The article does not describe GRADE in enough detail for me to say whether it was a good system. For me, the key question is how well it predicts student performance in computer science.
I draw an analogy with credit scoring. If a credit scoring system correctly separates borrowers who are likely to repay loans from borrowers who are likely to default, and its predictions for black applicants are accurate, then it is not racially discriminatory, regardless of whether the proportion of good scores among blacks matches that among whites.
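To make that criterion concrete, here is a minimal sketch of what "accurate for each group" means. The data, group labels, and column names are all invented for illustration; nothing here comes from the article.

```python
import pandas as pd

# Hypothetical loan outcomes: a credit score, a group label,
# and whether the borrower repaid. Invented numbers, purely
# to illustrate the calibration criterion.
df = pd.DataFrame({
    "score":  [85, 90, 40, 45, 88, 42, 91, 38],
    "group":  ["A", "B", "A", "B", "B", "A", "A", "B"],
    "repaid": [1, 1, 0, 0, 1, 0, 1, 0],
})

# Bucket the scores, then compare observed repayment rates by
# group within each bucket. By this criterion the system is
# non-discriminatory when the within-bucket rates match, even
# if the two groups land in the buckets at different frequencies.
df["bucket"] = pd.cut(df["score"], bins=[0, 50, 100], labels=["low", "high"])
print(df.groupby(["bucket", "group"], observed=True)["repaid"].mean())
```

The point of the sketch is that the test conditions on the score: equal repayment rates within each score bucket are what matter, not equal shares of good scores across groups.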
David Arnold and co-authors find that
Estimates from New York City show that a sophisticated machine learning algorithm discriminates against Black defendants, even though defendant race and ethnicity are not included in the training data. The algorithm recommends releasing white defendants before trial at an 8 percentage point (11 percent) higher rate than Black defendants with identical potential for pretrial misconduct, with this unwarranted disparity explaining 77 percent of the observed racial disparity in algorithmic recommendations. We find a similar level of algorithmic discrimination with regression-based recommendations, using a model inspired by a widely used pretrial risk assessment tool.
That does seem like a bad algorithm. On the face of it, the authors believe that they have a better model for predicting pretrial misconduct than the one underlying the city's algorithm. If they are right, the city should be using the authors' model rather than the algorithm it actually chose.
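The comparison the authors describe can be sketched simply: hold estimated misconduct risk fixed, then compare release rates by race within each risk level. The records and numbers below are invented; this is only an illustration of the shape of the test, not the paper's actual method or data.

```python
import pandas as pd

# Hypothetical pretrial records: an estimated risk of misconduct,
# defendant race, and whether release was recommended.
df = pd.DataFrame({
    "risk":     [0.1, 0.1, 0.1, 0.1, 0.5, 0.5, 0.5, 0.5],
    "race":     ["white", "Black"] * 4,
    "released": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Within each risk level, compare release rates by race. Because
# risk is held fixed, any remaining gap is the kind of unwarranted
# disparity the paper reports.
rates = df.groupby(["risk", "race"])["released"].mean().unstack()
rates["gap"] = rates["white"] - rates["Black"]
print(rates)
```

A nonzero gap among defendants with identical risk is exactly the failure the authors flag: an 8 percentage point version of that gap is what they find in the New York City data.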
I take Loury as saying that intervening for racial equality late in life, at the stage where you are filling positions in the workplace or on a college campus, is wrong, especially if you are lowering standards in order to do so. Instead, you have to do the harder work of improving the human capital of the black population much earlier in their lives.
It seems to me that Loury’s warning about the harms of affirmative action is being swamped these days by a tsunami of racialist ideology. Consider the way that a major Jewish movement seeks to switch religions.
In order to work toward racial equality through anti-racism, we must become aware of the many facets of racial inequality created by racism in the world around us and learn how to choose to intervene. Join us as we explore:
– How race impacts our own and each other's experiences of the world
– The choice as bystander to intervene or overlook racist behavior
– How to be an anti-racist upstander
There is more of this dreck at the link.
I foresee considerable damage coming from this. Institutions and professions where I want to see rigor and a culture of excellence are being degraded. Yascha Mounk, who doesn’t think of himself as a right-wing crank, recently wrote Why I’m Losing Trust in the Institutions.
Finally, this seems like as good a post as any to link to an essay from last June by John McWhorter on the statistical evidence concerning police killings.