Opinion | Jens Ludwig and Cass R. Sunstein

Discrimination in the age of algorithms

Are algorithms a threat to justice? Will they increase discrimination on the basis of race and sex?

Many people think so. According to a recent survey, nearly three in five Americans believe algorithms make bias inevitable. A majority opposes using algorithms in areas like criminal justice, hiring, and credit scoring.

But if the goal is to reduce discrimination, the majority is wrong. There is a good chance that getting rid of algorithms altogether, or even significantly reducing their use, would lead to more, not less, discrimination.

After all, the alternative is human judgment, and people discriminate. Compared to people, algorithms can reduce bias for three reasons.

First, algorithms are much better than human beings at solving many problems that disproportionately affect members of disadvantaged groups.

Consider criminal justice. Judges must decide whether defendants await trial at home or in jail, based on a prediction of whether they are likely to flee or to commit new crimes. This task requires probabilistic thinking of the sort that behavioral science tells us is very difficult for everyone, and that might well be infected by racial and other biases.

How do human judges do? Not so well. The evidence shows that judges make a lot of mistakes in predicting flight risk and inadvertently detain many low-risk people (and release some high-risk ones). As a result, they detain far more people than needed to achieve a given reduction in crime.

Algorithms make more accurate predictions. For that reason, they offer a way to limit jail to those who are truly high-risk, with no increase in crime, and to get many members of minority groups out of jail. In New York City, for example, using algorithms to inform these decisions could reduce the jail population by an estimated 40 percent.

And who would benefit most? The African-American and Latino communities, from which nearly nine in 10 jail inmates are currently drawn.
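
To see the arithmetic behind this claim, consider a minimal sketch in Python. Every number in it is hypothetical; the point is only that a more accurate risk ranking can hit the same expected level of crime while detaining far fewer people than an error-prone one.

```python
import random

random.seed(0)

# Hypothetical "true" probabilities that each of 1,000 defendants,
# if released, would flee or reoffend. A real system would estimate
# these from historical data.
true_risk = [random.random() for _ in range(1000)]

def detentions_needed(ranking, crime_budget):
    """Detain defendants from the top of `ranking` until the total
    expected crimes among those still released falls to `crime_budget`."""
    released_risk = sum(true_risk)
    detained = 0
    for i in ranking:
        if released_risk <= crime_budget:
            break
        released_risk -= true_risk[i]
        detained += 1
    return detained

# An accurate ranking (a good algorithm) versus a noisy one
# (an error-prone human predictor), each aiming at the same target.
accurate = sorted(range(1000), key=lambda i: -true_risk[i])
noisy = sorted(range(1000), key=lambda i: -(true_risk[i] + random.gauss(0, 0.3)))

budget = 100  # made-up acceptable number of expected crimes
print("accurate ranking detains:", detentions_needed(accurate, budget))
print("noisy ranking detains:   ", detentions_needed(noisy, budget))
```

Run it and the accurate ranking detains substantially fewer people for the same crime budget; that gap is, in miniature, the jail-population reduction described above.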

Second, algorithms have the potential to reduce discrimination by improving our ability to detect it.

When human beings discriminate, it’s often hard for the legal system to find a smoking gun. Employers might be less likely to hire African-Americans or women, but they might also be careful not to write or say anything that could be used against them in court.

There’s another problem. Research shows how little we often know about our own thinking. Even well-intentioned people may possess unconscious or implicit biases on the basis of race or gender.

In contrast, algorithms of the sort used in criminal justice or hiring require writing code that specifies exactly what data are used and what objective is being optimized. That code is a concrete artifact that can be inspected.

We can now answer, for the first time, counterfactual questions like, “If this candidate were a man, would she have been hired?” This is massive progress — a game changer for detecting bias.
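
To make that concrete, here is a minimal sketch of such a counterfactual check in Python. The model, the features, and the candidate are invented for illustration (including a deliberately planted bias for the test to find); with a real system, the same flip-the-attribute test would be run against the employer's actual code and data.

```python
def score(candidate: dict) -> float:
    """Toy hiring model. The explicit use of "sex" is a deliberately
    planted bias, so that the audit below has something to find."""
    s = 0.5 * candidate["years_experience"] + 2.0 * candidate["test_score"]
    if candidate["sex"] == "M":
        s += 0.3
    return s

def counterfactual_gap(candidate: dict, attribute: str, new_value) -> float:
    """Re-score the same candidate with one attribute flipped.
    Holding everything else fixed isolates that attribute's effect
    on the model's output."""
    flipped = dict(candidate, **{attribute: new_value})
    return score(flipped) - score(candidate)

candidate = {"sex": "F", "years_experience": 6, "test_score": 0.8}
gap = counterfactual_gap(candidate, "sex", "M")
print(f"Score change if this candidate were a man: {gap:+.2f}")
```

Because the code and data are inspectable, the answer is a number rather than a guess about what was in a decision-maker's head.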

This leads to the third reason algorithms can help reduce discrimination: Once bias has been detected, algorithms let us implement and scale solutions in ways that would be impossible with human beings.

Suppose, for example, that an employer learns that its hiring decisions have been infected by a bias against women. The boss could issue a firm directive: Stop discriminating! That might work, but exhortations often fail, even when they come from the top.

By contrast, the code for a hiring algorithm could be written to prevent biased outcomes. The algorithm could be strictly forbidden to take race or sex into account. And instead of predicting which applicants human beings would have hired, it could be asked to predict actual performance on the job.
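
As a rough sketch of what those two design choices could look like in practice, here is a short Python example using scikit-learn on synthetic data. The feature names and numbers are hypothetical, not any employer's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicant data. "sex" is recorded here only to make the
# point that it is deliberately left out of the model's inputs below.
sex = rng.integers(0, 2, n)            # protected attribute: excluded
experience = rng.normal(5.0, 2.0, n)
skills_test = rng.normal(0.0, 1.0, n)

# Design choice 1: the label is actual on-the-job performance, not
# past human hiring decisions, which would bake old biases into the model.
performance = (0.4 * experience + skills_test + rng.normal(0, 1, n)) > 2.0

# Design choice 2: protected attributes are strictly excluded from inputs.
features = np.column_stack([experience, skills_test])

model = LogisticRegression().fit(features, performance)
print("coefficients (experience, skills_test):", model.coef_[0])

# Caveat, echoed below: dropping a protected attribute is not enough if
# other inputs are correlated with it; proxies can still leak bias.
```

Unlike an exhortation from the boss, a constraint written into the code is applied to every decision, every time.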

These points should not be misunderstood. Algorithms are produced by people, and people discriminate. If an algorithm gives weight to credit scores, it might turn out to be biased, at least if those scores are themselves a product of bias.

There is no guarantee that algorithms will reduce rather than increase bias, as high-profile examples of discriminatory algorithms make abundantly clear. So it becomes critically important to get the right regulations and laws in place to deal with the new issues that algorithms create.

But given algorithms' immense potential to reduce discrimination on the basis of race, sex, and other grounds, it does society no service to blame algorithms as a whole rather than specific algorithms. If we want to eliminate discrimination, we will ultimately end up relying more, not less, on algorithms.


Jens Ludwig is a professor at the University of Chicago and director of the University of Chicago Crime Lab. He can be reached at jludwig@uchicago.edu. Cass R. Sunstein is a professor at Harvard Law School. He can be reached at csunstei@law.harvard.edu.