Hiawatha Bray | Tech Lab

The software that runs our lives can be bigoted and unfair. But we can fix it

New York City Councilor James Vacca isn’t a fan of “Star Trek,” but I bet Captain Kirk would be a fan of James Vacca.

It was mere science fiction back in 1967 when an episode of “Star Trek” featured Kirk being court-martialed and nearly convicted because a computer falsely determined he was a liar. These days, real-world judges and parole boards use computers to help decide whether a suspect should get bail, or whether to let a convict back on the street.

They’re called “automated decision systems” and are in wide use — not just by businesses to issue credit cards or mortgage loans, but also by government agencies, which use software algorithms to decide which high school a child will attend, how many police officers to assign to a particular neighborhood, whose tax return to audit, or which immigrants get a visa.

But what if the computer relied on data that was tilted against certain races or nations of origin? How would you know the computer’s decision was unfair? Even if you suspected it, how do you cross-examine a computer?

Vacca proposes the next-best thing: a sweeping citywide audit of automated decision systems in use by New York City government, aimed at detecting flaws in the technology before they ruin someone’s life.

“This is uncharted territory we don’t know enough about,” Vacca said. He believes the plan, passed by the City Council earlier in December, will make the city the first government entity in the United States to comprehensively study how it uses these systems. The ultimate goal is to purge the programs of what computer scientists call “algorithmic bias.”

A computer’s decisions can be tainted by prejudice, just like those of humans. Programmers can make careless assumptions that aren’t intentionally prejudiced but can produce biased outcomes. Also, these artificial intelligence systems learn how to make decisions by analyzing huge databases of human activity. If the data going in are biased, then expect the results coming out to be biased.
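
To see how that happens in miniature, here is a toy sketch in Python. Everything in it is invented for illustration: the “neighborhoods,” the rates, and the deliberately skewed record-keeping. It stands in for no real product; it only shows that a model which faithfully learns from biased records will just as faithfully reproduce the bias.

import random

random.seed(0)

# Hypothetical historical records: (neighborhood, was_flagged_high_risk).
# Both invented neighborhoods have the same true reoffense rate (20 percent),
# but past practice also flagged extra people from neighborhood "A".
def make_history(n=10_000):
    records = []
    for _ in range(n):
        hood = random.choice(["A", "B"])
        reoffends = random.random() < 0.20            # identical for both groups
        over_flagged = hood == "A" and random.random() < 0.30
        flagged = reoffends or over_flagged           # the biased labeling process
        records.append((hood, flagged))
    return records

history = make_history()

# "Training" here is just estimating the flag rate per neighborhood --
# the simplest possible model, standing in for a fancier algorithm.
def flag_rate(records, hood):
    group = [flagged for h, flagged in records if h == hood]
    return sum(group) / len(group)

for hood in ("A", "B"):
    print(f"learned risk score, neighborhood {hood}: {flag_rate(history, hood):.2f}")

# Neighborhood A scores roughly twice as high as B even though the underlying
# behavior was identical: the model has faithfully learned the old bias.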

For example, COMPAS, a computer system widely used by judges around the country to help set bail and prison sentences, was more likely to wrongly label black defendants as high-risk than white defendants, while white defendants were more likely to be mistakenly marked as low-risk, according to a 2016 investigation by the nonprofit news organization ProPublica.

Computer scientists have been fretting about algorithmic bias for years. In 2013, I wrote about Latanya Sweeney, the African-American computer scientist who found that Google searches of “black-sounding” names like her own routinely triggered accompanying ads for criminal background-checking services — ads that rarely appeared when “white-sounding” names like Chad or Caitlin were searched.

Why? Probably because, over time, more people clicked on those background-check ads when they searched black-sounding names than when they searched white-sounding ones. Google tailors what it displays to deliver what it thinks the public wants, so the skewed results can reflect our biases, not those of some Google engineer.

In 2016, when Amazon refused to offer free same-day deliveries to mostly black neighborhoods in Boston and other US cities, the company said race had nothing to do with its decision. Amazon used economic data by ZIP code to determine where to offer same-day delivery, and wrote off low-income neighborhoods that had few Amazon Prime members.

It may not have intended to offend, but that didn’t spare Amazon from the firestorm that followed.

COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions, is used by a number of state and county governments for a host of criminal justice decisions. In its investigation, ProPublica found that judges who relied on COMPAS might be more likely to deny bail to a relatively harmless black defendant but grant it to a white defendant who posed a higher risk.

The maker of the software, a company called Northpointe, strongly disputed ProPublica’s findings. But its underlying source code is not available for public inspection, so critics say the program is like a witness who can’t be questioned by the defendant.

A convicted felon in Wisconsin whose sentence was partly based on a COMPAS report made just that argument to that state’s Supreme Court last year, but lost. The US Supreme Court refused to hear his appeal, but with many states using COMPAS and similar programs, the issue is likely to end up at the high court sooner or later.

Opening up these programs’ source code for inspection wouldn’t be enough, either. “It doesn’t tell you anything about the biases built into the training data,” said Joel Reidenberg, a founding director of the Center on Law and Information Policy at Fordham University. If a sentencing program is trained on data that tends to show defendants from majority-black neighborhoods are more likely to skip bail than those from majority-white areas, you can expect its recommendations to be racially slanted.

And that rips open another can of worms: What if the racially slanted prediction is accurate?

To avoid prejudiced results, programs like COMPAS don’t track race at all. But they do track data that can serve as proxies for race — income and neighborhood data, for example.
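
Here is another toy sketch, again with made-up numbers, showing why leaving race out of the data doesn’t make a model race-blind: in a segregated city, something as mundane as a ZIP code can reveal race almost as reliably as asking outright.

import random

random.seed(0)

# Hypothetical, heavily segregated city: ZIP code 1 is 90 percent group "X",
# ZIP code 2 is 90 percent group "Y". All numbers are invented.
def make_population(n=10_000):
    people = []
    for _ in range(n):
        race = random.choice(["X", "Y"])
        if race == "X":
            zip_code = 1 if random.random() < 0.9 else 2
        else:
            zip_code = 2 if random.random() < 0.9 else 1
        people.append((race, zip_code))
    return people

people = make_population()

# How well does ZIP code alone "predict" race? If the answer is "very well",
# then any model fed the ZIP code is, in effect, being fed race as well.
guess = {1: "X", 2: "Y"}
correct = sum(1 for race, z in people if guess[z] == race)
print(f"race recovered from ZIP code alone: {correct / len(people):.0%}")

# Prints roughly 90 percent: the supposedly race-blind feature leaks the
# protected attribute, which is what researchers mean by a "proxy" variable.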

What if the software is correct in predicting that suspects from a majority-black neighborhood are more likely to skip bail? Do we deny bail to a disproportionately high number of black suspects? Or do we switch off the computer or tweak the software to make the results more politically palatable?

Amazon confronted a similar problem when it denied same-day delivery to low-income black neighborhoods. The math may have been absolutely right: Maybe it doesn’t pay to offer it in neighborhoods with few Prime customers that happen to be majority black. But the company’s decision was still a bitter insult to thousands of potential customers. So Amazon backed down.

In the same way, our worries about algorithmic bias might lead us to discard sentencing software. But that’s no solution; human judgment may be just as prejudiced — or even more so.

“While I think it’s really tempting to seek out a silver-bullet solution to cure bias, it’s not going to work,” Kate Crawford, a data scientist at Microsoft Corp., warned in a recent speech at a conference in Spain. “We can only gather data about the world that we have, which has a long history of discrimination. So the default tendency of these systems will be to reflect our darkest biases.”

It turns out that algorithmic bias isn’t just about algorithms. It’s a social and philosophical problem as well as a technical one. Experts from law and the social sciences will have as much to say about it as the engineers.

It took a shrewd lawyer, as well as Mr. Spock, to get Captain Kirk off the hook. The rest of us will have to count on elected officials such as James Vacca, who realize the computers that run our police departments, courts, and schools ought to be at least as accountable as he is.

Follow Hiawatha Bray on Twitter @GlobeTechLab.