
This week in ‘Say More’: The computer scientist who became an AI skeptic

When she was a graduate student at MIT, Joy Buolamwini noticed that face-tracking software couldn’t register her face. That led to many more questions about racial blind spots and bias in AI software.


About eight years ago, when she was a graduate student in the MIT Media Lab, computer scientist Joy Buolamwini, who is Black, ran some standard face-tracking software on her computer and noticed that it couldn’t register her face unless she put on a white mask.

What was going on?

Figuring out the answer to that question was her first step to becoming an AI activist — the founder of the Algorithmic Justice League and a “poet of code” who incorporates art into her technological critique.

Buolamwini describes her journey on the latest episode of “Say More,” the Globe Opinion podcast on which I’m the guest host this week. Among the subjects we discuss:


  • How the problems with facial recognition software that she has warned about for years are still cropping up in systems used by law enforcement — leading to wrongful arrests.
  • How the latest generative AI systems have some of the same blind spots and perpetuate biases despite being trained on vastly more data.
  • What she told President Biden when she was invited to advise him on AI regulation.

Buolamwini also has a new book out — “Unmasking AI: My Mission to Protect What Is Human in a World of Machines” — and we’ve published an excerpt in Globe Ideas.

You can find all episodes of “Say More” on Apple, Spotify, or wherever you listen to podcasts. If you have questions or ideas for episodes, send them our way. And if you like the show, please follow it and write a review.

Brian Bergstein is the deputy managing editor of Ideas.