
Q&A: What’s the point of automated gender recognition software?

A growing number of activists say there’s no place for technology that guesses whether a face is male or female.

Os Keyes, a researcher and writer at the University of Washington, speaks at a conference in 2019. Pierre-Selim Huard/Wikimedia Commons CC BY 4.0

Spotify, the world’s largest music streaming service, offers customers more than they’re looking for. The company analyzes what users listen to and like, and its recommendation software suggests related music. Recently, Spotify acquired a patent for new technology designed to learn more about us and our tastes by studying our speech. One capability of this technology is to guess a person’s gender by analyzing voice data.

In response to criticism of the patent, Spotify said it hasn’t actually used the technology and has no plans to start. But several groups want a firmer commitment. Access Now, a digital civil rights group, argued in a letter to Spotify that individuals should be free to determine their own gender and that such software would discriminate against trans and nonbinary people. And a coalition of musicians is urging Spotify to affirm that it will never “use, license, sell, or monetize” the technology, calling it “a violation of privacy and other human rights.”

Nonetheless, gender recognition technology already exists. Tech giants like Microsoft and Amazon include the feature in commercial facial recognition systems.

Os Keyes, an artificial-intelligence researcher at the University of Washington, is trans and attuned to the dangers of overly reductive representations of our identities. Having long criticized automated gender recognition, Keyes is now making a case for banning the technology. This interview has been edited and condensed.

Where are automated gender recognition systems being used? And what benefits do advocates say they provide?

The neat summary is “advertising, analytics, and access.”

Companies are increasingly designing and deploying digital billboards that can dynamically alter what they display. As an example, think of the big ones at Times Square made up of lots of LCD screens. There are proposals to show different ads to different people based on who the people are. One work I read genuinely used the example of advertising “cars to men and pretty dresses to women.”

Just as companies like granular advertising, they also like knowing who they should be marketing to. “Did any women come to this cinema showing? If not, we should change our programming.” “What percentage of customers to our clothing shop are men? If it’s low, we should make different clothes.” Or, more grimly, “Did anyone who isn’t recognized as a woman try to go to this bathroom? If so, call security.”

Finally, it’s touted as making facial recognition systems more efficient. If you have facial recognition at a border, or a licensing office, or an employment app, you have to compare the face of the user to potentially millions of faces in a database. That’s time-consuming and inefficient. Far more efficient (so the thinking goes) is being able to determine the user’s gender and so automatically ignore 50 percent of your database. Half the comparisons, double the speed!
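To make the "half the comparisons, double the speed" claim concrete, here is a minimal, hypothetical sketch in Python (not drawn from the interview or any real system) of how a gender pre-filter is supposed to narrow a one-to-many face search, and how a single wrong guess excludes the true match before any face comparison even runs. The gallery size, embedding dimension, labels, and index values are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of the "filter by guessed gender first"
# optimization described above. Nothing here comes from a real product;
# gallery size, embedding dimension, and labels are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: 100,000 enrolled face embeddings, each tagged "M" or "F".
N, DIM = 100_000, 128
gallery = rng.standard_normal((N, DIM)).astype(np.float32)
labels = rng.choice(["M", "F"], size=N)

def closest(probe, candidates):
    """Index of the candidate embedding nearest to the probe (brute force)."""
    return int(np.argmin(np.linalg.norm(candidates - probe, axis=1)))

probe = gallery[42]  # pretend this person is enrolled at index 42

# Without the pre-filter: compare the probe against all N entries.
match_full = closest(probe, gallery)

# With the pre-filter: compare only against entries whose stored label matches
# whatever gender the classifier guessed for the probe. Roughly half the
# comparisons, hence the "double the speed" pitch.
guessed = "M"  # output of some (fallible) gender classifier, assumed here
subset = np.flatnonzero(labels == guessed)
match_filtered = subset[closest(probe, gallery[subset])]

# If the guess disagrees with the enrolled label, index 42 was dropped from
# `subset` before any face comparison ran, so the filtered search cannot
# return the correct identity no matter how well the faces match.
print(match_full, labels[42], guessed, match_filtered)
```

The speedup only materializes when the guess is right; when it is wrong, the person is effectively removed from their own record, which is the failure mode Keyes describes next.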

What assumptions enable a system to infer gender from physical and behavioral data? And why are these suppositions flawed?

The answer to both is: It assumes physical appearance equates to gender. More broadly, it assumes that physical appearance equates to gender in the same ways, everywhere, for everyone. This has always been false, and false in a way that infringes on personal autonomy. It means people who don’t adhere to a very, very narrow range of norms get treated as invalid and as needing correction. And when I say narrow, I mean narrow. Some of these algorithms treat hair length as a marker of gender, resulting in men with long hair (or women with short hair) being treated as errors. So in practice, these norms end up penalizing very particular people.

Even if the norms were broader, however, for many trans and/or nonbinary people, fitting into generic ideas of gender is not something we have any interest in doing, and may not be something we can do. We cannot adapt to the system and should not be expected to do so. In the absence of the system changing, that means being treated as walking anomalies.

What harms can gender recognition software cause?

In the case of advertising and analytics, it pretty much guarantees constant, automated reminders to people it misclassifies that they don’t fit. Imagine walking through a mall and having the wall advertisements adjust themselves to present you with inappropriate ads that misidentify what type of person you are. If you pitched that as a dystopian sci-fi film, it’d be rejected for being too on the nose!

More serious issues happen with the authentication bit. If the software misclassifies you, then the system is pretty much guaranteed to decide you are not in the database, even if you are. When companies are using facial recognition to govern access to employment apps or housing, you could find yourself literally locked out of your job or home. This may already have happened. We know that Uber’s facial recognition algorithms disproportionately locked out trans women, and from the outside it’s hard to know if that was due to the system looking for stereotypical gendered cues in the driver’s appearance.

Why do you want gender recognition software to be banned instead of made more inclusive?

Because the idea of a trans-inclusive system that automatically assigns or infers gender based on what someone looks like is a contradiction in terms. You simply can’t make it more inclusive. The problem is not that the answer is not good enough, but that the question makes absolutely no sense.

But also, let’s turn that around. Given that we can see quite a few harms of these systems — harms that cannot be addressed — why do we want them, full stop? The problems that they claim to address are not big problems. The issues already have solutions. We don’t need gender recognition for billboards when we have customer choice of where to shop. We don’t need gender recognition for analytics when we have surveys. And we don’t need gender recognition for facial recognition efficiency because, well, we don’t need facial recognition. Gender recognition is a solution in search of a problem — one that causes many more in undertaking that search.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Follow him on Twitter @evanselinger.
