Kate Crawford is a leading artificial intelligence researcher who makes a startling claim: AI is neither artificial nor intelligent.
Both of those misperceptions are important, she argues, because they lead people to overlook or minimize AI’s social costs.
If we think of AI as artificial because it’s just software, Crawford contends, we too easily miss the concrete effects it has on, say, business supply chains and workplaces. She points to the physical toll on people working in Amazon’s heavily roboticized fulfillment centers. When Crawford visited one, she saw many workers relying on physical supports: bandages on their elbows, guards around their wrists, and braces on their knees. Using AI to streamline labor may be pushing people to work so hard to be maximally efficient that their bodies break down. Amazon now says it will adjust its algorithmic-management system to reduce the rate of injuries.
If we define AI as intelligent, we ignore the human biases and power structures that shape how it’s made and deployed. Applications such as “affect recognition” software are being used to assess the emotions or personal characteristics of students and criminal suspects, even though the underlying science is “at best incomplete and at worst misleading,” says Crawford, who is a research professor at the USC Annenberg School for Communication and Journalism and a senior principal researcher at Microsoft Research.
I interviewed Crawford about her new book, “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” This conversation has been edited and condensed.

A race is on to benefit from AI as quickly as possible. Yet you argue there’s value in slowing down and better understanding its risks and costs. What mistakes are people making with AI now?
A consistent mistake is assuming the current problems with discrimination and bias in AI are technical problems that can be fixed by removing problematic data or modifying algorithms. There is a deeper issue at work here: AI systems seek to automate labels for gender, race, personality, and creditworthiness. A proxy will stand in for the real; a toy model will be asked to substitute for the infinite complexity of human subjectivity.
What are some examples of software miscategorizing people as a result of these erroneous assumptions about who they are and what they’re capable of?
Job-interview software is based on really questionable scientific foundations. The algorithms assess micro-expressions on your face to decide whether or not you’ll be a good employee. Then there’s the more hidden substrate of how AI is trained to see and categorize people. Training sets are, in many ways, the DNA building blocks of AI. Many are assembled from images grabbed off the Internet. Some have classified people into a binary conception of gender or a five-part classification of race that hearkens back to disturbing histories, like apartheid in South Africa. Some even categorize people according to morality or character. I’m thinking of ImageNet, one of the most famous benchmark training sets in the history of computer vision. It had incredibly disturbing categories for people, from “bad person” or “kleptomaniac” to terms that are far too racist and misogynistic to print in a newspaper.
Is it hard to get data scientists and software engineers to see the dangers of the models you just described?
You know, strangely, I think many people in technical domains are open to having these conversations. The difficulty, however, is moving beyond the idea that there is a simple tech fix. What my research has taught me over many years is that these are socio-technical issues. This means that we need people from very different disciplines and very different backgrounds addressing how technical systems will play a role in significant social institutions like health care, criminal justice, and education. Computer science has to grapple with the fact that it is a core social discipline that is doing forms of social engineering.
Suppose people take your advice and adopt a more sophisticated view of AI. How might the shift in perspective lead to better outcomes?
If AI currently serves existing structures of power, the question is whether it can be “democratized.” Could there be forms of AI that are reoriented toward justice and equality rather than industrial extraction and discrimination? It’s an appealing idea, but the infrastructures and forms of power that enable and are enabled by AI skew strongly toward the centralization of control. The first shift would be acknowledging that technology should not be the central actor for building a more just and sustainable world. Next is to fundamentally question the idea of technological inevitability: to understand where AI should be used and where it should be refused.
Please elaborate. How can AI be used to prioritize human well-being and environmental sustainability?
If we look at the way many of these issues have been addressed, there is an assumption that technology inevitably will continue to develop and all we can do is carve out small spaces of protection for privacy, ethics, and greater transparency. But what if we reverse that picture and start instead with the question of what kind of world we want to live in and then ask, “How does technology serve that vision rather than drive it?”
What are good strategies for rejecting AI when it’s inappropriate to use? After all, many see AI dominance as the key to being globally competitive.
The rhetoric around artificial intelligence is that we are in an AI war. There’s a commonly repeated refrain that if the United States rejects some forms of AI, then China will simply adopt them. This creates a race to the bottom, where any and all techniques are acceptable, no matter how invasive or predatory. Instead, we need strong regulatory frameworks to protect against systems that erode civil society, amplify inequalities, and make everyone subject to constant granular surveillance and scoring. Some communities are directly rejecting AI systems when they feel these are harming them: from city-based facial recognition bans to students protesting against the use of AI-driven remote proctoring. I’m inspired to see the emergence of a politics of refusal. But it’s up against considerable corporate and government power.
Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Follow him on Twitter @evanselinger.