Someone’s always worried that something new is going to lead us down a slippery slope to ruin. When a documentary about deceased chef Anthony Bourdain included footage in which a synthetic voice imitates his speech, critics said digital fakery could turn into a downward spiral to a world where we don’t know what’s real. Some Republicans who oppose vaccine mandates say the shot is a step toward the government continually depriving citizens of personal choices. And now privacy and cybersecurity professionals are sounding the alarm that Apple’s new approach to scanning phones for child sexual abuse material is a springboard to further surveillance.
Although slippery slope warnings are everywhere, many of them simply aren't credible or realistic; they rest on tired metaphors like the slowly boiling frog (which, in reality, will try to get out of the heated pot) or death by a thousand cuts. Indeed, doomsayers often commit textbook cases of poor reasoning by supporting dire predictions of slippery slopes with flimsy evidence. As a result, in my work as a philosopher of technology and a privacy advocate, one of the surest ways to induce eye-rolls or outright opposition to an argument is to invoke the slippery slope.
And yet: I’m on a crusade to convince others that many surveillance technologies do pose serious slippery slope risks. In fact, when Villanova professor Brett Frischmann and I wrote our book “Re-Engineering Humanity,” we made the case that if we truly want to think critically about future technological developments, we must take slippery slopes seriously. So how can we tell when the slope really is slippery — when things happening now are poised to spiral out of control? Apple’s new phone-scanning technology is a case in point.
When there isn’t a slope at all
There are costs to getting hoodwinked by bad slippery slope arguments. Falling for them is a recipe for inaction. For example, tech companies have been reluctant to improve social welfare by policing unacceptable content because they want to steer clear of getting denounced for chipping away at free speech. That’s why Google deserves praise for taking a risk a few years back when it removed search results for web pages hosting revenge porn. Rather than giving in to the claim that policing some material is a slippery slope to full-on corporate censorship, the company acknowledged that nonconsensually shared pornographic imagery is in a special category: It lacks positive expressive value, assaults dignity, causes emotional and reputational harm, and increases the risk of physical assault.
Taking a stand against revenge porn won’t lead to a slippery slope for censoring anything else — including merely controversial ideas — because these images are distinctively toxic. The only way to claim otherwise is to say someday Google might treat this as a precedent for removing analogous content that deserves free speech protections. Given the defining features, nothing credible fits the bill.
It’s similarly bogus to claim that vaccine mandates create a slippery slope to a future in which government agencies control much more of our behavior. The vaccines have been deemed safe and are used only to protect against COVID-19. So when Senator Ted Cruz calls a vaccine mandate for federal employees a “slippery slope towards excessive government control,” because it could increase the likelihood of “school closures” and “more draconian lockdowns,” he distorts the underlying public health justification for mandates, which is to avoid the very outcomes he cites. When Tucker Carlson speculates that a vaccine mandate could lead to forced sterilizations and lobotomies, he’s playing fast and loose with the idea of the government getting involved in our personal health. The US government’s response to the pandemic is a narrowly defined endeavor. There’s no legitimate comparison to eugenics programs, any more than there is for the immunization shots schools have long required.
In fact, using history as our guide, we know that the factors that lead to authoritarianism bear no resemblance to vaccine mandate policies. Instead, the culprits are propaganda, attacks on credible journalism, and punishment of criticism and dissent with the help of secret police and widespread government surveillance. Since these approaches remain potent, governments don’t need to craft a new social engineering playbook to dominate us.
And yet, some slopes really are slippery
But failing to spot genuine slippery slopes can be detrimental, too. It can leave us stuck with problems we could have avoided. When we’re young and money is tight, it’s tempting to put off saving for retirement — potentially leading to a lifetime of poor spending decisions and inadequate investments.
Unfortunately, it’s often easiest to appreciate a slippery slope in hindsight, after a sequence of events runs its course. When Ethan Zuckerman, founder of the Institute for Digital Public Infrastructure, describes QAnon as “a reservoir at the bottom of that slippery slope,” he means adherents believe its outlandish lies because, earlier on, they lost trust in one mainstream institution after another. After enough disappointment, these folks became vulnerable to the lie that being savvy requires rejecting mainstream thinking altogether.
Right now, bad responses to the pandemic underestimate the pull of a slippery slope. Delta Air Lines’ decision to charge its unvaccinated workers an additional $200 per month for insurance because they are increasing the company’s financial risk opens the door to unfairly raising premiums for employees with other conditions that can be expensive to treat, including diabetes and cancer. In other words, it’s too easy for Delta to treat this particular cost-cutting measure as a precedent; it has the incentive to find other examples of unequally shared medical risks. In contrast, recall the example of Google and its non-slippery slope: Google is not incentivized to find more content to obscure. To avoid a slippery slope, Delta should require vaccination for employees who aren’t working fully remotely, like other companies do.
Apple’s new technology for screening images for evidence of sexual abuse has related characteristics. Under this new effort, announced in August, Apple is scanning iMessage communications and photos to detect child sexual abuse material. The messages and photos remain encrypted and can’t be seen by Apple employees, but automated systems on the device can compare digital fingerprints (“hashes”) of the images with a database of known abuse-image hashes maintained by the National Center for Missing and Exploited Children, and Apple can alert the center if a match is found. Apple had been doing something similar for two years with material it stores in iCloud; the new system applies the technology to material that’s still on individual phones.
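The mechanics of this kind of matching are simple enough to sketch in a few lines of code. What follows is a deliberately simplified illustration, not Apple’s actual implementation: Apple’s real system uses a perceptual hash (NeuralHash) that tolerates resizing and re-encoding, plus cryptographic protections before any match is revealed, whereas this sketch uses an exact hash and invented function names purely to keep the example self-contained.

```python
import hashlib

# Hypothetical database of known-bad fingerprints. Real systems ship
# hashes of known abuse imagery supplied by NCMEC, never the images
# themselves. This entry is just the SHA-256 of the bytes b"test".
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest of the file's contents.

    A real deployment uses a *perceptual* hash so that re-encoded or
    resized copies still match; an exact cryptographic hash is used
    here only for simplicity.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """On-device check: is this file's fingerprint in the database?"""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

Note what the sketch makes visible: the matching machinery is entirely indifferent to what the database contains. Repurposing it for political censorship requires no new technology, only a different list of hashes, which is precisely the slippery slope critics worry about.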
Regardless of the motive to protect children, Apple is, like Delta Air Lines, setting a dangerous precedent: It has created a fresh justification and technical means for examining our private information. Princeton researchers Jonathan Mayer and Anunay Kulshrestha, who have built a similar system to study its limitations, concluded it could be “easily repurposed for surveillance and censorship.”
Unfortunately, governments worldwide have a strong incentive to ask, if not demand, that Apple extend its monitoring to search for evidence of interest in politically controversial material and participation in politically contentious activities. Indeed, if past is prologue, comparable behavior elsewhere suggests they will. As the Princeton researchers note, “WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy.” Bottom line: The strong incentives to push for intensified surveillance combined with the low costs for repurposing Apple’s technology make this situation a real slippery slope.
Applying the slippery slope argument to Apple draws criticism from both directions.
For example, Edward Snowden thinks the claim doesn’t go far enough to describe the dangers of Apple’s technology. “This is not a slippery slope,” he writes, “it’s a cliff.” He’s confident it’s only a matter of time before “Apple . . . will lose control” of what its software scans on users’ phones. Similarly, the Electronic Frontier Foundation asserts: “That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.”
On the other hand, University of California, Berkeley professor Deirdre Mulligan notes that “there are always going to be slippery slope arguments” and insists that worrying about potential abuses in the future doesn’t justify stopping Apple from doing good work right now.
Claims about slippery slope surveillance are deeply pessimistic, which means critics can write them off as mere techno-panics — fear-based exaggerations that wildly overstate threats and vastly underestimate possible responses to the real dangers. People like me who see slippery slopes in surveillance technologies are often accused of embracing technological determinism, a position widely discredited in fields like philosophy and science and technology studies. Technological determinism posits that change is primarily driven by technology itself rather than by the humans who create it, use it, and regulate its use.
Here’s why these objections don’t bother me. Snowden is wrong to be so definitive about a cliff rather than a slope, because we can’t predict with any certainty what Apple will do in the future with technology for scanning images. There are too many variables to consider. The slippery slope arguments, however, have the advantage of recognizing the range of possibilities in a complex situation. They’re never about foregone conclusions, like knowing all of the dominos in a perfectly arranged stack will fall after we push the first one.
Admitting uncertainty makes advocacy challenging. Dogmatic assertions can be emotionally compelling, and slippery slope claims can’t be stated in a mathematically precise language like “There’s a 90 to 95 percent chance that Apple is wrong when it says it’ll hold the line on how it’s using the technology.”
The best we can do — which is quite a lot — is follow the lead of scholars who have studied slippery slope dynamics, like law professor Eugene Volokh and philosophy professor Douglas Walton, and specify as precisely as possible what causal mechanisms create strong incentives and disincentives for specific behaviors to occur. In the case of Apple, as Snowden points out, the costs of having software scan our files for additional information are minuscule, and governments are enticed to gobble up data ostensibly to improve public safety. That’s a dangerous combination. It’s not a fait accompli as he indicates, but the odds are stacked against us.
When my collaborator Woodrow Hartzog of Northeastern University and I argue that facial recognition technology poses a slippery slope to an Orwellian future, we’re not embracing technological determinism: We’re being realistic about incentives and disincentives. We don’t believe society is slavishly bending to the dictates of technology. Instead, we argue there are too many legal gaps to prevent widespread and abusive facial surveillance. Reforming the law by adding more limits on how people use the technology won’t go far enough. There are too many incentives to keep relaxing the limits over time.
Don’t dismiss slippery slope arguments simply because they’re speculative and scary. Instead, reject ones that project terrible chain reactions without presenting credible evidence of their likelihood. Focus on the details of what precedents establish, the strength of incentives for people to do harmful things in the future, and the realistic possibilities for pumping the brakes before things go too far. In some cases, the road to hell is greased by good intentions.
Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Follow him on Twitter @evanselinger.