Think back, if you can recall, to the simpler times of 2018: Fergie was slaying the national anthem (in the pre-Beyoncé sense of the word). Stephen Miller was settling in to hibernate through the sunny season. And a naive tech columnist whose blithering optimism is going to get him hurt someday wondered aloud if the then-blooming “deepfake” phenomenon might spell trouble for objectivity.
“What happens when the rough edges are smoothed out, and the illusion becomes more convincing than the truth?” I asked. “Will reality become increasingly more a matter of preference?”
Sometimes I just want to shake me. Deepfake videos use machine learning technology to seamlessly swap out one face for another. And since November of last year, thanks to easily acquired software like FakeApp and the endless resources of Internet porn, the trend has metastasized into a full-blown crisis. Reddit banned its primordial deepfake forums, and while Pornhub claimed to do the same back in February, referring to the videos as “nonconsensual content,” BuzzFeed found them running rampant on the site.
It only took me 30 seconds to find one and five to close the browser (but only after I sat through the K-Y Jelly ad that preceded it). Ariana Grande’s face has been spirited in from some anodyne TV interview. She’s talking to someone in a loop about how she likes to spend her free time (in her own body, presumably). The body attached — ravaged by a man whose head is out of frame — is also a body detached. You feel like you’re witnessing a crime, but you’re not — yet.
The threat of the deepfake looms so large in part because it is so legal. While reality seems to matter less and less across the board, it’s ironically the defining factor between what’s legally permissible and what isn’t in the realm of deepfakes. Where instances of “revenge porn” can in some states be considered illegal on the basis of invaded privacy or nonconsensual disclosure, deepfake porn makes no claim on its own veracity: The illusion is the product, the materials are readily available in the public domain, and thus the product is protected as entertainment.
But beyond the laboratory of porn, the increasing sophistication of deepfakes is sounding alarms across newsrooms — already fighting a pox of fake news.
BuzzFeed this past week teamed with “Get Out” director Jordan Peele to produce an instantly viral video that appeared to feature President Obama — an oddly blank-eyed and perhaps allergy-afflicted Obama — speaking to the camera in a familiar mode and cadence: “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time,” he calmly warns, “even if they would never say those things.” After some examples (including some unprintable bits about the current president), the screen splits to reveal Peele playing ventriloquist to a digitally manipulated Obama.
From there, the flaws make themselves more apparent — the slight pixelation around the mouth, the slightly off timbre of his voice, those dead eyes — but had these snags of the uncanny not caught my attention, had my focus been compromised even a little, I might not have realized I was watching a puppet show. BuzzFeed also supplied a list of additional ways to spot a deepfake in the wild — though the less wild they get, the harder they become to spot, and the more dangerous they get.
“Without ‘video proof’ — our shining source of truth — the world could get messy, fast,” warns Kristen Dold in a piece for Rolling Stone. “Trolls and bots won’t just be sharing incendiary articles on social media, they’ll be creating videos to go with them and bolster their claims.” Imagine altered speeches, doctored press conferences, multiple versions of the same sentences uttered by multiple versions of the same mouth. Now prepare for them. “Our greatest protection at the moment? Believing less of what we see,” writes Dold.
Remember that blithering optimism I mentioned? It’s acting up again.
For as dire as Dold’s forecast is, and as severe a storm as we can see forming along this front, I can’t help but think we may require a disaster like this to wash away the mess we’ve made and give us reason to rebuild. I can’t help but consider whether the complete vaporization of truth online might actually be preferable to the uncertain state we’ve found ourselves in. If we can change our view of the Internet from “information superhighway” to “tabloid on steroids,” we may stand a chance at maintaining power over it. Truth, after all, can be bent to serve any purpose, but once it’s snapped into pieces, what use is it to anyone?
Deepfakes could ruin the Internet, sure. But they can also teach us to look harder and listen more closely — and to question everything. Does it count as optimism to long for cynicism?