
Puffer coat Pope Francis fooled many with his ‘drip.’ But experts say image underscores danger of AI misinformation.

Fake images of Pope Francis wearing a large white puffer coat, created with the AI image generator Midjourney. (Midjourney/Reddit)

Over the weekend, an image of Pope Francis spread like wildfire on social media platforms. In place of his daily attire of austere white cassocks, he adopted a more modern look — a long, trendy white puffer coat, his traditional pectoral cross resting above a cinched waist. The pontiff exuded “drip” and “swagger,” in the words of online commenters, and received widespread praise for his striking style choices.

While many assumed the picture was genuine, given its realistic look and a pope known for breaking convention, it was entirely fake.

The visual is the latest example of an image produced using an easily accessible image generator powered by artificial intelligence. In this case, the program was Midjourney, with the deepfake image shared on Reddit before it quickly exploded on Twitter.

“Welcome to the future! These things have come to stay,” Mathias Risse, Berthold Beitz Professor in Human Rights at Harvard University, who has studied the implications of artificial intelligence, said by e-mail.

Risse said he was “actually a bit puzzled as to why this particular photo went so viral,” given that technology has been capable of producing similar pictures — and even fake videos — for years. But it is a “vivid and prominent reminder of larger trends,” he said, since such manufactured visuals are bound to spread misinformation.

The hyper-realistic image of the pope was not the only AI-produced visual to make the rounds online recently. As rumors of Donald Trump’s arrest swirled, a detailed image of the former president being arrested by New York City police was shared widely on social media. That, too, was created by Midjourney.

But, as at least one Twitter user pointed out, the pope going viral is not out of the norm — whether as a result of becoming a meme or for auctioning off his custom-made and autographed white Lamborghini. So the image of the pope rocking a jacket that resembled one sold by Balenciaga had some basis in reality.

BuzzFeed News interviewed the creator of the image, Pablo Xavier, 31, who said he was high on shrooms when he came up with the idea.

Xavier, who lives in the Chicago area and declined to share his last name, said he was shocked at how quickly the image spread online, and that it was “definitely scary” to see how people believed “it was real without questioning it.”

As the image demonstrated, it will become increasingly difficult to differentiate between what is real and what is fake as the technology advances, experts said.

“You can’t always know what all of the specific tells are of an AI-generated image,” said Alejandra Caraballo, clinical instructor at Harvard Law School’s Cyberlaw Clinic. “What this is doing is lowering a barrier of access to be able to create images like this that are fake, especially with how realistic they look.”

Their production on a mass scale may “lessen people’s trust — once again — in what they see online,” she said.

Whether an image is authentic or not may be secondary, Caraballo said. Much of the misinformation online is “easily debunkable” and the most harmful AI-generated visuals usually derive from conspiracy theories, she said.

“It might be harmless, in this instance, what the pope is wearing,” she said. “The pope wearing a weird jacket doesn’t seem that out of the ordinary compared to what he normally wears. But I think the concern is — what if it’s a politician in some kind of compromising scenario?”

During breaking news, such as mass casualty events, some people may use the technology to spread misinformation on a massive scale, Caraballo said. Havoc is bound to ensue.

“But like the pope [image], I really think it has to lend itself to something that’s slightly plausible in the first place,” she said. “It still has to be something relatively believable for it to spread. You can’t just post a picture of the pope at 9/11 or the pope on the moon or something.”

Generative AI is revolutionizing the way individuals interact with technology, said Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT, in an e-mail. The widespread use of manipulated images, she said, “poses a serious threat to the integrity of public discourse and can lead to the spread of misinformation and disinformation.”

Rus said the existence of images generated with AI highlights the potential for the technology to be used for malicious purposes and underscores the importance of developing guardrails for using it, ethical and responsible principles for AI, and the need for robust regulations.

“Images created with generative AI can raise additional risks and concerns, beyond those associated with traditional images, such as: the spread of misinformation, damaging reputations, manipulation of public opinion, [and] undermining of trust in institutions,” she said.

There is not much the average person can do aside from maintaining a degree of skepticism and looking for independent corroboration, Risse said.

Risse said he believes the biggest risks will come from videos manipulated with AI rather than photos. The ability to produce deepfake videos will soon be readily available, with enormous potential for creativity “and much to worry about, of course.”

“Videos still function as a kind of epistemic backstop — if it’s captured on video, we know it happened,” he said. “If that’s no longer the case, then people can start to create their own realities, and at the very least it will be a while before it all gets sorted out. And technology makes it ever harder to sort things out in the first place.”

While image-generator websites like Midjourney currently have a number of safeguards in place, those checks will weaken as the technology becomes more accessible and broadly distributed, Caraballo said. She also cited the dangers of videos created with AI, such as deepfake pornography.

“The technology is moving faster than law and policy can keep up,” she said. “I think we really need to take a step back and really think through the consequences.”


Shannon Larson can be reached at shannon.larson@globe.com. Follow her on Twitter @shannonlarson98.