The Israel-Hamas war is awash in deadly misinformation. What can we do about it?


Two weeks into the Israel-Hamas war, inaccurate news reports of atrocities have flooded social media, spreading horror and rage worldwide.

Take the deadly explosion at a Gaza hospital last Tuesday. Major news organizations quickly broadcast Hamas’s claims of a deliberate Israeli attack and a death toll of 500. Video evidence later suggested the explosion may have been caused by an errant Palestinian-launched rocket, and US intelligence agencies estimated a lower death toll of 100 to 300. Hamas has since produced no physical evidence that the blast was caused by an Israeli bomb.

It’s still unclear what happened. But the original claim was repeated by huge numbers of social media users and raced around the world, spawning bitter denunciations of Israel, along with riots and protests throughout the Middle East.

This chain reaction of events was misinformation at its most disturbing — and its most dangerous. And media analysts predict a continuing flood of fake news and confusion as the war rages on.

“I think this is probably the worst-case scenario when it comes to misinformation, disinformation, propaganda, and the like,” said John Silva, senior director of professional and community learning at the News Literacy Project, a Washington, D.C., nonprofit that monitors and debunks online falsehoods. “It is far easier for people to spread false and misleading information with this scenario than with most others, because you have such a deeply emotional issue for a lot of people.”

Silva cited a Jordanian friend who posted a false report about Israeli responsibility for the hospital bombing on X, the social network formerly known as Twitter. “He saw that and it just felt true, so he shared it,” Silva said. When Silva replied privately that the report was false, his friend deleted the posting but has continued to post other inaccurate stories about the war.

Since the war broke out, social media companies have said they’re cracking down on fake war news by stepping up efforts to block or reduce the visibility of false and inflammatory postings. But that claim runs counter to their recent history, in which they have substantially cut back on moderating user content.

Meta earlier this year laid off workers developing automated tools for detecting false information on its Facebook platform, while X sharply reduced the size of its trust and safety team, tasked with filtering out fake news postings.

In addition, X’s new policy of letting anybody purchase a “verified” account could be making matters worse. The social network used to attach a blue checkmark to the names of individuals or organizations that provided proof of their identity, and these blue-checked accounts were regarded as trustworthy sources of information. But under new owner Elon Musk, anyone can get a blue check simply by paying $8 a month.

Elon Musk, owner of X, the service formerly known as Twitter. Nathan Laine/Bloomberg

According to the media watchdog company NewsGuard, accounts with blue checkmarks are now being used to deliberately spread false information about the Israel-Hamas war. The company found that 74 percent of X’s most widely read false messages about the war originated from blue-checked accounts.

“The social media companies are at least complicit in what is going on,” said Lee McIntyre, a research fellow at Boston University and author of the book “On Disinformation.” “They are dismantling or at least doing less content moderation than they used to.”

And don’t count on government to help any time soon. In July, a federal court in Louisiana blocked US officials from pressuring social media companies to limit questionable content, calling it a violation of the First Amendment. On Friday, the US Supreme Court lifted the restrictions but agreed to take up the case in its current term, which could lead to a permanent ban on federal involvement in social media moderation policies.

Some misinformation researchers say it’s primarily up to social media users to protect themselves (and each other) from false information by developing new habits. If we can’t tell fact from fiction at today’s speed of information flow, researchers say, we need to slow down, prepare ourselves mentally, and be extra careful about what we share and with whom.

McIntyre at BU thinks a possible solution is something like a “vaccine” for the mind. He’s part of an organization called the Mental Immunity Project, which argues that people can be taught to produce mental “antibodies” that screen out false narratives, much like a flu shot can shield a person from contracting the virus.

“The mind has an immune system kind of like the body does,” said Melanie Trecek-King, the project’s education director and an associate professor of biology at Massasoit Community College. “It has filters that let in some information and blocks out others.”

With the right kind of training, she said, people can develop ways of thinking that make them far less susceptible to spreading fake news or misinformation.

The concept dates back to the early 1960s, when psychologist William McGuire set out to understand why some US soldiers switched sides after being captured during the Korean War. McGuire developed techniques to make soldiers more resistant to “brainwashing” by helping them think more critically about the arguments used by their captors. The training works by exposing people to false ideas and teaching them how to recognize why those ideas are false.

“It teaches the mind how misinformation works,” said Trecek-King. “When the person is exposed to misinformation in the real world, they’re inoculated against it.”

Palestinians check the area of the explosion at al-Ahli Hospital in Gaza City on Oct. 18. Abed Khaled/Associated Press

McIntyre said the Gaza hospital explosion is a case in which mental immunity should have kicked in. Both major news outlets and social media users should have noted that the original claim came from a single, highly biased source. That should have made them doubt the story and demand evidence before sharing it online. “One of the principles of mental immunity is making sure that you’re not racing to a conclusion,” he said.

McIntyre also cited a video that purported to show a Hamas fighter shooting down an Israeli helicopter. In fact, it was an image from a video game. The most dangerous aspect of such fakery is the way it gets amplified by uncritical users. “People who say, look at this atrocity and pass it on,” he said, “and they didn’t check to see that this image is fake.”

A properly immunized consumer won’t just believe any image. “It is so easy to create things, especially with these new generative AI tools, or even with basic Photoshop,” said Silva. Better to hold off on reposting such images until they’ve been confirmed accurate by trustworthy sources, he said.

The News Literacy Project’s website has a section devoted to debunking inaccurate stories about the Israel-Hamas war. The site also offers downloadable training guides that teach users how to recognize false or deceptive messages on any topic, and it directs readers to trustworthy sources of information.

As the hospital story suggests, even established news organizations can get it wrong. Silva said one hallmark of a reliable news source is its willingness to admit mistakes. For example, The New York Times ran an editors’ note admitting that it should have been more cautious in reporting the bombing.

“A correction is a sign of credibility,” said Silva. “It means they own it. They’re accountable.”

Silva’s encounter with his Jordanian friend demonstrated another valuable strategy — friends shouldn’t let friends spread misinformation. “There is research that the sooner someone calls a piece of information into question, the less viral it is likely to go,” he said.

Such messages should be backed up with evidence and composed with courtesy. And don’t turn it into a debate, Silva said. Don’t disparage the person’s worldview, politics, or religion. Instead, ask them to take another look at the post in question.

Silva said the message should be, “It looks like this one thing you shared, it’s not accurate, it’s not true and it could make things worse.”


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.