
Why we click on stuff we know we won’t like

Our research reveals a gap between what goes viral and what people want to go viral.


Why is there a deluge of divisive and negative content on social media? Is it simply that — despite what we’d prefer to think about ourselves — we like this kind of stuff?

After all, research suggests that negativity — especially about our political opponents — is likeliest to go viral online. Maybe the basic explanation is that everyone wants to see content that reinforces their political biases and makes their enemies look bad.

But a new study we’ve published in the journal Perspectives on Psychological Science suggests this couldn’t be further from the truth. Our engagement behavior does not reflect our preferences. We don’t actually like a lot of the content we “like,” share, or click on.

We recruited a representative sample of people in the United States and surveyed them about what they think goes viral on social media versus what they think should go viral.

People overwhelmingly reported that divisive content, negative content, moral outrage, and misinformation all go viral — and past research tells us that this is a fairly accurate view of the situation.

However, people expressed a strong preference for divisive, negative, or false content to not go viral on social media. Instead, they said, it’s educational, nuanced, and positive content that should go viral.

We were heartened to discover that these preferences did not differ strongly between Republicans and Democrats.

Why would people engage with social media content they explicitly say they do not like?

One possibility is that social media content is like junk food or fast food. Those foods are engineered to appeal to our primitive preferences for sweet, fatty, or salty foods and to keep us eating — even when we know it is unhealthy for us. In the same way, social media platforms are designed to keep us scrolling.

Another likely explanation is that extremists dominate social media conversations, while more moderate individuals barely speak up. New research finds that political extremists prefer divisive social media posts about political opponents, while moderates don’t, meaning that a vocal minority of users may be contributing most of the online toxicity.

Indeed, studies have shown that 0.1 percent of users are responsible for 80 percent of the misinformation spread on X, formerly known as Twitter, and a similar pattern has been found for toxic Reddit comments.

Of course, it could be that we should not trust what people say (their stated preferences) and instead focus more on what people do (their revealed preferences).

But in the case of social media, we think it’s important to pay attention to stated preferences, for the same reason it’s important to pay attention to stated preferences about unhealthy food or cigarette smoking. Most Americans wish they could adopt a healthier diet, and about 70 percent of cigarette smokers say they want to quit but find themselves unable to, in part because these products are designed to be addictive. A growing number of people think social media companies are also designing addictive products at the expense of consumer welfare.

One potential solution to this problem is to integrate people’s stated preferences into social media algorithms — amplifying content people explicitly say they want to see, instead of content they can’t look away from.

Platforms such as Facebook and TikTok already have features that do this, but they aren’t very prominent. For instance, TikTok has a “not interested” button on videos, and Facebook gives you an option to select “see fewer posts like this.” But social media platforms could be doing a lot more to integrate user feedback into algorithmic ranking systems so that people’s feeds start to reflect what they say they want to see.
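To make this concrete, here is a minimal, hypothetical sketch in Python of how explicit signals like “not interested” or “see fewer posts like this” could be blended into an engagement-based ranking score. The topics, scores, and the feedback_weight parameter are our own illustrative assumptions, not any platform’s actual ranking code.

# Hypothetical illustration only: not any platform's real ranking system.
# It blends a predicted engagement score (revealed preferences) with
# explicit "show me more/less of this" feedback (stated preferences).

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    engagement_score: float  # predicted chance of a click or share, in [0, 1]

def rank_feed(posts, stated_feedback, feedback_weight=0.5):
    # stated_feedback maps a topic to a value in [-1, 1]:
    # -1 means the user asked to see less of it, +1 means they asked for more.
    # feedback_weight controls how strongly stated preferences override
    # engagement-only ranking.
    def blended_score(post):
        preference = stated_feedback.get(post.topic, 0.0)
        return (1 - feedback_weight) * post.engagement_score + feedback_weight * preference
    return sorted(posts, key=blended_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", "outrage", engagement_score=0.9),    # attention-grabbing
        Post("b", "education", engagement_score=0.4),  # less clicky
        Post("c", "local news", engagement_score=0.6),
    ]
    # The user clicked "see fewer posts like this" on outrage content
    # and asked for more educational content.
    feedback = {"outrage": -1.0, "education": 1.0}
    for post in rank_feed(feed, feedback):
        print(post.post_id, post.topic)

The design choice is the point: the more weight such a blend gives to stated preferences, the more a feed reflects what users say they want to see rather than what they merely click on.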

Facebook has already proved that algorithmic changes that reduce the spread of harmful content are possible. For instance, Facebook boosted the visibility of news from trustworthy sources before the 2020 US presidential election — only to switch the algorithm back right after the election.

Social media companies may not have much motivation to improve their platforms, since they have a financial incentive to promote attention-grabbing content in order to gain advertising revenue. Indeed, Mark Zuckerberg has previously refused to implement some suggestions from his own staff for reducing the spread of harmful content on Facebook out of fear that they might reduce user engagement.

So changes to social media may have to come from the outside, potentially through regulation. For example, our findings indicate that there could be strong support for laws that mandate more transparency around the inner workings of social media algorithms or laws that allow users to tweak their social media feeds to show them content that aligns with their preferences. In our study, 87 percent of participants believed they should have more control over what they see on social media.

In response to an article about our previous research, Facebook argued that social media conversations “reflect what is happening in society and what’s on people’s minds at any given moment. This includes the good, the bad, and the ugly.”

But our research suggests that most people believe that social media amplifies too much of the bad and the ugly — and not enough of the good.

Steve Rathje is a postdoctoral researcher in psychology at New York University. Jay Van Bavel, a professor of psychology and neural science at New York University, is the author of “The Power of Us.”