When social media companies tell us they’re working on fixing problems like misinformation and hate speech, do their assurances seem hollow? Why don’t Big Tech companies use their wealth and power to rein in the chaos they’ve unleashed?
The truth is, digital information really is out of control — and stricter government regulations or more ethical leadership in Silicon Valley wouldn’t change that. That’s the argument that writer and software engineer David Auerbach makes in his new book, “Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities.”
Auerbach says we dramatically underestimate the unpredictability of giving a massive number of people access to a massive amount of computing power — a combination he calls “meganets.” We may like to think that new policies or leadership in tech companies would make it possible to stop the circulation of hatred and dangerous conspiracies, but today’s hyperconnected meganets will be resistant to such approaches, he believes.
Instead, he says, it’s time for different strategies — ones that push back against some of the meganets’ central tendencies. These include soft measures of social control, such as slowing down the speed at which information is shared to decrease virality. He also suggests limiting voices that monopolize online conversations, algorithmically discouraging large groups of users from forming, and injecting novel content — information that is only loosely related to what people typically consume — into sites like Facebook, Reddit, Google, and YouTube to decrease homogeneity.
My interview with Auerbach has been edited and condensed.
What are meganets?
They are our persistent, evolving, and opaque data networks that determine and condition how we see the world. They are the combination of hundreds of millions of servers and hundreds of millions of people interacting nonstop, in ways that are too fast to be observed and patrolled even by the companies that supposedly operate them. They include social networks, cryptocurrency networks, online gaming, and governmental identity networks. Meganets are a general phenomenon that’s come about simply from the mass deployment of network technology around the world and the nonstop engagement of the people connected across it.
Why did you write the book?
Everybody feels like things are getting out of control — that we collectively and individually have less control over life processes than we did 20 or 30 years ago. People are looking for solutions that haven’t been found, and they feel lost.
How does the meganet concept help us understand problems like misinformation circulating on social media?
If you look at misinformation from a traditional perspective, the general attitude is that it would be great if Mark Zuckerberg and his team would crack down through better top-down management. But because it’s a meganet, I’m saying the information is far less subject to administrative control. The algorithms are constantly evolving, not just in reaction to what programmers do but also through what users do. If you engage with one of these networks, you have no guarantee that the algorithm you’re interacting with now is the same one you interacted with a minute ago. And this causes huge problems, not just because it makes things so opaque that a Facebook engineer couldn’t tell you why a piece of content didn’t get filtered out, but because at this speed and scale, you’re always playing catch-up — solving the last problem instead of the current one.
Are you saying that when companies like Meta tell us that they’re working on content moderation, their approach isn’t as effective as they would lead us to believe?
Pretty much, they’re overstating it. There’s even a Facebook memo warning that the company should be careful not to let the narrative take hold that it isn’t in control of its own systems. They would rather be thought of as evil than as powerless.
Why?
To say you only have a coarse-grained level of control is to admit you don’t know how to fix your problems. At the same time, if Facebook says they are working on a problem and can’t fix it, people feel frustrated, and that breeds a sense of futility. It also breeds conspiratorial thinking — a sense that the company could fix it but doesn’t want to.
In the meganet age, there’s been a huge increase in conspiratorial thinking. People blame the Washington elite, the tech elite, the Bilderberg group, QAnon, and more. It’s very tempting to believe that some sort of shadowy group is singularly responsible for why everything sucks.
What’s an example of coarse-grained control?
What did Facebook do around the 2020 election? It put a blanket ban on political ads and restricted people from forwarding any link to more than five people. That’s not the work of a company that has things under fine-grained control.
Then there’s hate speech. I can go on Facebook and find a bazillion antisemitic sites within minutes. There’s no financial incentive for Facebook to host this stuff. They just can’t get rid of it.
Are you saying we’re not going to get very far if we expect companies to better classify misinformation and disinformation?
Yes, especially because there’s no centralized authority that enough people will see as being competent to make that decision. COVID-19 is the big example here. All those efforts to convince people to take vaccines did not move the needle for 30 percent of them. It didn’t matter what you put in front of their faces.
Is this because society is polarized?
We’re beyond polarized, which implies there are only two groups. We’re factionalized. Because the nature of a meganet is to group like-minded people with each other, you end up with self-reinforcing subcultures that will tune out anything that doesn’t validate their shared narrative. In the meganet era, the 20th century model of mass media is dead. There’s no trusted central authority who can say “this is good” and “this is bad.”
How, then, can we tame meganets?
Accept the limits of control, and don’t try to monitor everything on a microscopic level. Try general fixes that aim to slow down viral content regardless of who is speaking. The interventions shouldn’t target specific types of content but should instead focus on things like how often people can post and how many people they can forward information to.
Are you saying that anyone, even a non-contentious person, should have their reach limited like a controversial politician?
Your question assumes there’s some consensus about who is highly contentious. I think that’s part of the problem. By the time you even make that determination, it’s too late: the highly contentious person isn’t going to be identified until their reach has already gone really far. If you don’t want virality to explode, you need to slow down information to minimize its spread.
So, even if Tom Hanks is America’s Dad and people widely consider him to be benevolent, his ability to rapidly communicate to a vast audience should be limited?
We need to begin from the assumption that filters are in place and algorithms already limit America’s Dad’s reach. He isn’t completely free to communicate. We just don’t know exactly how he’s limited.
Should any information be allowed to quickly go viral?
Yes, major world events and emergencies like a big earthquake. That’s manageable.
But how do these exceptions avoid the classification problem? Won’t there be too much disagreement about what should count?
If you can get 95 percent of the people to agree on something, then it’s OK. Do 95 percent want to know if there’s a terrorist attack in New York? Yes, they do, and so you can let that information through. There are still matters of overarching public concern and things that are indisputable. Not many, but there are some.
What’s the incentive for social media companies to change their ways when virality drives engagement?
Everybody is miserable. People are getting sick of social media, and the changes I propose might make online life more pleasant.
Evan Selinger is a professor of philosophy at the Rochester Institute of Technology; an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity; and a scholar in residence at the Surveillance Technology Oversight Project. Follow him on Twitter @evanselinger.