There are approximately four ideas for how society can improve its tech giants: privacy rights, antitrust enforcement, a new kind of tax on “tech” (since tech is centralizing a new kind of power and wealth), or a new way for individuals to participate in a tech economy (one that would spread the wealth to all the people who created that economy, such as by paying people for their data).
I’ve come to favor the fourth option, because the first three don’t directly address the worst problems.
People from all over the political spectrum are furious with the current batch of social media companies, which I’ll define to include Google and its subsidiary YouTube. The primary discomfort generally reported is not with economic power or data insecurity, though those issues also trouble a great many people. Instead, there’s a primary sense of the world being darkened, of the worst corners of human nature being amplified at the expense of our sanity and survival.
Republican Representative Devin Nunes is suing Twitter for over a quarter-billion dollars over nasty tweets that target him. Let us not forget, however, that the massacre in New Zealand was designed for Facebook streaming by an individual who was radicalized on social media. High-tech darkness amplification is not partisan, but universal.
Those who complain the most must also spend the most on the companies they complain about. President Trump has outspent all the Democratic presidential contenders combined on Facebook, even as he complains about being maligned on the platform. Even Elizabeth Warren must spend money on it.
Tech platforms uniquely influence who gets to say what to whom and how loudly.
Since the prime directive for such a company is to accelerate “engagement,” the most piercing and annoying speech will tend to be the most favored. Thus, the most irritating people, those who trigger the deep emotions related to the primal behaviors of fight or flight, tend to be amplified the most. The nastiest side of human nature becomes dominant.
Nice people are amplified too, of course, but they tend to become the fuel for a higher-volume nasty backlash. Black Lives Matter, which I supported, added data to the system that was used automatically and perversely to identify, introduce, and spur on a renewed KKK and neo-Nazi movement in the USA, for instance. That was engagement amplification in practice.
There are endless jeremiads about how the current nature of networking is bringing out the worst in people, but these usually end with a tragic sense of resignation. The problem is the core business model of companies like Facebook and Google. It’s too late, runs the familiar conclusion; we’re stuck from now on with our worst selves forever.
But even that isn’t the worst of it. The same companies proudly assert that they are in a race to dominate artificial intelligence. All the data gathered from the people who are being made nasty (through the process of engagement) is being used to train the AIs that will put those same people out of work. In the meantime, the soon-to-be-obsolete humans can get temp work at tech companies like Uber, which seem optimized to get rid of the people as soon as AIs can take over.
The crisis therefore goes beyond emotions and omni-defamation to core spiritual identity. At a recent gathering of high school students, I was asked the darkest question I’ve ever heard from a teen: “If AI is going to take the jobs, why did our parents have us? Why are we here?”
Fanatics like the New Zealand shooter cling to blood and soil; they perceive no other option for finding validity. Everything else has turned into a meme.
It is impossible to direct a social media company to block content that inspires existential dread. There is no way to define that well enough for an algorithm, a human moderator, or a court of law.
Perhaps a change in the business model could help. If data is the new oil, and data comes from people, why not pay people for their data? Free online video is more often sadistic than paid video. Maybe paying and being paid can be an avenue for improving civilization.
What if people were owed money for the use of their data when they were targeted by a political ad, enough that such targeting was no longer a viable business model? What if people could earn money from contributing data to AI? Might they not develop a justified sense of pride in helping to program the robots? Might the data and the robots not perform better once people are awakened to their new roles? Might not the economy expand greatly once we admit that a lot of people are productive in new ways? What if people find it easier to find meaning in a world that tangibly values them?
If the business model of companies like Facebook is the core problem, then surely the way out of our mess must be to change the business model. Currently, the model is that all human activity on networks is financed by third parties who hope to influence the immediate users. How can that model lead to anything other than a world optimized for manipulation and unreality?
Just as users pay for Netflix, engendering a new era of “peak TV,” they’d start to pay for a new era of peak social networking that doesn’t amplify darkness. (Those who cannot pay would be supported by new public services analogous to public libraries.) The same users would accumulate a wide and growing array of royalties from their data.
The idea makes enough sense that certain tech companies have tried to discredit it, even though it isn’t yet prominent in policy circles. Facebook and Google have both stated that data from individuals isn’t worth much, but they only say that in special settings. When it comes to arguing for stock value and market caps, the data race merges with the AI race and is trumpeted as the most valuable aspiration in the world.
We can’t expect an online utopia, any more than we can expect an offline utopia. People will always be annoying to one another.
But we can and must demand better from tech companies. Tech had until recently been the last bastion of optimism. We could all agree on that one bright spot in our future, but now we only see a cliff we are stampeding towards.
If we want to regain hope, meaning, and even a slight capacity for kindness when we disagree, we must reform our tech world.
Jaron Lanier is a computer scientist, musician, and Microsoft researcher. The opinions expressed here are his own, not Microsoft’s.