The Boston Globe

Why Facebook and Google should pay you for your data

Daniel Hertzberg for the Boston Globe

We all work for Mark Zuckerberg. With every adorable kitten picture we post. With every “like” or “haha” we click. With every stupid “Which Brady Bunch Sibling Are You?” personality quiz we take.

We are the grunts who have built Facebook’s online empire — and Google’s, too, our clicks and posts feeding detailed psychometric profiles the Internet giants sell to advertisers for huge sums of money.

Now, a growing number of economists and Silicon Valley solons are asking the inevitable question: Isn’t it time we got paid for our labor?

Facebook and Google would argue we’re already getting paid — with free search, free maps, and a steady stream of updates on Uncle Tony’s patio project. But McDonald’s employees would never accept free burgers as their only compensation. Why should the Internet-era economy work any differently?


Now, to be clear: for the moment, your data labor is probably worth only a few hundred dollars a year. Do the math: While the online Goliaths are worth hundreds of billions of dollars, they’ve also got billions of us working for them.
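That back-of-the-envelope division can be sketched in a few lines. The figures below are illustrative round numbers, not reported company data — the point is only the order of magnitude that falls out when billions in revenue are spread across billions of users:

```python
# Rough estimate of what one user's data labor earns a platform:
# divide annual ad revenue by the number of users supplying the data.
# Both inputs are hypothetical round numbers for illustration.

annual_ad_revenue = 40e9   # assume $40 billion per year
monthly_users = 2e9        # assume 2 billion users

revenue_per_user = annual_ad_revenue / monthly_users
print(f"Revenue per user per year: ${revenue_per_user:.0f}")
```

Even generous inputs land in the tens-to-hundreds-of-dollars range per user, which is why today’s payout would be modest.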

Yet if the artificial intelligence economy reaches full flower — causing revenue per user to skyrocket — our work could be worth exponentially more. Training a system to perform even a relatively humble task, like facial recognition, means feeding it reams and reams of data — millions of tagged photographs from our backyard barbecues, family reunions, and college graduations.

Of course, Facebook gets us to tag our pictures for free, because it’s easy and it’s fun and we don’t know any better. But if the Internet giants started paying us for our work, they could demand more sophisticated labor: recording and transcribing conversations to improve voice recognition software, for instance, or labeling images of human disease to aid in AI diagnoses.

In their new book, “Radical Markets: Uprooting Capitalism and Democracy for a Just Society,” law professor Eric Posner and Microsoft researcher E. Glen Weyl estimate that, if the AI economy takes off, a family of four could earn $20,000 per year for its data.


That’s a healthy supplement to any family budget. But it could be something even more. If AI wipes out millions of jobs, as some fear — if we are, in fact, training the robots that will replace us — then data payments look like more than extra cash in the bank. They look like an important blow for economic justice.

A shift to compensated online labor could also break the Google-Facebook stranglehold on the Internet. If competitors start paying for our data, they may be able to lure us away from the duopoly and into a more variegated Web.

Of course, embracing the idea of “data labor” could have some strange implications. We’re already wary of the Internet’s viral culture. What happens when we incentivize it on a massive scale?

And how about the dignity of work? Is transcribing conversations for Siri, or annotating legal documents for a lawyer-bot, an honorable vocation? How should we feel about feeding the machines that would wipe out entire professions? Are data payments, at bottom, the bribe we take for betraying our own kind?

But if you step back a bit — if you put these payments in some historical perspective — they start to look a little different. Not quite as threatening. Maybe even empowering.

DATA WORK, POSNER and Weyl argue in “Radical Markets,” is just the latest in a string of underappreciated labors. “Women’s work” — managing the home and raising children — has long been considered a private endeavor, unworthy of compensation or legal protection. Before Internet-era disruption laid record companies low, media entrepreneurs got rich off of artists who, by and large, received meager pay.


It didn’t have to be this way online. The earliest designers of the Web actually built in systems that allowed for payment. Every piece of information could be traced to its origin, every alteration duly recorded, and everyone properly compensated. In France, a proto-Internet service known as Minitel included a micropayment system. And there was even talk, at one point, of attaching postage stamps to e-mail.

What would eventually become the mainstream Internet, though, started not as a commercial enterprise, but as a collaborative effort among government, the military, and academia. The emphasis, as Posner and Weyl write, was on lowering barriers to participation, not incentivizing and rewarding labor.

“Information wants to be free,” Silicon Valley declared, in a near-perfect distillation of its lefty-libertarian ethos. But eventually, Google and Facebook swooped in, stockpiling all that free information — all that free labor — and birthing a sort of online surveillance state that’s something like the opposite of what the Internet revolution promised.

“It’s probably the most precise failure of idealism in history,” says Jaron Lanier, a dreadlocked Microsoft researcher widely viewed as a sort of Silicon Valley seer. “There have been a lot of revolutions that backfired. But I can’t think of another one that created an outcome that is so precisely the opposite of what it intended to create.”


Lanier, who first raised the idea of data micropayments in his 2013 book “Who Owns the Future?,” doesn’t view them as a restoration of the revolution, exactly. But he does see a path to a far better Internet.

A chance to humanize it.

Take artificial intelligence. Many of us imagine a sort of free-standing, vaguely malevolent force — created by humans, perhaps, but ever-freer of our control. “In truth,” Lanier says, “AI is very hungry for data, and that data doesn’t come from angels, it comes from people.” We’re telling people they’ll no longer be necessary in an AI economy, he says, but that’s a lie. “In fact, we still do need the people,” Lanier says, “because we need their data in order for this thing to run. It’s just that we don’t want to pay for it.”

If we were willing to pay people for the true value of their data, he says, then we could rein in our anxiety about the robots coming to take our jobs — and give average people a real foothold in the future.

As it stands now, even the workers who are actually paid by the largest technology companies receive only a tiny fraction of the firms’ income — just 5 to 15 percent, according to Posner and Weyl. Compare that to 80 percent at big service-sector outfits like Walmart.

If AI-driven companies come to occupy a substantial swath of the economy, and don’t change their business models in significant ways, we could see labor’s share of overall income drop from its current 70 percent to something like 20 to 30 percent, Posner and Weyl estimate.


At that point, data payment moves from novel idea to political necessity.

POSNER AND WEYL acknowledge that it’s difficult to predict the course of technological innovation. It’s possible that artificial intelligence doesn’t take off as expected, and the value of our data may remain modest. That would take some of the shine off the argument for data payment, which other scholars view with greater skepticism.

Ethan Zuckerman, director of the Center for Civic Media at MIT, says data payment is an idea “that I like a lot, conceptually, and dislike a lot, practically.” When you get down to it, he says, small payments for data look a lot like what Google-owned YouTube is already doing. Users upload cooking videos or Fortnite explainers, YouTube monetizes them, and the company shares a thin slice of the proceeds with the creators.

That’s all well and good, Zuckerman says. But it’s hardly a game changer for the average Internet user, and it does nothing to get rid of the online surveillance model that we’ve grown so wary of.

It doesn’t stop YouTube from mining our videos for data and selling what they find to advertisers. And it doesn’t prevent the next Cambridge Analytica scandal, which saw a consultant to Donald Trump’s presidential campaign improperly gain access to the data of about 87 million Facebook users.

“To me, it’s kind of a broken system that we’re trying to make a little less broken,” he says.

Andrew Keen, the acid-tongued author of “The Internet is Not the Answer,” goes even further, calling data payment a philosophical error. This is Silicon Valley utilitarianism at its worst, he says. It suggests that “our happiness can be added up and monetized.”

“Our data is who we are,” Keen says. “It’s ourselves.” And selling that information amounts to “a kind of digital prostitution.” Sure, Google and Facebook may be harvesting some of our data already. But why sell them more?

Once we start incentivizing online activity, users might post more of the corrosive, viral stuff that everyone claims to hate but lots of people click. Tech companies could respond in worrisome ways, too. Some sites might aim to create the most addictive content they can imagine — figuring users will demand smaller payments for games they want to play anyhow.

“It’s a really disdainful, ugly idea,” Keen says, “and it will only bring out the worst in everyone.”

Keen’s answer is stronger regulation of data privacy, along the lines of the European Union’s approach, and a return to basic economics. Users of Google, Facebook, and other websites should pay for the services, he says, just like they pay for food or rent. And those services — amply compensated — should drop the surveillance model.

It’s an intriguing model. But for Lanier, the Microsoft researcher, it’s a limited one.

The best future wouldn’t be a straight subscription model, he suggests, or even a pure data payment approach. It would be a combination of both. We’d pay for services and get paid for our digital work. We’d be producers one moment and consumers the next. The possibilities for experimentation and innovation would be broader. The Internet would be better.

But if that hybrid seems too complex to users, if they’d all prefer to just pay for more online subscriptions — like they’re already paying for Netflix, or Amazon Prime, or their local newspaper — well, that would be all right, too, Lanier says. “If we can even get to the point where that’s the argument, then we’ve already solved the problem,” he says. “That’s a very luxurious debate to have, because that presumes we’ve got past our current idiocy.”

David Scharfenberg can be reached at david.scharfenberg@globe.com. Follow him on Twitter @dscharfGlobe