Late last month, Google—the world’s most powerful tool for pulling information from the immense archives of the Internet—took its first steps toward helping people erase their pasts. To comply with a landmark ruling from Europe’s top court, Google posted an online form that lets its European users request the removal of links to possibly damaging personal information from their search results.
The basis for the court’s decision is a legal concept known as “the right to be forgotten,” which has taken root in Europe as a result of widespread concern about the potential for the Web to keep people forever chained to their past mistakes and failures. And though that right does not exist in the United States, the worry behind it very much does. The more information is collected about us and stored—not just old Facebook party pictures, but data residing in digital archives of all kinds, from those belonging to government agencies to our own inboxes and calendars—the higher the likelihood that it could be misused, publicized, or just discovered at the wrong time. And the more it piles up, the more it starts clogging the systems that use it. As Elizabeth Churchill, an expert on human-computer interaction who has studied human and digital memory, wrote, we are “living in a world of digital bloat, our untamed and insecure data strewn all over the place.”
As this mountain of data continues to grow, experts have begun trying to figure out ways to contain it—essentially, to take a huge system designed to store and retrieve information, and teach it to forget. Some of the efforts, including the European court’s recent decision on Google, are in the realm of the law, while others involve technological solutions. Taken together, they reflect a provocative idea that’s being embraced by technology thinkers and designers: that, as one researcher in the field put it, “forgetting should be more integrated into digital systems.”
Behind this line of thinking is the emerging knowledge that the act of forgetting can be as powerful and necessary a tool as remembering. And as the Internet grows exponentially richer and more informed about us every year, and more deeply integrated into our lives, this tool becomes only more urgent.
One machine that already does forgetting quite well is a humble, carbon-based gadget called the human brain, which is constantly getting rid of “data” it doesn’t need, even as it processes experiences and turns them, selectively, into memories. Over the past 10 years or so, scientists have definitively shown that forgetting unnecessary information is a crucial component of thinking and learning. And while our understanding of how this process works is still too muddy to provide a blueprint for teaching our computers to do the same, it suggests that the next big advance in how our society manages information will be to make computers capable of something that happens inside our own heads, quietly, all the time.
To a world obsessed with memory—whether measured in gigabytes or in standardized test scores—the idea that forgetting could be a virtue does not come naturally. Having a powerful memory tends to make people seem smarter, better at their jobs, and more fun to talk to; they know more facts about the world and make their friends feel good by remembering little details about their lives. Above all, it’s hard to shake the intuition that retaining more of our experiences, and being able to go over them in our heads long after we’ve had them, makes for a richer, more engaged life.
Forgetting, on the other hand, usually feels like failure: Not only is it incredibly frustrating to find yourself unable to recall something you used to know, it can also be a genuine handicap in life, especially if it becomes chronic. For this reason, we tend to overlook the ways in which acts of forgetting, broadly speaking, help society function. The legal system, for instance, is designed to “forget” the crimes of children in the interest of giving them a chance to start from scratch as adults. Family life is predicated on the ability of the people involved to forget inevitable moments of discord that would otherwise eat away at any bond. Trauma victims, as well, must work to put painful memories behind them in order to not be haunted by the past.
Forgetting is also increasingly being described by psychologists as key to human cognition. As Scientific American reported in a 2012 cover story, psychologists and neuroscientists have found that forgetting the stuff that’s not worth our energy to remember can be just as important as storing the stuff that is. Benjamin Storm of the University of California Santa Cruz, who has published multiple papers on the benefits of forgetting, argued in a paper published last month that being good at forgetting outdated information is associated with thinking more creatively. Melonie Williams Sexton, a postdoctoral researcher in cognitive psychology at Vanderbilt University, recently published a study demonstrating that test subjects who were shown six objects and asked to forget three of them were more likely to remember the other three and were able to describe them more vividly.
The parallel between human forgetting and its digital equivalent may not be perfect, but it’s instructive nevertheless. Think of the last time your computer cried out for help when you filled its hard drive with too many MP3s and TV shows. The over-accumulation of data—not just on our own computers, but in the companies that track and analyze our behavior, the online archives of the websites we read, and so on—can easily become a liability.
“It’s like having a garden,” said Churchill. “[Pruning it] is a necessary, ongoing process, and typically, we have built very poor tools and services for [doing so].”
When it comes to the Internet, we’re dealing with a garden of infinite acreage being tilled by millions of gardeners, most of whom are too busy adding to the vegetation to spend any time pulling out weeds. In effect, what we have on our hands is a piece of technology with the potential to radically expand our access to the past—which sounds great, until you think about how much past there is, and what lies there.
One of the first to formally sound this alarm was Oxford University professor Viktor Mayer-Schonberger in his influential 2009 book, “Delete: The Virtue of Forgetting in the Digital Age,” in which he warned that, if left to its own devices, the Web would force everyone in our society to live in fear of being judged in perpetuity for every single thing we do and say. At the time, it wasn’t much more than a cri de coeur deployed with the hope of provoking reform. But recently legal scholars, computer scientists, cognitive psychologists, and even tech startups have begun seeking ways to put the notion into practice.
Many proposals focus on law and policy. One of the leading thinkers on the topic is Meg Ambrose, an assistant professor in communication, culture, and technology at Georgetown University, who is working on a book about “digital oblivion.” In her view, the court system is the best vehicle to adjudicate and enforce people’s requests to have information taken down from the Web. David Hoffman, the global privacy officer at Intel, has taken a slightly different tack; in a blog post last spring, he proposed a new regulatory body made up of government officials and industry representatives that would serve as the “central point of contact” for such people, and would work out universal guidelines for deciding which information should and shouldn’t be obscured. (Hoffman also suggested putting the FTC in charge of making sure that “data-controllers” like Google actually abide by the agency’s decisions.) Benjamin Keele, a librarian at Indiana University’s McKinney School of Law, has argued for a law requiring companies to put forth legally binding guarantees about how long they plan to hold onto customer data before destroying it.
In all of these schemes, the role of government, or some public oversight, is key: Ambrose points out that while the European decision has been seen as a blow to Google, forcing it to comply with a court order, it actually kept power in the company’s hands by allowing it to decide what counts as a legitimate claim by its users.
Of course, a legal process for taking information down can only ever serve as a means of redress, a way for motivated and informed people to push back. The trouble is, most of us just aren’t that vigilant, which is why some people believe that true protection from data permanence will start with the technology. This is the approach favored by Mayer-Schonberger, who argues that pieces of digital content—including e-mails, photos, etc.—should be programmable with “expiration dates” that cause them to self-destruct at the appointed time. He is a particular fan of communication apps like Snapchat and Frankly, which allow people to send each other photos and text messages that disappear after a few seconds.
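The expiration-date idea is simple enough to sketch in a few lines of code. The following Python fragment is purely illustrative—the class and method names are invented for this example, not drawn from any real app—but it shows the core mechanism: content is stamped with a lifespan at creation, and any attempt to read it past that date behaves as if the data no longer exists.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ExpiringItem:
    """A piece of content stamped with an expiration date when it is created."""
    content: str
    expires_at: datetime

    def read(self, now: Optional[datetime] = None) -> Optional[str]:
        """Return the content while it is still valid; afterward, treat it as forgotten."""
        now = now or datetime.now()
        return self.content if now < self.expires_at else None

# A photo set to "self-destruct" 30 days after posting:
photo = ExpiringItem("party.jpg", datetime.now() + timedelta(days=30))
print(photo.read())                                    # within its lifetime: "party.jpg"
print(photo.read(datetime.now() + timedelta(days=31)))  # past expiry: None
```

A real system would also have to ensure the underlying bytes are actually destroyed, not merely hidden—the hard part that a sketch like this glosses over.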
A more offbeat proposal was put forth by a University of Manchester lecturer named Martin Dodge, who argued in a 2005 paper titled “The Ethics of Forgetting in an Age of Pervasive Computing” that computer memory should be patchy—equipped with technology that, like human memory, gradually fades or loses details. Dodge’s reasoning is that computer memory is so “ubiquitous and merciless” that it could be made more humane while still remaining useful if it worked a little less well.
As the example of your MP3-clogged computer suggests, cluttered memory isn’t just a problem for individual privacy: It can also impede the very functions it’s supposed to improve. Archives, in other words, can actually become worse, not better, as they swell in size, because specific pieces of information become so hard to find as to be useless. “It’s like that classic hoarder problem, where you are looking for something, but you can’t get it because it’s in that cupboard that’s so jammed,” says Churchill.
One unique effort to solve this problem is taking place in Germany, where a team of researchers is working with the L3S Research Center in Hannover on a piece of software they have cleverly named “ForgetIT.” Their hope is to alleviate the side effects of what they call the “‘keep it all’ approach in our digital society,” by scouring digital archives for unimportant content and setting it aside, condensing it, or deleting it. The term that ForgetIT researchers Nattiya Kanhabua and Claudia Niederée are using to describe this process is “managed forgetting”—and while they are only starting to figure out how it might work, the basic idea is that the software will learn, over time, what each user considers relevant or desirable content.
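In rough outline, “managed forgetting” amounts to triage: score each item’s relevance, keep the important material, condense the middling, and discard the rest. The sketch below is a hypothetical simplification, not ForgetIT’s actual design—the function, field names, and thresholds are all invented, and in a real system the scores would be learned from each user’s behavior rather than supplied by hand.

```python
def managed_forgetting(items, keep_threshold=0.6, condense_threshold=0.3):
    """Partition archive items by relevance score into keep / condense / forget.

    `items` is a list of dicts, each carrying a precomputed relevance
    `score` in [0, 1]. (In practice the score would be learned per user.)
    """
    keep, condense, forget = [], [], []
    for item in items:
        if item["score"] >= keep_threshold:
            keep.append(item)           # preserve in full
        elif item["score"] >= condense_threshold:
            condense.append(item)       # e.g. keep only a thumbnail or summary
        else:
            forget.append(item)         # candidate for deletion
    return keep, condense, forget

archive = [
    {"name": "wedding.jpg", "score": 0.9},
    {"name": "blurry_duplicate.jpg", "score": 0.1},
    {"name": "receipt_scan.jpg", "score": 0.4},
]
keep, condense, forget = managed_forgetting(archive)
# keep: wedding.jpg · condense: receipt_scan.jpg · forget: blurry_duplicate.jpg
```

The hard research problem, of course, is not the triage itself but computing a score that actually matches what a given person will someday wish they had kept.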
Currently, the ForgetIT team, which includes experts in cognitive psychology as well as information management and has backing from IBM, is experimenting with an early version of the software to process collections of digital photographs, which tend to pile up in numbers so large that people never actually end up going back and enjoying the good ones. They envision a benign version of the operating system from the movie “Her,” which quickly scans thousands of the protagonist’s e-mails and calmly informs him she’s saved only the 86 worth keeping.
The goal of something like ForgetIT isn’t just to forget stuff, but to forget the right stuff—and that, according to Ambrose, is the key challenge for the whole field.
For Ambrose, the most salient fact about the digital world’s memory isn’t how permanent it is, but rather how woefully arbitrary. It’s a misconception, she says, that the Internet is a “cruel historian” that never forgets anything: In fact, servers and sites are constantly going offline, taking all their data with them, and pages are deleted en masse whenever websites are redesigned or shut down entirely. According to one study, 59 percent of content disappears from the Web within a week of being posted, and fully 85 percent within a year. Yes, your embarrassing pictures could live on forever—but your valuable published work, or an archive a lawyer might need, could just as easily vanish overnight.
What the Internet really is, Ambrose says, is a “lazy” historian, one that can’t be bothered to figure out what matters and what doesn’t. Instead of aiming to make the digital world more forgetful overall, she suggests, we must strive to make it more deliberate about what it forgets.
Which brings us back to the human brain, which has the nifty capacity to flush information that we don’t need based on how often we retrieve it. The brain is actually capable, according to new research, of forgetting things “on purpose” in order to rid itself of clutter and make the process of remembering more efficient. It is a hard drive that clears out its own MP3s. “Humans tend to forget very quickly things that are not relevant for them,” said Kanhabua, of ForgetIT, who considers the brain a deliberate inspiration for the architecture of their software.
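In software, forgetting based on how recently something was retrieved is a familiar pattern: it is essentially how a least-recently-used (LRU) cache works. The minimal Python sketch below is an analogy, not a model of the brain or of ForgetIT—the class and its names are invented for illustration. Each retrieval “reinforces” a memory by moving it to the back of the queue; when capacity is exceeded, the stalest entry is quietly evicted.

```python
from collections import OrderedDict

class ForgetfulStore:
    """A store that, loosely like human memory, forgets what is retrieved least recently."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def remember(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)          # most recent entries live at the back
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # quietly forget the stalest entry

    def recall(self, key):
        if key not in self._items:
            return None                       # forgotten
        self._items.move_to_end(key)          # each retrieval reinforces the memory
        return self._items[key]

store = ForgetfulStore(capacity=2)
store.remember("a", "first memory")
store.remember("b", "second memory")
store.recall("a")                        # reinforce "a"
store.remember("c", "third memory")      # over capacity: "b", unretrieved, is forgotten
print(store.recall("b"))                 # None
print(store.recall("a"))                 # "first memory"
```

The analogy is imperfect—the brain forgets by relevance as well as recency—but it captures the core idea that what goes unretrieved is what gets cleared away.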
To her, and the other thinkers wrestling with this challenge, our global network of computer memory is an astonishing achievement that’s also a problem only halfway solved. The tally of what we save has grown impressive, random, and slightly scary. Now it is time to address the why. “That means being thoughtful both about what we keep and what we delete,” Ambrose said, “and it always has meant that. Preservation is not a one-way street.”
Leon Neyfakh is the staff writer for Ideas. E-mail firstname.lastname@example.org.