What is a computer? Steve Jobs famously described the computer as “a bicycle for the mind” — a tool to help us remember, think, discover, and create. Computers are high-tech, universal tools; they’re so useful, in fact, that some of us spend all day in contact with some sort of digital device.
There’s another way, though, to think about what a computer is: not as a high-tech tool, but as a profound intellectual achievement. In a deep sense, the power of the computer is as much about ideas as it is about circuits. The incredible, open-ended flexibility that makes computers so powerful — and that lets us use them to figure out everything from climate modeling to “Jeopardy!” — is, in fact, the product of more than two thousand years of painstaking, hard-won intellectual progress in low-tech fields like mathematics, logic, and philosophy. Like the tide line on a beach, the computer marks the furthest we’ve progressed in a philosophical quest to understand, perfect, and extend the reach of reason.
The creation of the modern computer in the 1940s was a watershed moment in that quest; today’s super-fast computers are still essentially built on that achievement. Now, however, we’re poised to take another leap forward. That leap is the quantum computer — a computer built on an atomic scale. Though they’re still mostly theoretical, quantum computers would use individual atoms to do their computations, instead of circuits etched in silicon. Such a computer wouldn’t just be built differently — it would also think differently, using the uncertainty of particle physics instead of the rigid on/off circuitry of a modern computer.
For years, excitement about quantum computing has been growing among scientists and tech visionaries. Quantum computers, if they succeed, promise to make a whole new range of problems accessible to computers, from breaking difficult codes to unlocking complicated biological processes now out of reach for even the fastest machines. The hype has, at times, verged on science fiction, and there are still many skeptics who argue that quantum computers might be physically impossible, or at least too technically complicated to work.
In recent years, however, a series of increasingly capable prototypes has brought the future a little closer. And as that future approaches, it is also starting to attract another kind of attention: Quantum computers, some researchers argue, will help us think differently about what we can and can’t know, and forge a new understanding of how the world of logic and information connects to the material one. Quantum computing, says Seth Lloyd, a researcher at MIT, might “allow us to understand the universe in its own language” — a prospect that has energized philosophers as well as scientists.
When the modern computer was invented some seven decades ago, it drew physicists, mathematicians, and philosophers together around a single, incredible object. Quantum computing is already doing that again. If our computers embody the best we can do in logic, math, physics, and engineering, then the definition of “best” might be about to change.
For the last seven decades, computer circuits have been getting smaller and faster. At the same time, physicists have been learning more and more about quantum mechanics, the strange set of physical laws that govern very small objects, like individual atoms and electrons. By the early 1980s, physicists and computer scientists had begun to make eyes at one another. What, they began to ask, would computing look like when it got very, very small? Once a transistor was no larger than an atom, the strangeness of physics would make it very hard to program and operate a computer in the usual way. But, they realized, that might also present an opportunity — a chance to get past the limits of modern computers.
To understand that opportunity, you have to start by understanding just how different the quantum world is from the world of today’s full-size computers. The quantum world defies logic, and is hard for even physicists to understand. A tiny particle can be in two places at once; it can be in more than one “state” at the same time (for example, charged and not charged). Computers, on the other hand, operate on rigorous, systematized logic: Everything they do, at the most basic level, is built on a set of instructions that comes down to 1s and 0s. Is a switch on, or is it off?
This logic isn’t just technical. It’s the soul of the machine itself. A computer is really built on a super-logical way of thinking, an approach to problem-solving called “computability.” Computability holds that even the most complex problems can be solved by breaking them down and tackling them one simple step at a time. Its modern father was the British mathematician Alan Turing, whose “theory of computability” in 1936 offered two important insights. The theory argued that any problem with a mechanical solution — even the most irrational-looking one — could be broken down into steps; and moreover that those problems could all be broken down into the same sorts of steps, no matter how different the problems seemed from the outside. Those steps, written in the language of logic, could be so simple that a machine could do them; in fact, they could all be done by the same machine. Turing’s colleagues called the machine he’d outlined a “Turing machine.” We call it a computer.
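For readers who want to see the idea in miniature, here is a toy sketch in Python: a generic simulator that executes any table of Turing-style rules, here driven by a made-up table that flips every bit on its tape. The simulator, rule format, and state names are illustrative inventions for this article, not Turing’s own notation — the point is only that one simple machine, fed different tables, performs different computations.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Execute a rule table mapping (state, symbol) to
    (new_symbol, move, new_state), where move is -1 (left),
    0 (stay), or +1 (right). Stops when the state is 'halt'."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        # Reading past the tape's edge yields the blank symbol.
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol
        head += move
    return "".join(tape)

# A one-state machine that inverts every bit, then halts at the blank.
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip_rules, "10110"))  # -> 01001
```

Swap in a different rule table and the very same simulator computes something else entirely — which is the “same machine” insight in small.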
Today, we take it for granted that a single device should be able to do a great many disparate things. But the connectedness of those problems was a big discovery. For thousands of years, philosophers and mathematicians had looked for a way to chart those connections: When Aristotle invented logic, for example, he envisioned a total system of reasoning, in which it was always possible to get from “first principles” to even the most complex solutions. Computability took that vision and turned it into a practical tool. It gives us an optimistic way to think about the potential of human reason: Get a fast enough machine, and nothing is beyond your reach.
But if that logic and rigor give a computer its power, they also define its limits. One of them is conceptual: In 1931, the mathematician Kurt Gödel proved — logically — that any logical system rich enough to include arithmetic will inevitably contain holes, true statements the system itself can never prove. In other words, there are some things it is logically impossible for a computer program, or any logical system, to do. Other limits are more practical: Some problems, it turns out, are so complex they can’t be computed in real life, even by the fastest computer. The world’s most powerful codes are built on this notion: They rely on the fact that finding the prime factors of a very long number, as Michael Sipser, a mathematician and complexity theorist at MIT, puts it, “would take way longer than the lifetime of the universe.”
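The asymmetry behind those codes is easy to sketch. The snippet below uses trial division, a deliberately naive illustration (real cryptography and real codebreaking are far more sophisticated): it cracks a small number instantly, but the loop grows with the square root of the number, so a several-hundred-digit key would demand more steps than the universe has time for.

```python
import math

def trial_division(n):
    """Find the smallest prime factor of n by brute force.
    The loop runs up to sqrt(n); for a 600-digit n that is
    roughly 10**300 iterations -- the 'longer than the lifetime
    of the universe' that keeps such codes secure."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate
    return n  # no divisor found: n itself is prime

# Trivial at this scale: 10403 = 101 * 103 falls in microseconds.
print(trial_division(101 * 103))  # -> 101
```

Multiplying 101 by 103 is instant in either direction; it is only the reverse step, recovering the factors, whose cost explodes as the numbers grow.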
With quantum computing, today’s researchers are trying to perform a sort of jujitsu. Improbably, they want to use the uncertain, counterintuitive, probabilistic world of quantum mechanics to perform calculations — and to break through the practical limits of modern, “classical” computers.
Unlike the switches in a computer circuit, atoms, electrons, and other atomic or subatomic particles aren’t always hard and concrete; instead, they are more like clouds of probability. Get small enough, and you don’t exist here or there; you exist here and there, simultaneously. A quantum computer builds this strange kind of reality right into its hardware. In a normal computer, each tiny switch represents a “bit,” reading either “0” or “1.” The basic unit of quantum computing is called a “qubit” — an atom, electron, or other tiny particle that might be in one state, or another, or, crucially, somewhere in between. Quantum computing relies upon the fact that qubits can contain far more information than bits — not just 1s and 0s, but combinations of the two. If we could learn to harness their processing power, we could use qubits to work far faster, and do far more, than we could with a traditional computer.
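A toy model, under heavy simplifying assumptions (real-valued amplitudes only, no hardware, no noise), can make “somewhere in between” concrete: a qubit is described by two numbers whose squares give the odds of reading 0 or 1 when you finally look.

```python
import math
import random

class Qubit:
    """A toy single qubit: two real amplitudes (alpha, beta),
    normalized so alpha**2 + beta**2 == 1. A classical bit is
    just the special case where one amplitude is 1 and the
    other is 0."""

    def __init__(self, alpha, beta):
        norm = math.hypot(alpha, beta)
        self.alpha, self.beta = alpha / norm, beta / norm

    def measure(self):
        """Observation forces a definite 0 or 1; the in-between
        state survives only until it is looked at."""
        p_zero = self.alpha ** 2
        return 0 if random.random() < p_zero else 1

# An equal superposition: each shot reads 0 or 1 with even odds.
shots = [Qubit(1, 1).measure() for _ in range(10_000)]
print(sum(shots))  # roughly 5000 on a typical run
```

The model also shows why “look at them too soon, and you can change the result”: before `measure`, the qubit carries both amplitudes; afterward, only a single classical answer remains.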
Qubits, unfortunately, are very, very hard to work with. Because they’re so small, they’re easily jostled; if that happens, the computer “decoheres” and becomes a bunch of atoms in a pile. They are exquisitely sensitive; look at them too soon, and you can change the result. And they are very hard to program. One of the best demonstrations of quantum computing to date, done in 2001 in an IBM lab in California, used a huge magnet and a custom-built molecule, its seven atomic nuclei serving as qubits, to figure out a very simple piece of arithmetic: that the factors of 15 are 3 and 5.
But what tantalizes some researchers is that, compared to a traditional computer, a quantum computer would operate much closer to the way the universe works. When two atoms bump into one another, the universe doesn’t wait to mull over all the possibilities, or run a program to figure out what happens next: It gives you an answer instantly. Quantum computers, because they’re based on actual atoms rather than man-made circuits, work in a similar way: They could solve some problems in a single gulp, rather than step by step. That quantum magic requires careful, canny programming, and it won’t work for every single hard problem. But there are a handful of very practical problems quantum computers would be able to solve with breathtaking speed — in minutes, instead of eons. Chief among them is simulating the way very complicated atomic systems behave, which is a key question in medicine and engineering. Another is cryptography, which is a huge driver of research funding into quantum computers. (Indeed, much quantum-computing research money has come, Lloyd says, from “agencies with three initials.”)
That’s the dream, at least. The rise of quantum computing theory has been accompanied by a vigorous debate about whether it can work at all. It may never be technically feasible to build such computers at a large scale. Some also think that the quantum approach to computing is, in some basic sense, getting physics wrong. As Scott Aaronson, a computational complexity theorist at MIT, has written, “It’s entirely conceivable that quantum computing is impossible for some fundamental reason.”
But as it has evolved, quantum computing has become a powerful conceptual tool for thinking about some of the deep questions all of us have about the universe. If it’s possible to “program” individual atoms, does that suggest that the universe itself is a giant computer? If that’s the case, then do the laws of logic which set limits on man-made computers also apply to the universe as a whole? Are there computations it can’t perform? Paradoxes it can’t escape?
The quantum computers of the future might, like the computers of today, become powerful conceptual symbols in themselves — not just tools, but metaphors for thinking. The quantum computer is exciting, Lloyd says, because it helps us to “understand the universe, what it is, where it came from, and where it’s going.” Sipser sees quantum computers as a place where many of the most interesting problems meet: “It’s interesting philosophically,” he says. “Certainly it’s fascinating mathematically. And it has this real-world relevance.” For Aaronson, quantum computing could still be important even if it fails: If its failure reveals something new about the way we understand physics, he writes, “then that’s by far the most exciting thing that could happen for us.”
Quantum computers, in short, may become extraordinarily powerful tools — but, even as an idea, they are already vessels we can use to continue our exploration of the world of logic, math, and reason. Computers, on our desks or in our pockets, have extended our practical reach in countless ways. But they’ve also extended our conceptual reach, bringing us to places which, for thousands of years, we’ve only dreamed about. Philosophers, mathematicians, and scientists have always wondered just how far we could extend the powers of reason and logic. Quantum computing gives us a new way to stand right at the edge of what’s possible, and then to push a little further into the unknown. That’s worth it for its own sake. “It’s an abstract landscape,” Sipser says. “You just want to know what it looks like.”

Joshua Rothman is a doctoral candidate in the Harvard English department and an instructor in public policy at the Harvard Kennedy School of Government. He teaches novels and political writing.