Human consciousness is a mystery that has occupied great thinkers for centuries, from philosophers who have puzzled over the nature of the mind to biologists trying to figure out how a network of neurons can work together to create self-awareness. On one hand, consciousness is a basic trait we all have in common; but, on the other, it’s abstract, transparent, undefinable, and—worst of all, from a scientific point of view—unquantifiable. That makes it very hard to study.
Now that may be changing. Over the last few years, Giulio Tononi, an eminent neurobiologist at the University of Wisconsin, has been working on a way to quantify consciousness. He argues that it’s possible to define consciousness mathematically. More than that, he says, it’s possible to measure it. Measuring consciousness—assigning a number to your current state of awareness—might sound impossible. But, using a combination of information theory and neuroscience, Tononi has come up with a plausible way to gauge how much consciousness is unfolding inside a brain at any given time. He’s already taken some rough measurements of people who are awake, sleeping, and even in vegetative or “locked-in” states.
His work is based on a potentially revolutionary theory about how consciousness works. Tononi argues that consciousness is made of what he calls “integrated information”—information that happens to be stored in our neurons, but that could, in principle, be stored in many different media. The information is “integrated” because of the complex, hierarchical way in which it’s organized: Smaller systems integrate elegantly into bigger and bigger ones, combining their information to create something that’s more than the sum of its parts. The human brain, Tononi says, is unequalled in its ability to organize its information in a meaningfully integrated way. And it’s possible to measure the degree of integration, he argues, by using a web of electrodes to measure how a brain reacts to an external stimulus, like a magnetic pulse.
Tononi uses the Greek letter “phi” (rhymes with “eye”), which is traditionally used to denote the Golden Ratio, to represent how much integrated information a system has. And he’s designed a series of experiments to measure how much phi a brain has in different states—awake, asleep, under anesthesia, even in a coma. The measurements are rough, and so far the techniques only work on human adults. But they open the door to some tantalizing possibilities. Working with Christof Koch, another consciousness researcher, Tononi is trying to estimate phi for a common roundworm—to put a number on just how deeply it can integrate information. And it’s easy to imagine an ethically complicated future when we might be able to compare the “consciousness” of a fetus, a whale, an adult—even a computer.
Tononi’s work is in its infancy, and some researchers point out that it doesn’t help us answer what’s called the “hard problem” of how awareness can actually arise from matter. There’s no question, though, that Tononi has pushed the study of consciousness forward: It’s now up to skeptics to propose an alternative approach.
This summer, Tononi published a new book, called “Phi,” that explains the concept in a novel way—not through dry scientific exposition, but in a narrative, modeled on Dante’s “Inferno,” in which a fictionalized Galileo travels through time and space to meet with great scientists like Francis Crick, Alan Turing, and Charles Darwin, who explain many of the key ideas. The book is lavishly illustrated with paintings and scientific images, poetically written, and unashamedly speculative, weighing some of the “meaning of life” questions Tononi has been thinking about during his long career as a consciousness researcher. It’s a fascinating celebration of the complexity of the brain and mind.
Tononi spoke to Ideas from his office in Madison. (This conversation has been edited.)
IDEAS: Consciousness seems incredibly mysterious and indefinable—almost poetic. What do you mean when you say that we can define it as being made of “integrated information”?
TONONI: Well, consciousness has two really important properties....First, it’s very informative. Every experience is a very, very special one, and so it’s enormously informative, because it rules out trillions of other experiences. If you want a system that’s able to have consciousness, it must be a system that’s able to distinguish among trillions and trillions of states of affairs. And then, at the same time, consciousness is very integrated. When you have it, it’s one thing, and not subdividable into pieces. Take what you see in front of you right now—say it’s a visual scene. You have that particular experience, and dividing it in two, experiencing only the left side or the right side, makes no sense. So every experience is not only one out of many—it’s also one.
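Tononi’s two properties, informative and integrated, have a simple information-theoretic flavor: integration means the whole system’s state carries information beyond what its parts carry independently. The toy sketch below is only an illustration of that idea, not Tononi’s actual phi measure (which is far more involved); the example distribution is a made-up stand-in, using two binary units locked in an XOR-like correlation so that the pair carries a full bit that neither unit carries alone.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a distribution given as {outcome: probability}."""
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

# Joint distribution over two binary units (A, B).
# The units always agree, so the correlation lives only in the pair:
# neither unit on its own tells you anything about the joint state.
joint = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}

# Marginal distribution of each unit taken separately.
pA = {a: sum(joint[(a, b)] for b in (0, 1)) for a in (0, 1)}
pB = {b: sum(joint[(a, b)] for a in (0, 1)) for b in (0, 1)}

# A crude integration measure: the information the whole carries beyond
# its parts treated independently (this is just mutual information, in bits).
phi_toy = entropy(pA) + entropy(pB) - entropy(joint)
print(phi_toy)  # 1.0 -- one full bit exists only at the level of the whole
```

If the two units were independent instead, the same calculation would give zero: the “whole” would be exactly the sum of its parts, which is the situation Tononi says conscious systems avoid.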
IDEAS: Francis Crick, who features in your book as a character, once said that human beings are “nothing but a pack of neurons.” Does your theory support that idea, or refute it?
TONONI: It’s the “nothing but” that’s a problem. It’s not enough to be made of parts, and to be connected in a very complicated way....Once you have a way to measure integrated information, you realize very soon that lots of structures, as complicated as they may seem, actually aren’t able to integrate it.
IDEAS: So could other kinds of systems, like computers, be designed to integrate their information in the right way, and become conscious?
TONONI: In principle, yes. But the information has to be organized in the proper way, and it looks like it’s not easy to get that kind of organization. You’re not going to casually find it in some computer, because computers aren’t built with the goal of integrating information.
IDEAS: So how can you actually measure how much “phi” a brain has?
TONONI: Basically, we use transcranial magnetic stimulation to inject current into the cerebral cortex from the outside, to “knock on the brain.” Then, using electrodes, we see how the brain responds in time. We predicted that, when we probed the part of the brain that’s generating consciousness, the cortex, we would see the brain respond as a single entity, but with a large variety of states.
That’s what we found. If I knock on your brain when you’re awake, I see that the brain reacts like one big thing, but also in complicated ways. It changes rapidly. Conversely, if I do exactly the same thing when you’re in a dreamless sleep, I still get a big response, but it stays localized. It doesn’t go to the rest of the cortex. The neurons stop communicating with each other; they don’t talk to each other anymore....Instead of being one entity, the brain breaks down into pieces. We used the same approach with anesthesia, and we basically got the same result as in deep sleep. In REM sleep, when you’re dreaming, it’s more like when you’re awake.
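The difference Tononi describes, a varied, spreading echo when awake versus a stereotyped, localized one in deep sleep, has a simple computational analogue: ask how compressible the evoked response is. The sketch below is only an illustration of that intuition, not the group’s actual analysis (their later published metric is built on a related Lempel–Ziv compression idea); the two bit strings are made-up stand-ins for a binarized recording, with a periodic “sleep-like” echo parsing into fewer distinct phrases than a varied “awake-like” one.

```python
def lz_phrases(s):
    """Count phrases in a left-to-right LZ78-style parse: each phrase is the
    shortest prefix of the remainder not seen before. More phrases means the
    string is less compressible, i.e. the signal is more complex."""
    seen, phrases, i = set(), 0, 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in seen:
            j += 1
        seen.add(s[i:j])
        phrases += 1
        i = j
    return phrases

# Hypothetical stand-ins for a binarized evoked response:
sleep_like = "01" * 8            # stereotyped, periodic echo
awake_like = "0110100110010110"  # varied pattern (Thue-Morse sequence)

print(lz_phrases(sleep_like), lz_phrases(awake_like))  # prints: 7 8
```

Even on these tiny strings the varied response needs more phrases to describe; on real multichannel recordings the gap between wakefulness and unconsciousness is far larger, which is what makes a compression-based index usable at the bedside.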
The most recent study we’ve done is with [potentially] “locked in” patients. We’re knocking on the brain to see how it reacts, to figure out whether they might be conscious. Right now doctors use behavioral criteria—following their eyes, seeing if they respond to minimal stimuli—but that can be misleading. Behavior is not a good enough guide to whether you’re in there. So we used our approach, and it worked. In cases where doctors were reasonably sure there was consciousness, we got a consciousness result. In cases where doctors thought there was no consciousness, we didn’t see it. Now we’re trying to come up with a practical number for measuring consciousness in these patients.
IDEAS: If you can measure consciousness, you might find it in some unexpected places. Do you think about the ethical implications of measuring consciousness, or of ranking how conscious different beings are?
TONONI: Of course. In the study with the locked-in patients, there are no ethical issues, in the sense that it’s only good ethically: People want to know if their loved one who’s been bedridden and unresponsive, is he dead, or is there someone home? Then there are other cases.
Right now, we can’t measure phi [in nonhuman subjects]....But, take dolphins. Right now we say dolphins have big brains. Well, it’s probably good to have a big brain, instead of a small one—but is there a lot of consciousness in there, or not?....Integrated information theory teaches us that sometimes appearances can be deceiving. It’s very important to not just look at how complicated a brain looks—you need a theory to make an informed guess. Having something that tells you, there’s probably a lot of consciousness in here, or there’s probably none, is very important ethically. I’d rather have a good measure than not.
So far, many scientists have said that consciousness is a mystery, that it will never be figured out by science. If you leave it like that, then anything is possible. If you have a good rational theory, you still may not be perfect, but I’d rather have a good measure, and turn it into a legal question.
IDEAS: Does your theory of consciousness make you think of it as something more solid, more real?
TONONI: I think consciousness is a fundamental part of the universe—just as fundamental as mass, charge, and so forth, and it’s just as real. In fact, I think conscious things are more real [than material things] like stones and cars and mountains and planets. Conscious things are really real. They don’t need an external observer. They exist in and of themselves. It’s a more real form of existence, because it’s observer-independent.
IDEAS: You’ve written an unusual book—it’s filled with beautiful images, with fables and stories, and the diction is almost from the 17th century. It’s a kind of celebration of consciousness. It brings science and art together.
TONONI: I wrote it probably over eight or nine years. It took a long time. I wrote it that way because I fundamentally think that consciousness is a sort of pivotal point where the humanities and sciences meet, naturally. I didn’t want to write another book which is just talking about experiments. Consciousness is a subject where science really needs to deal with who we are, in the most fundamental sense.
Joshua Rothman is an Ideas columnist. He can be reached at email@example.com.