IDEAS

The line between human and machine begins to blur

People talk about computers as if they’re alive and talk about themselves as if they’re computers. Meghan O’Gieblyn explores what that means.

Meghan O’Gieblyn, author of “God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.” Courtesy of Meghan O’Gieblyn

The author Meghan O’Gieblyn, who writes the Cloud Support advice column for Wired, once studied theology at a fundamentalist college. Although she no longer holds a Christian worldview, she has noticed that people in the supposedly secular world of computing often discuss minds and machines in ways that echo religious themes, usually without acknowledging it. O’Gieblyn explores these connections in her new book, “God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.” Our conversation has been condensed and edited.

You observe that “all the eternal questions” that traditionally preoccupied philosophers and theologians “have become engineering problems.” What do you mean by that?

I’ll start with a specific example. When I was studying theology in college, one of the big debates was free will. Can humans freely choose their actions? Or are all of our decisions preordained by God? This question is emerging now in artificial intelligence. Can an intelligence that we create evolve its own goals and objectives? There’s already some evidence that machine learning models can develop strategies and solutions that their designers didn’t predict.

And the question of free will also comes up a lot in conversations about predictive analytics. We have algorithms that can process so much data about us that they can predict our behavior very well already. It’s been proposed that in the future, algorithms might be better at predicting our actions than we ourselves will be. This possibility raises the question of how free we really are.

On the one hand, digital technologies make us creators, and we have this kind of godlike role in bringing these forms of intelligence to life. On the other hand, the most advanced of these machines become so complex that they seem almost omniscient, like deities. One starting point for my book was noticing that a lot of theological language and religious metaphor had been creeping into tech criticism.

Is the simulation hypothesis — the idea that our so-called reality is a computer-generated illusion — an example of what you’re talking about: the collision between the technological and the theological?

The simulation hypothesis is an argument from design. It’s a form of creationism that holds that our world is a software program created either by some higher species or, in some versions, by our future descendants. High-profile proponents of the theory include Elon Musk and Neil deGrasse Tyson.

When proponents defend the theory, they often say it’s not a religious narrative because it doesn’t appeal to anything supernatural. But it still raises a lot of the same questions that have traditionally been explored within a religious framework. For one thing, it implies the world has a purpose of some kind. It raises the question of whether there’s going to be an afterlife — whether we’re going to be pulled out of the simulation at some point into some other level of reality. The theory has attracted a lot of attention from philosophers and theologians, some of whom have written about simulation ethics — how should you conduct yourself if you want to maximize your chances of being rewarded by the programmers? And then there’s the problem of evil within the simulation. What does the existence of suffering say about the programmers and their values? Or is evil just a glitch in the matrix?

What’s the downside if scientists and engineers compare the human mind to software?

In the 1940s and ’50s, cybernetics pioneers put terms like “learning,” “understanding,” and “thinking” in quotation marks to signal that they were speaking figuratively. Today, those terms are often taken as literal descriptions of what machines are doing. That changes how we view our own minds. In everyday speech, often without thinking about it, we say we’re “processing” new information or that we have to “retrieve” memories from our brains. This is very misleading. We don’t have a hard drive in our heads, and that isn’t how memory works.

The metaphor also leads to a lot of mystical thinking. If you say that the mind is software, you assume it’s an abstract pattern of information without any material reality, sort of like the soul. This is why we have transhumanist theories holding that the mind can somehow be transported outside the body: just as we transfer software from one machine to another, we could upload our minds to a supercomputer or to the cloud. But there’s really no evidence that the mind can be separated from the body. It’s a deferral to the old legacy of mind-body dualism in Western philosophy.

After a robotic dog was delivered to your home, your husband quickly went from denying it was alive to talking to it as if it had deliberately stalled going to bed at night. There’s a psychological explanation for why he acted as though the robot had agency and preferences: if something is sufficiently lifelike, our brains treat it as possessing the qualities of living beings. But are there alternative ways of looking at the situation that we should take seriously, perhaps more spiritual ones?

For many centuries, this psychological impulse was understood through a spiritual lens, in worldviews where the category of personhood was much larger than it is in the West today. People assumed that rocks and trees and even man-made objects had some kind of consciousness, or that humans could maintain social relationships with them. In a way, the rise of social AI and the pervasiveness of smart technologies is returning us to some version of that way of being in the world.

I think the question of what’s conscious and what’s not is going to become very blurry. And I think there’s a longing for those distinctions to break down. Many of us are exhausted by the view that humans are at the center of the universe, or that we’re sort of the only self-aware beings. I think there is something deep inside of us that wants to see the world as being alive.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Follow him on Twitter @evanselinger.