
Pondering a world without humans

Antihumanists and transhumanists see technology in totally different ways, but some of them come to the same conclusion: Homo sapiens’ time may be up.

The annual migration of wildebeest in Tanzania, a process that began long before humans existed. Joe Mwihia/Associated Press

Humans are at the center of most discussions about both the environment and technology. One goal of sustainability is to ensure that future generations of humans have opportunities to thrive on planet Earth. Debates about the ethics of technology often focus on how to protect human rights and promote human autonomy.

At the same time, some conversations about the environment and technology are now taking humans out of the equation. As Adam Kirsch points out in a new book, “The Revolt Against Humanity: Imagining a Future Without Us,” people in two very different schools of thought are coming to a similar conclusion: that the world might not have people much longer and might be better off as a result.

Kirsch takes readers on a guided tour of the discussions in these two camps. “Antihumanists” are obsessed with the idea that we have sown the seeds of our own demise and brought environmental apocalypse upon ourselves — possibly even deserving to go extinct. “Transhumanists” are obsessed with maintaining control and envision a future in which we use technology to become something greater than Homo sapiens and even cheat death itself.

Kirsch is a poet, literary critic, and editor at The Wall Street Journal’s weekend Review section. Our conversation has been edited and condensed.

What’s the difference between mainstream environmentalism and the version espoused by radical antihumanist philosophers and activists?

When we think of environmentalism, we usually think of people who want to save nature both for nature’s sake and for our sake — that living on a planet with radical climate change, or one that’s lost biodiversity, is bad for humanity.

The more interesting fringe argument says, “What if humanity is, by definition, essentially destructive of nature?” If humanity gets bigger and more powerful, we’ll keep creating more environmental damage, and the only way to stop that would be to end the human race.

There are only a few people who say humans should become extinct. Some predict that we will. Others say our current industrial civilization can’t be sustained and we’ll return to a more primitive agricultural or hunter-gatherer lifestyle. These are all different points of view in this universe. What they have in common is the idea that we can’t keep going the way we are.

What do these people think about the end of art, philosophy, and other valuable, distinctly human activities?

As human beings, we tend to think that the world exists for us to behold, think about, understand, and master. But some of these thinkers argue that nothing irreplaceable would be lost without us. The universe doesn’t need philosophy or any particular human mode of making sense of it. Other creatures and forms of existence can make sense of the world in their own ways, and there’s no absolute preference for humanity. There will still be rocks, wind, and oceans. Nothing indispensable would be lost if humans disappeared.

From this perspective, is going extinct our comeuppance?

There’s definitely a moralistic element to this, because only humans have moral categories. And so there’s something very paradoxical about saying that humanity should disappear. But I understand how people reach this conclusion, because they see us as immoral. Paradoxically, you could say that the most humanistic thing would be to call for humanity to disappear, because that’s the only way to achieve real justice — not among human beings but for every living thing.

How far are these people willing to go? Are they advocating violence?

A radical ecologist named Paul Kingsnorth talks about Ted Kaczynski’s famous Unabomber manifesto about the dangers of technology and says he agrees that humanity is destructive and our civilization is wrong. That doesn’t mean he condones violence or terrorism. I don’t think anyone is saying we should blow everyone up or sterilize everyone. But there are definitely people saying we should voluntarily choose not to have children, or to have fewer children. And there’s a philosopher named David Benatar who calls himself an antinatalist. He argues that it’s better never to be born at all. And he talks about how it will be bad to be alive while humanity is disappearing, but once humanity is gone, things will be better.

Why do fringe views like these matter?

Not many people make this a platform for action or writing. But the idea that the existence of humanity is a problem has a growing, intuitive appeal to many people who don’t believe in traditional religion or humanism and who have impulses toward a kind of radical idealism. The ultimate sacrifice you could make for a cause would be to sacrifice yourself.

Do you find any of these arguments convincing?

I don’t embrace these arguments. I’m trying to understand them and lay out what people are saying and thinking. I can’t imagine feeling that there are things more important than humanity or that the universe without humanity would be just as worthwhile. I find that very hard to grasp, especially because I’m a writer and a poet, and the things that I’m committed to in life are literature and ideas, which are all very human pursuits.

Adam Kirsch is the author of “The Revolt Against Humanity: Imagining a Future Without Us.” Miranda Sita

Now to the other group of people you write about: transhumanists. How do they want us to respond to existential threats like global warming?

One way to solve the problem is to end humanity. The transhumanist solution is to go beyond it. These people say that by using technology, we can either alter ourselves or create new forms of life that are superior to humanity. That will probably mean that Homo sapiens as we know it dies out or is radically reduced in its power and ability to shape the world. Our successors will be a form of life that lives much longer, has much greater powers, and is capable of different kinds of experience — possibly artificial minds that are not biological. Transhumanists love this idea because it’s a way to solve problems that humans can’t solve on their own, like interstellar travel.

Many transhumanists start from the premise that the most important thing in the universe is mind, and the only kind of mind we’re aware of right now is the human mind. They say it’s our responsibility to do something to preserve mind even after humanity disappears.

Do transhumanists accept limits on mental and physical enhancement?

There’s a strong consensus among transhumanists that enhancement is good and no one has the right to stop you from enhancing yourself. Nobody has the right for religious, legal, or moral reasons to say you can’t live for 1,000 years or have 10 arms if you want to. There’s no reason not to enhance, and many good reasons to do it. For instance, you would be able to avoid getting sick and find new sources of pleasure, and your senses would become much keener.

Is the goal of transhumanists to enable everyone or just a select few to transcend their humanity?

There’s definitely a lot of thought in this community about how a transhumanist leap forward would take place. Would it happen through the free market? Would it mean a few very rich people start modifying themselves in ways that exempt them from death or illness? There are different views about this, but most people come down on the idea that the leap is so important that it would be better for it to happen for a minority than not to happen at all.

Are transhumanists concerned about creating AI that will extinguish future versions of ourselves?

One scenario is that someone intentionally or accidentally manages to create an artificial intelligence so capable that it can break free of human control and start to develop itself in ways that we can’t understand or stop. It might be something so much wiser than us that we would not be able to interfere with it, and if it decided that we should stop existing, we wouldn’t be able to resist and possibly wouldn’t have the right to.

A scene from Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” which also wrestled with AI and humanity’s long-term fate. AP

Do you find any transhumanist beliefs persuasive?

I’m in my mid-40s. Some people think there are people alive today who will see the total transformation of humanity and will never die. I don’t think that’s true. I think I will have a normal life span and die like everyone else in the history of humanity. But 100 years from now, I don’t know. I don’t think it’s impossible.

I’m open to the transhumanist argument that the essence of humanity is projecting ourselves forward, that we’ve always been trying to make ourselves more able, capable, and powerful by using technology, and that you can’t draw the line at one point and say: This far and no further.

What’s the main takeaway you got from highlighting the counterintuitive connections between antihumanists and transhumanists?

These ideas might become important even if they don’t come true. We’re already seeing some of this in politics and culture. Some people want to shrink humanity’s footprint: have lower birth rates, use less energy, and reduce our burden on the planet. Others, probably more people, continue to believe in the old-fashioned way — that it’s good to have children, pursue prosperity, and extend growth to the whole planet to raise everyone’s standard of living. In the next generation, the division could grow into something like a radicalization of the current liberal-conservative divide.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology, an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity, and a scholar in residence at the Surveillance Technology Oversight Project. Follow him on Twitter @evanselinger.