Something fundamental happened a few months ago. GPT-4 was a pretty significant wake-up call for anyone working in AI and computer programming. People didn’t expect that large language models would show these emergent behaviors and do computer coding and reasoning.
But around 2016 or 2017, when I was working at DeepMind, the company that built an AI that beat a human champion at the game of Go, I started having a gut feeling that things like this were going to happen. I thought it was going to happen in 10 or 20 years, not by now, but it made me go through a disillusionment period. I started to feel like “Why am I doing any intellectual activity at all, if it’s being done better and better by a machine? What’s the point?”
The thinking mind — the part of you that says “I know something” or “I can figure this out” — gets really scared. Intellectual thought gives you a sense of security because you’ve been educated for a long time, and your family, your children, your job, your parents, your friends, evaluate you on your intellectual identity. I mean, that’s the society we live in. So when we are talking about things like artificial general intelligence, that identity is pierced. There’s a crack in that identity.
But it turns out that if you actually completely cast aside that identity, it’s super blissful.
I felt I needed to find something deeper. I meditated a lot. I experimented with psychedelics and with everything else that could modulate my mind. I went to the Himalayas, hung out with yogis, the whole gamut of things. And I realized: Even if you lose all of your intellectual pride, it’s actually not a big deal.
There are a few reasons for that. One is that the whole universe is brimming with intelligence. For example, I watch ant farms as a meditation practice — for hours and hours, just staring at them — and the ants are more intelligent than I am on some dimensions. We are so stuck in our linguistic and perceptual intelligence that we forget that there is intelligence everywhere, and there are infinitely many spaces to be explored. I’m not worried about running out of things to think about. There are infinitely many problems to solve.
I think the mystical side of life will be incredibly important. The humanities are going to be incredibly important. The humanities have been de-emphasized, right? Not as many people are studying them as before. But that will flip.
We need new narratives
My company works with a lot of programmers, and I’ve already kind of made it a rule that they have access to coding assistance from GPT. All the programmers, myself included. For example, I wanted to develop a feature in some software. But instead of calling someone — which would have cost me a couple thousand dollars — I sat down and in an evening, I just whipped it up. Using GPT-4.
You tell GPT you want a system that does x, y, and z and it’ll write you initial code. You run that code and it’ll give you an error. So you paste the error into GPT and say, “Fix it.” And it’ll fix the error. And then you say, “Oh, it actually didn’t do this thing so well; this was wrong.” Then it’ll say, “Oh, I apologize” — it’s too polite sometimes — and it’ll basically make some changes and rewrite the code. This works for a lot of types of code that most people write — for data handling, processing, moving things around. It’s almost like you’re iterating with it, back and forth. I’m able to get more done.
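That back-and-forth loop can be sketched in a few lines of Python. This is only an illustration, not code from my company: `ask_llm` is a hypothetical stand-in for a real chat-model API call, stubbed here so the example runs offline and "fixes" one known bug.

```python
# A minimal sketch of the "write, run, paste the error back, fix" loop.
# `ask_llm` is a hypothetical stand-in for a real LLM API call (e.g. GPT-4);
# it is stubbed so this demo runs without any network access.

import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call. Stubbed: repairs one known bug for the demo."""
    if "NameError" in prompt:
        # A real model would rewrite the code based on the error message.
        return "print(sum([1, 2, 3]))"
    return "print(sum(nums))"  # first draft: buggy, `nums` is undefined

def run(code: str) -> tuple[bool, str]:
    """Run a snippet in a subprocess; return (ok, stdout-or-stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True)
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr

code = ask_llm("Write a script that sums the numbers 1, 2, 3.")
for _ in range(3):  # iterate: run it, and if it fails, feed the error back
    ok, output = run(code)
    if ok:
        break
    code = ask_llm(f"Fix it. Error:\n{output}")

print(output.strip())  # → 6
```

The point of the sketch is the shape of the loop: the human (or a script) only relays errors back to the model until the code runs, which is exactly the iteration described above.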
I don’t think it’s going to entirely replace the programmer’s job — I think if anything, we’ll be able to create massive things that are impossible now. We might build very complex software that has been incredibly hard to build because you would need a lot of teams and a lot of people. For example, I think with generative AI anyone will be able to create stories and interactive worlds without having art or coding skills. You might just see smaller groups of people creating lots of value. Like groups of five, 10 people, doing things that 100 people used to do. Even individual entrepreneurs will be able to create a lot. I think what we’re going to be able to create is a Cambrian explosion of coding.
But I think this year and in the year after, there’s going to be a pretty big “aha” moment, across society. Coding is only a fraction of it. Conceptual artists? I think their whole world has changed in the last year. Writing? You’re very familiar with this. I mean, these things are passing bar exams and GREs. It’s getting a better score than I did.
So if you take all these things together and just roll forward a few months or years, I’m pretty sure people will go through what I have gone through mentally. There will be a lot of existential dread. People will fight that.
That means we are going to need new narratives. We need to think of these AI tools like “OK, they’re doing many things that I used to do, but there are many things that they can’t do. So how do I connect it to the human side of me?” If you can’t take pride in writing a piece of code when an AI can write it much better, then you’ll have to think at a much higher level. We can either embrace the freedom AI will offer us or continue doing things in existing ways. Once we get over the chaotic period of this rapid change, we will experience the lightness of sharing our burdens with AI.
My daughter is 3 years old. Whenever I go on these weird crusades, whether it’s meditating or going into the mountains, I take her. I also show her AI images all the time, cartoons and stuff. So I will try to show her what AI is doing, and also the flip side: Assume that all intellectual activity is dead but also assume there are infinities of explorations to be done. So now, how do you develop a psychological identity in that world? Not identifying yourself with things that technology is going to do by the time you’re 30 is a good idea, because most things are going to change.
At least, that’s how I’m thinking about it.
Tejas Kulkarni is cofounder and CEO of Common Sense Machines, based in Cambridge. Brian Bergstein is editor of Globe Ideas.