A few months before the strikingly well-spoken AI system ChatGPT hit the scene, some of its developers approached renowned computer scientist Stephen Wolfram to show off the app.
Not bad, Wolfram said, but could they train the chatbot to sound like him?
Wolfram, who earned a PhD at age 20 and won a MacArthur genius grant in 1981, has published millions of words online, seemingly providing plenty of material to train ChatGPT’s algorithms.
So how did it do? “This is terrible,” Wolfram recalled in an interview with the Globe from his home in Concord. “I know what I sound like, and this didn’t hit it correctly.”
Still, the lifelong researcher in computer science, artificial intelligence, and physics said he has been surprised and impressed with ChatGPT’s ability to carry on simple conversations and sound much like a person. He published a lengthy essay explaining how ChatGPT works and another on how it might interact with the knowledge discovery app he developed called Wolfram Alpha.
“I think nobody knew, including people who worked on it, that ChatGPT would work as well as it does,” he said. “And the fact that it was the end of 2022 when it got to the point where these neural net models could be successful... I don’t think anybody could have realistically predicted that.”
Wolfram has been interested for decades in neural networks, the technology underlying ChatGPT. Chatbots and similar creative AI apps, like the image generator DALL-E, work in a different realm from Wolfram Alpha, which seeks to distill scientific facts and perform calculations. The “generative AI” apps instead build statistical models and string together sentences or pictures by calculating probabilities (of what the next word should be, for example).
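That next-word idea can be illustrated with a toy sketch: a simple bigram model that counts which word follows which in a tiny corpus, then generates text by sampling from those counts. (This is a deliberately crude stand-in for illustration only; ChatGPT uses a vastly larger neural network, not word-pair counts.)

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the "training data" for our word-pair statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick a continuation, weighted by how often it followed `word`."""
    counts = following[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short sentence by repeatedly sampling the next word.
text = ["the"]
for _ in range(6):
    w = text[-1]
    if w not in following:  # dead end: this word was never followed by anything
        break
    text.append(next_word(w))
print(" ".join(text))
```

In this corpus, “the” is followed by “cat” twice and by “mat” and “fish” once each, so “cat” is the most probable continuation; the sampling step is what injects the randomness Wolfram alludes to below.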
That has led to all kinds of mistakes by AI apps, from a failure to accurately draw human hands to wrong answers about Mexican bars and financial reports. And Wolfram is not optimistic that the developers will be able to tweak their apps to eliminate the errors.
“It’s pretty challenging to stop a system like this from making stuff up,” he said.
Other AI experts agree we’re stuck with some errors. “I’ve seen so many examples of this over the last 30 years,” said David Magerman, managing partner at Differential Ventures, who holds a PhD in computer science. “What you see is what you get. You say it’s going to be amazing when it gets 20 percent better. But it never gets 20 percent better.”
That doesn’t mean ChatGPT and its brethren are useless. A web search engine might offer a list of 10 links of which only a few are relevant; similarly, a writer might use ChatGPT to brainstorm initial ideas for an article.
It all reminded Wolfram of a simple web app to create musical cellphone ringtones that he posted 15 years ago. “What surprised me was that I kept running into composers who said, ‘Oh, I find it useful as a spark of creative inspiration,’” Wolfram said. “Again, it depends on your use case, whether that kind of random injection is what you want. If you’re a self-driving car, probably random injection is not what you want.”
Not all uses of creative AI apps will be beneficial, of course. People could use a chatbot to flood websites and social media with fake comments and misinformation, for example. The response will require “a kind of computer security” that can filter out AI material, he said.
While some fear the automated programs will take away jobs, Wolfram is considering how their rise might free up humans’ time for other endeavors. He’s pondering using a chatbot trained on his past e-mails to offer quick responses to the unending stream of correspondence hitting his inbox.
That leads to an idea for a new job, which Wolfram has dubbed “AI wrangler”: manipulating and fine-tuning chatbots to generate the best answers.
“It’s not a simple matter of you change this line of code,” he said. “This is a much more complicated issue. It’s much more like animal wrangling. You’ve got this thing that essentially is an alien mind. And you’ve got to understand that alien mind well enough to know how to tweak it to do what you want.”
Aaron Pressman can be reached at aaron.pressman@globe.com. Follow him @ampressman.