The message from OpenAI founder Sam Altman appeared on Twitter on Nov. 30, 2022: “Today we launched ChatGPT. try talking with it here: chat.openai.com.”
And just like that, the world changed.
Though artificial intelligence programs have been around for years, ChatGPT is something else — a “generative” AI system that seems capable of original thought (emphasis on “seems”). With a few typed commands, anybody can use it to crank out essays, poems, images, and even computer software with humanlike sophistication.
ChatGPT became one of the fastest-growing online applications ever. And one year later, it attracts about 100 million users each week.
Even the recent palace coup at OpenAI, in which Altman was fired as chief executive but rehired days later, will probably have little impact on the popularity of ChatGPT. There’s speculation that the move was inspired by fears that he was too quick to release generative AI services into a world not yet prepared to use them safely.
But the dam has already burst. Similar AI programs like Bard, Midjourney, and Stable Diffusion have also signed up millions of users. And billionaire Elon Musk is about to launch another generative AI system called Grok. In all, the generative AI boom is the biggest in digital tech since Apple’s iPhone ignited the smartphone market in 2007.
“It’s only been a year, and I feel like it’s changed the public conversation about science and technology,” said Tim Ritchie, president of the Museum of Science in Boston. “I’m not sure I’ve seen anything unleashed that has made such a big difference so quickly.”
And yet, one year isn’t nearly long enough to answer the big questions about AI. It’ll take much longer to understand its real impact, its astounding risks, its vast capacity for error, and the downsides of relying on machines to do our thinking and communicating for us. What’s more, the new AI applications haven’t even had that much impact yet. But the same was true of the iPhone’s early days: It took a few years for smartphones to become indispensable.
So where do things stand?
People are finding practical applications for AI systems, with new ones arriving every day. There are the obvious ones, like corporate memos and legal briefs generated automatically from a handful of notes, suggested travel itineraries for specific destinations, or elegant illustrations created in seconds by people without a scrap of artistic talent. There’s also the prospect that AI will enable anybody to write powerful software apps, just by asking.
“What you’re going to see is the ability of more and more nontechnical people to become software developers without even knowing it,” said Bret Swanson, a fellow at the American Enterprise Institute. “I can essentially tell a computer what I want it to do in my normal voice.”
There are other uses as well. Next time you need to step away from a teleconference, AI might bail you out. Jeetu Patel, an executive vice president at telecom giant Cisco, said his company’s Webex video conferencing system knows when you go off-camera. The system tracks what’s said while you’re away and displays an on-screen summary to help you catch up. Patel said the same AI technology can generate summaries of messages in a user’s voice mailbox, or edit a seven-minute corporate marketing video down to a 30-second highlight reel.
AI is even learning to play instruments. Google’s DeepMind lab earlier this month demonstrated a tool that lets users compose music merely by humming. The software can replay the tune using realistic audio synthesis that can feature a single instrument or an entire orchestra.
Amid all the hype, business leaders are trying to figure out how best to use AI technology and account for its myriad effects.
“Everyone is recognizing that AI can have an impact on their business, and they’re just wondering exactly how,” said Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Lab. She compared it to the “digital transformation” that occurred as businesses embraced the internet.
But the backlash against generative AI has been remarkable. Six months after ChatGPT’s launch — amid other alarm bells — prominent scientists warned that AI could soon prove too powerful for humans to control, and become as dangerous as “other societal-scale risks such as pandemics and nuclear war.”
Many ridiculed this claim, but there’s good reason to worry about less catastrophic threats. Educators worry that students are using AI to complete assignments and avoid the hard work of learning. Data security experts say that cybercriminals can use AI to launch online fraud campaigns on a massive scale, and hostile nations could serve up vast quantities of disinformation through social media networks.
And of course, machines that can emulate humans could be career-killers for many workers. Movie and television actors and screenwriters went on strike earlier this year, in part over AI. They forced Hollywood production companies to accept strict limits on using AI to write screenplays or to replace actors with digital simulations.
But commercial artists and illustrators haven’t been as lucky, said Scott Nash, executive director and founder of the Illustration Institute. Because of competition from AI-generated art, “the young artists I know are not getting paid what they’re worth,” Nash said. “They’re getting paid what we were getting paid in the 1980s.”
University of Chicago computer scientist Ben Zhao said that many artists can’t find work at all. “All the best people who do this are losing their jobs,” he said. To make matters worse, generative AI programs are trained on human artworks with no compensation to the original artists.
In response to such concerns, the nonprofit Responsible Innovation Labs is drafting voluntary guidelines for AI startup companies. These companies will commit to understanding the risks of the AI systems they develop. They’ll promise to secure permission to use the intellectual property of others for training purposes. They’ll also pledge to test their systems to ferret out security flaws and identify biases that could cause AIs to produce results that discriminate on the basis of race or gender.
The goal, said executive director Gaurab Bansal, is an AI ecosystem where concern for the technology’s social impact is built in at the beginning. “It’s very hard to retrofit a company for responsibility,” Bansal said.
Governments worldwide have more aggressive policies in mind. The European Union is putting the finishing touches on AI regulations it’s been considering since 2021. The EU plan would require companies to reveal the kind of data used to train AI systems, and all AI-generated materials would have to be identified as such.
The Biden administration last month issued an executive order calling for stricter regulation of AI systems. The order would force developers of AI “that poses a serious risk to national security, national economic security, or national public health and safety” to report their activities to federal regulators.
It’s an open question whether Biden can enforce his plan without action from Congress. But the sheer speed of the administration’s response is revealing.
For now, AI experts and observers remain optimistic but vigilant.
MIT’s Rus said she foresees “a future where generative AI is not just a technological marvel, but a force for hope and a force for good.”
Ritchie, from the Museum of Science, puts the onus on all of us. “The thing that hasn’t changed is human nature,” he said. “AI will only be as good as what humans put into it.”