As a new school year begins, I wanted to check in with Sally Kornbluth, who took the reins of the Massachusetts Institute of Technology in January.
A cell biologist by training, Kornbluth has been on a listening tour of the MIT community after spending nearly three decades at Duke University, most recently as its provost. Now it was my turn to listen as she appeared as a guest on my “Say More” podcast, which drops every Thursday.
One theme dominated our conversation: What role does MIT play in advancing the most powerful form of artificial intelligence while also making sure it doesn’t run amok?
In many ways, generative AI is this generation’s nuclear technology, sparking a debate among scientists on how to proceed. That’s happening at MIT after physics professor Max Tegmark spearheaded an open letter in March to tech giants asking them to hit the pause button on big AI experiments. The letter has garnered more than 33,700 signatories, including other MIT faculty.
“I don’t agree that a pause is going to be effective,” Kornbluth said. “I don’t think it’s realistic at this point, particularly with large commercial enterprises having a big interest. That horse has left the barn some time ago, but I entirely defend the right of individuals to think about that, talk about it, request it.”
Kornbluth was also among the throngs of moviegoers who this summer watched “Oppenheimer,” the blockbuster biopic about J. Robert Oppenheimer, known as “the father of the atomic bomb.”
So did the movie make her think about her role as a leader of scientists and what kind of responsibility she has?
“It’s easy for scientists to get caught up in the incredible enthusiasm and passion for their work,” said Kornbluth. “The movie depicted quite well the sudden realization of the repercussions of this work after the bomb had been dropped. You have to think upfront about the social and ethical implications of your work.”
That’s top of mind at MIT in the age of generative AI. Kornbluth said she recently put out a call to faculty asking for proposals on how to develop guidelines, policy recommendations, and other actions to help industry leaders, academic institutions, and policymakers understand the impact of AI.
Still, Kornbluth counts herself among those optimistic about our AI future.
“I’m not in the doomsayer camp at all. I don’t have a lot of fears about it,” she said. “I think it’s overstated the degree to which its actions are really independent of human creativity.”
Take, for example, ChatGPT, part of the class of powerful AI software that everyone is buzzing about.
“You could view ChatGPT as the ultimate in human assistance technology,” Kornbluth added. “Almost think about it like a bionic assistive device. It’s like your other hard drive up in your head.”
Another area where Kornbluth believes MIT can, and must, lead is climate change. She talked about the urgency of the issue during her inauguration address in the spring, and how MIT is well-positioned, with about 20 percent of its faculty already working on climate.
She believes MIT, which operates at the intersection of learning and real-world application, can help solve the climate crisis through policy and innovation.
“I view it as an existential issue to the extent that if we don’t take action there, all of the many, many other things that we’re working on, not that they’ll be irrelevant, but they’ll pale in comparison,” she said. “Look at what’s happened in just the U.S., never mind globally this summer, with weather events. This has the potential to impact and to derail many, many other things that are going on in society.”
Anna Kusmer of the Globe staff contributed to this article.
Shirley Leung is a Business columnist. She can be reached at firstname.lastname@example.org.