The advent of widely accessible, generative artificial intelligence platforms gives everybody “superpowers,” in Daniela Rus’s view.
“We can become more productive, we get speed, we get knowledge, we get insight, we get creativity, we get foresight, we get mastery, we get empathy,” said Rus, the director of the Computer Science and Artificial Intelligence Laboratory at MIT, during a Globe Summit panel focused on AI and industry. “Of course, all these tools that can be used for good can also be used for bad by supervillains.”
This balancing act was on full display at several of the Globe Summit’s AI-focused sessions on Tuesday. Close to a year after ChatGPT’s splashy entrance onto the global stage, researchers and industry leaders are still weighing the benefits of generative AI (reduced workload, higher productivity, personalized services) against its downsides (rampant disinformation, harmful biases, job displacement).
The stakes, panelists agreed, could not be higher.
“You can’t uninvent the bomb,” said Joan Donovan, an assistant professor of journalism and emerging media studies at Boston University who studies disinformation, during a panel on AI and fake news. “You have to reckon with it now that it’s arrived.”
The path to mitigate AI’s perils, both for industries and for society at large, lies in human intervention, panelists said. For Marc Succi, a physician and strategic innovation leader at Mass General Brigham, this looks like rolling out “low-risk” AI tools “primarily designed to augment the ability of the doctor to do their jobs,” he said. For example, he said, MGB is beginning to deploy a system that can take a recording of an interview with a patient and distill it into patient notes — a process that can take physicians hours to do manually.
“That decreases burnout, increases patient access, and the people who run the hospitals will be happy about that,” Succi said.
But there can’t be true human intervention without diversity, said Rumman Chowdhury, a responsible AI fellow at Harvard University’s Berkman Klein Center. A wide range of lived experiences is necessary to pinpoint AI’s prejudices, such as racial or gender biases, she said during a panel on AI’s diversity problem — adding that these baked-in biases are far more likely to cause real harm than doomsday scenarios.
“It is a distraction to focus on things like, ‘Will AI come alive and kill us?’” she said. “For some people, it’s very safe to be in a speculative space, because you are not tasked with solving a real problem.”
Marzyeh Ghassemi, an assistant professor at MIT, said federal regulatory structures are crucial as AI continues to be developed, likening AI oversight to safety regulations for vehicles.
“When you’re trying to develop the windshield wiper or a better steering system, that’s fine, nobody is saying that’s dangerous for a car,” she said. “When you’re actually driving a car and people are using it on a highway, you need to know how it’s going to be used ... What is the speed limit? How do we make sure people are responsible when they’re engaging with it?”
Turning to the workplace, Paul English, an entrepreneur and the founder of Boston Venture Studio, said that while AI has the potential to reduce workloads and increase productivity, he also believes it will displace some workers.
“I’m hoping that as with every prior tech evolution, the industrial revolution and onward, that while some jobs will be automated, more jobs will be created,” he said, adding that he believes specific industries, like health care, will develop their own dedicated AI systems rather than using a catch-all like ChatGPT.
While panelists agreed that humans must oversee AI, several also noted that we are fast approaching the day when we will have to use it in order to gain a professional edge. This week, English said, he received an application for a marketing internship from a woman who disclosed she used AI to partially write her résumé.
“I called her immediately,” he said. “Because someone who admits that — they’re trying to develop a superpower by using AI in their career — I want to talk to that person.”