Artificial intelligence systems like ChatGPT have alarmed everyone from scientists at Google to billionaires like Elon Musk. But Boston’s city government seems mainly worried about missing out on the chance to use AI to boost worker performance.
On Thursday, Boston’s chief information officer, Santiago Garces, issued guidelines that encourage city workers to try out these AI systems for a variety of tasks, such as writing e-mails, summarizing lengthy documents, or creating original images, videos, and audio tracks.
“We want to encourage responsible experimentation and we encourage you to try these tools for yourselves to understand their potential,” said a city document explaining the guidelines, which apply to all city departments except the public school system. A spokesman said further research is needed to decide how to manage AI in schools.
Garces said he and his staff began discussing the issue not long after the popular AI system ChatGPT was opened to public use last fall. In its first two months, over 100 million people worldwide used the system, which can create original essays, poems, and even computer code in response to simple questions from humans. Other systems based on similar technology can create realistic-looking photos, drawings, or videos on command, and even compose music.
“These tools had become so pervasive and so easy to access, the thought was maybe some people in the organization were already starting to use them,” said Garces. Rather than try to stop city workers from using them, he and his team realized that AI could make them more efficient at their jobs.
For instance, a worker might compile research notes for a memo on the city’s bike lanes. But instead of drafting the memo himself, he could have ChatGPT write it up from the notes while he handles other tasks. The worker could save minutes or hours this way, while producing a report that might be better organized and easier to read than many humans could manage on their own.
The guidelines suggest that workers should take advantage of such opportunities. At the same time, they warn users that AI systems must be handled with care.
For instance, colleagues and citizens should be told when a document or image was AI-generated. Users should be careful not to include sensitive personal information, like the names or addresses of Boston residents, in the prompts they give to an AI, because the system might retain that information and reveal it to unauthorized persons. And all AI-generated content must be double-checked by people, because these systems are famously prone to error.
“We want people to know they’re still responsible for the output,” said Garces.
On the whole, Garces seemed eager to put AI to work in city government. “When spreadsheets came out, we stopped doing accounting with pen and paper and 10-key adding machines,” he said. Ultimately, Garces predicted, AI systems could be equally transformative.
Hiawatha Bray can be reached at firstname.lastname@example.org. Follow him on Twitter @GlobeTechLab.