Dear ChatGPT,
Wow, congratulations on becoming the fastest-growing app ever — 100 million users in two months? Very impressive.
And I can see why. You’re clever at crafting stories, you’re quick to marshal facts, you sound more like a person than any bot I’ve ever seen. You may have a future answering e-mails, prompting new ideas for articles, or even dreaming up bedtime stories. That’s already happening.
But you’re also making me nervous. Like, very nervous. Sometimes your facts are made up. At times, you’ve gotten insulting and even threatening. And now I’m worried you and other conversational AI systems will be put to nefarious purposes, helping kids cheat in school, polluting social media with more convincing misinformation, or flooding public discourse with a biased point of view.
All of this suggests the tech industry may be moving too fast to distinguish between safe uses and dangerous applications, according to a half-dozen experts in AI and security interviewed by the Globe. As evidence, look no further than how quickly Microsoft’s public test of a version of ChatGPT in its Bing search engine went off the rails. The bottom line, these experts said, is that business leaders — along with ethicists and regulators — need to be much more careful about adding this emerging technology to every application in the world.
“We need way more scrutiny of these models because they’re getting adopted so fast,” said Rana el Kaliouby, who was a cofounder of Boston AI startup Affectiva and is now deputy chief executive of Swedish AI company Smart Eye. “This is not in the research lab anymore.”
Chatbot developers must take more care in what information they are using to train the apps and how they are applied to real-world problems, she added. “We have to really put energy towards the bias and ethics part before these models get adopted at scale and are integrated into our everyday devices and technologies.”
One key issue: ChatGPT and other bots can sound authentic but are easily led astray. That’s because the software programs do not simply retrieve a set of facts from an established repository the way a search engine does, or use grammatical rules to build their answers. Instead, they mainly compare patterns of words seen online with the words typed by the user and then use statistical probabilities to guess which words they should say next.
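For readers who want a concrete sense of that guessing game, here is a minimal sketch, in Python and purely illustrative: real chatbots rely on large neural networks trained on vast amounts of text, not simple word counts, but the underlying idea of predicting the next word from statistical patterns is the same.

```python
# Toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then sample the next word in proportion to those counts.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# For each word, collect every word that was observed to follow it.
following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        # Duplicates in the list make frequent continuations more likely.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat slept on the mat and the cat"
```

Nothing in that loop checks whether the output is true; it only asks what words tend to come next, which is why fluent-sounding answers can still be fiction.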

Not surprisingly, some of the errors are big — and consequential. Google parent Alphabet’s stock lost about $100 billion in market value last month after a demo of its upcoming chatbot included false information about the James Webb Space Telescope. Microsoft’s chatbot demo for search has had numerous problems, including getting details wrong about Mexican bars and cordless vacuums. The company scaled back the app after some of its answers started to sound downright unhinged, including threats, made in a conversation with a reporter, to hack servers and steal nuclear launch codes.
“It learned how to make language, and language can be used to make fiction and nonfiction,” said computer scientist Stephen Wolfram. “And, you know, it doesn’t really distinguish between those.”
Still, Wolfram sees value in putting chatbots to work on mundane tasks like answering his voluminous e-mail. Trained on his prior correspondence, a bot might do a good job responding to basic queries. But Wolfram frets that a hostile e-mailer might manipulate the app to reveal his secrets, as well. “What if you’re going to ask it something that is worming into something I didn’t really want people to know?” he said. “Can you make the bots tell you things that the creator of the bot didn’t want you to know? That’s the next level.”
Even after the Bing debacle, Microsoft is moving forward with adding chatbot features to help users do searches and craft documents, albeit with added safety features. Salesforce is putting a bot it calls Einstein GPT in Slack and its marketing apps. Google’s Bard chatbot, still in beta, aims to summarize answers from online research.
Meanwhile, startups are raising money and developing new applications for marketing, communications, and gaming. Some are already using chatbots’ ability to create coherent sentences and paragraphs to help write marketing copy, children’s books, and short stories.
Novus Writer, a Boston-area startup, is using AI to help write marketing materials for clients but is adding guardrails to ensure the resulting copy is factually accurate and does not plagiarize from anything already online. “The ecosystem is moving very fast,” Egehan Asad, the company’s chief executive, said.
But however well-designed they are, conversational AI bots could be put to more nefarious uses.
Security researcher Bruce Schneier and data scientist Nathan Sanders at Harvard’s Berkman Klein Center for Internet and Society have warned of AI chatbots as a threat to the democratic process. For one, bad actors could use the bots to further flood social media with disinformation. Going a step further, they said, chatbots could be used to overwhelm congressional offices or regulatory agencies with fake but hard-to-detect advocacy letters. (Lobbyists who tried to flood the Federal Communications Commission with millions of fake comments about net neutrality in 2017 were undone by the uniformity of the submissions.)
Sanders pointed out that human lobbyists already conduct misleading campaigns at times, but AI apps could supercharge the damage. “It’s expensive to hire human lobbyists in every state of the union if you want to have an effect on state legislatures across the country, but it’s trivial to scale something like ChatGPT,” he said.

There’s not a lot of evidence of chatbots being used in the legislative process so far. But someone did use ChatGPT to generate a comment opposing the piece Schneier and Sanders wrote for The New York Times.
“When should we be worried about AI influencing the political system?” Sanders said. “Maybe when AI has a comment published in The New York Times saying that we shouldn’t regulate AI.”
Chatbots can also be trained to imitate the speaking style of particular publications or individuals. That opens another avenue for both useful and criminal applications. A chatbot trained on a user’s own material could answer e-mails (as in Wolfram’s example), make appointments, or perhaps argue with bill collectors. Eventually, AI could perform more complicated tasks and perhaps even be used to generate income in a virtual reality setting, according to el Kaliouby.
“A virtual version of you could unlock this virtual human economy,” she said. “I could send my virtual human to have this interview with you while I’m chilling at some beach somewhere.”
But crooks have already used AI apps to imitate the voices of executives and steal from corporate coffers. Adding chatbots to the mix could magnify the criminal possibilities: mass-scale but personalized manipulation, false stock tips spread to move a share price, or false accusations circulated across the Internet to defame a political candidate.
The biggest fear of all is straight out of Hollywood — that sentient AI programs will try to take over the world and wipe out humanity. One reporter managed to get Microsoft’s Bing chatbot to say it fantasized about hacking, creating a deadly virus, and stealing nuclear weapon launch codes. But those sentences were likely just drawing on sci-fi stories about AI found online, Brian Smith, a Boston College computer scientist and associate dean for research, pointed out.
“Let’s calm down. Sometimes my Roomba vacuum gets stuck at the bottom of the stairs,” he said. “So when someone is talking about the AI apocalypse, I’m not sure it’s quite up to that.”
Aaron Pressman can be reached at aaron.pressman@globe.com. Follow him @ampressman.