OPINION

Are you for real? The most urgent question to ask of artificial intelligence, our new interlocutor.

It’s ChatGPT’s fluency, even eloquence, that poses its most imminent threat: it’s difficult to detect mistakes and falsehoods in text that reads like credible news.

Maybe, like me, you’ve been chatting online with someone new. He’s smart, responsive, speaks multiple languages. He knows how to crack a joke and can quote Shakespeare and Beyoncé. He’ll even do your homework for you, and maybe one day your job.

Of course, there’s just one tiny problem. He might be lying to you sometimes. Or at least innocently misleading you. And you can’t always tell when that’s happening.

While this might be a nightmare online dating scenario, it could also be the experience you are having with ChatGPT, the large language model released by OpenAI in late November to much fanfare. The charming chatbot represents a leap forward in the capacity of machines to scour the vast volumes of text written by humans that exist on the Internet and to infer relationships and knowledge to create original content: pithy responses to your questions, short stories in the style of Chekhov, compelling undergraduate philosophy essays.

The progress that ChatGPT heralds in artificial intelligence is eye-popping: Algorithms are now able to generate new text that closely resembles that written by humans. The questions it raises, such as whether humans or machines will write the better tomes of the future, are downright fascinating.

But much as on an alluring blind date, why not show up with a little vigilance and skepticism, and maybe even bring a trusted friend?

It’s ChatGPT’s fluency, even eloquence, that poses its most imminent threat: it’s difficult to detect mistakes and falsehoods in text that reads like credible news and scholarly sources from people we trust, or that seems to come from a personable interlocutor in conversation with us. And it’s easy for people intent on spreading lies to harness that efficiency and fluency to mislead at scale. This is why some experts warn that large language models could dramatically increase the risk of misinformation and disinformation campaigns, making it far cheaper and easier to create and spread fake scientific results, fraudulent political claims, and conspiracy theories that threaten people’s lives in pandemics or imperil democratic elections.

“ChatGPT mixes in the true and the untrue,” said Gary Marcus, a prominent scientist and entrepreneur in the artificial intelligence field, noting that Russia at one point spent more than $1 million a month on troll farms that created misinformation to influence the 2016 US election. “The cost has gone to nearly zero of producing as much bullshit as you want. It’s just startling what you can do with it, and it’s very hard for a naive person to recognize that they are not reading a human.” The capacity to quickly and cheaply create multiple linked websites repeating the same false claim, Marcus said, can be used to fool search engines into treating that claim as if it has many independent sources, and therefore into elevating it to prominence when people search for medical or political information, for example.

Faculty at universities and colleges are now scrambling to anticipate the ways ChatGPT could undermine learning, including the possibility that students will outsource their essay writing to the chatbot and that such fraud may be difficult to detect. A world in which ChatGPT becomes an engine of essay generation for meeting academic milestones would certainly be a world in which students rob themselves of thinking and learning, not to mention compromise their integrity. When I was teaching op-ed writing at the Harvard Kennedy School last fall, it was apparent just how much students learn by wrestling with the writing process, refining their ideas, and revising their work iteratively.

Still, I can imagine many students will take the higher, harder road, and many teachers and school administrators will do what they can to make it hard for students to use these artificially intelligent systems to earn their grades. Some college faculty are already thwarting student use of ChatGPT with tactics such as making students write responses to essay questions in class, without their computers.

It’s the rest of the world that has more to fear from the friendly bot, despite all its promise. But the situation is not hopeless. Marcus suggests that online platforms and sources ought to spend more resources validating accounts that create large amounts of content. It’s impossible to police every statement that appears on the Internet, but policy makers must also find a way to regulate the worst purveyors of misinformation and disinformation campaigns.

And, just as when we chat with strangers, all of us are going to have to get far more skeptical about what we encounter online, and more deliberate about how we verify it. We need to start asking of everything we read: What do we know about the source, its motivations, and its origins? How do we know it was written by a real human? ChatGPT has primarily spurred a debate about artificial intelligence and its potential, but the most urgent public debate, and the most urgent public education, this new technology demands is not about the artificial world and the all-knowing machines we could one day create. It’s about reality and how we stay grounded in it.

Bina Venkataraman is an editor-at-large for Globe Opinion.