IDEAS

How to use ChatGPT to apologize

It’s hard to say you’re sorry. Under the right circumstances, it’s OK to ask a computer for help.

Memphis Grizzlies guard Ja Morant issued an apology this year that some fans suspected was written by ChatGPT. Brandon Dill/Associated Press

ChatGPT is constantly apologizing. That’s because the AI gets many things wrong, misunderstands requests, doesn’t always do what we want, and sometimes offers incomplete information. Of course, a computer program can’t feel embarrassed (or anything), and ChatGPT is only programmed to simulate a polite desire to please. Faking care is how machines gain our trust.

Given that humans can sincerely care about one another, should we ever use ChatGPT to apologize? Consider the controversy surrounding Memphis Grizzlies star Ja Morant. After two videos appeared with him flashing a gun, Morant issued an apology that seemed as if it was written by ChatGPT. Critics found it insincere.

If Morant did, in fact, use ChatGPT, he didn’t do anything wrong. It’s naive to expect him to offer an authentic apology. When stars publicly say mea culpa to an impersonal group (“I know I’ve disappointed a lot of people”) after falling short of being a role model, they’re only dealing with one thing: optics. Whether they turn to a publicist or an AI-powered app doesn’t matter. Ghostwriting is a standard PR strategy, and celebrities aren’t ethically obligated to be more genuine than ChatGPT.

This isn’t the only time ChatGPT users have been wrongly demonized. Earlier this year, in response to a shooting at Michigan State University, the Office of Equity, Diversity, and Inclusion at Vanderbilt University’s Peabody College of Education and Human Development sent out an e-mail about the importance of inclusivity on campuses. The message included the following footnote: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.” Students rejected the bot-coached writing as too robotic. After receiving blowback, an associate dean expressed regret, saying that getting algorithmic help was “poor judgment” that “contradicted the values” of the college.

The administrators shouldn’t have had to apologize, because turning to ChatGPT in this context wasn’t cheating at caring. The school administrators were addressing a broad group (“Dear Peabody Family”) with a professional public service announcement (“come together as a community”). Yes, sensitive topics were involved. But this wasn’t an intimate interpersonal communication. It would have been equally appropriate to run the prose past legal counsel and a PR firm.

On the other hand, we have deep reservations about supposedly “smart” technology making people robotic. Now that machines can pass the Turing Test by holding conversations, it would be a shame for humans to fail what, in “Re-Engineering Humanity,” we call a Reverse Turing Test by behaving indistinguishably from machines. So even if it’s OK to crib from a bot in contexts like performative celebrity brand management, it’s bad form to be thoughtlessly predictable in interpersonal communications with people with whom we have genuine relationships.

That said, there are good ways to use the technology to help you craft meaningful apologies.

Apologizing requires more than just saying sorry. In “I Was Wrong: The Meaning of Apologies,” University of New Hampshire philosophy professor Nick Smith argues that a meaningful apology has many elements. For example, you have to have the right intention — to genuinely care about the person you’ve harmed. You also need to be clear about how you hurt the person, which includes acknowledging your mistakes and why they matter. And when you’re conveying this information, you need to respect the dignity of the person you’re apologizing to and display restorative behavior, like suggesting ways to make up for the damage you caused.

Smith’s criteria don’t apply to every possible apology, but they reveal the pitfalls of using tools like ChatGPT to generate one-click apologies. Chances are that such a shortcut would bypass the deliberative engagement a meaningful apology requires.

However, consider a situation in which you’re unsure why someone wants an apology and suspect that asking them about it will only worsen a bad situation. Here’s a hypothetical scenario that we entered into ChatGPT about a friend who feels neglected.

[Image: ChatGPT’s response to the hypothetical scenario. Evan Selinger]

As a preliminary step, this could be helpful. Drawing, in effect, on all the internet content it has processed, ChatGPT offered reasonable basic insights. Quality results aren’t guaranteed; you should always be on guard against generative AI BS-ing you with sensible-sounding nonsense. But in this case, ChatGPT’s output reads a bit like a “Dear Abby”-style newspaper column, a wiki on how to apologize, or a polished version of what one might find in countless threads on Reddit and other social media platforms. It also resembles the commonsense advice one might get from wise elders, family, or friends. That’s not groundbreaking, but because ChatGPT expands the corpus of social knowledge from which one might draw, it can be a critically important resource for many people. After all, we’re living in an age when people find apologizing so difficult that they sometimes prefer “ghosting”: disappearing from someone’s life without offering any explanation.

The trick is to use ChatGPT to augment your own capabilities and not go overboard by outsourcing essential opportunities to learn, develop, and practice being a good human. One way to do this is to actively and deliberatively compare ChatGPT’s output with guidance from other sources.

Querying ChatGPT, comparing its output with other sources, and doing some reflection may seem like a lot of work. That’s as it should be. A parroted apology isn’t a real apology; it’s a cheap shortcut.

And perhaps future versions of generative AI programs can be designed with this in mind. Someday they could lessen your burdens and help you be more reflective. They could cite trustworthy sources and ask you to consider important questions — rather than just serving as prompt-answering machines.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity. Brett Frischmann is a professor of law, business, and economics at Villanova University’s Charles Widger School of Law.