What we lose when machines do the writing

Schools will surely forbid students to have AI write their papers. But we also need to make sure they understand what they’d be missing if they did outsource that work.

Truly original ideas get refined through experimentation and modification — which is to say, multiple drafts. (Shutterstock/Lightspring)

As a college writing teacher, I’ve been joking with colleagues for years about what will happen when artificial intelligence becomes competent enough to write student papers — and to grade them (the rest of us will head for the beach!). The idea that language-generating AI was going to change the way people write was a distant concern, until it wasn’t.

Since OpenAI made its language-generating model, GPT-3, available to the public last fall, I’ve been experimenting with it to see how close it can come to writing as well as my students. The interface is as easy as ordering coffee: You type a request in a text box and your text is generated in a few seconds. When I typed “write a paragraph about what college students lose if they let AI write their papers,” I had my paragraph in less than 10 seconds, for less than 10 cents:

“If college students let AI write their papers, they may miss out on important writing skills. AI cannot yet replicate the human mind, so it may not be able to capture the nuances and subtleties of writing. In addition, AI papers may not be able to accurately represent the student’s voice or style. As a result, students may end up with a paper that is less than ideal.”

Indeed, this answer is neither nuanced nor subtle, but it won’t be long before our students are able to create convincing papers in a few minutes with minimal effort — ones that do appear to be the product of critical thinking.
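
For the technically curious, everything behind that text box reduces to a single API request. What follows is a minimal sketch using OpenAI's Python library as it existed at the time; the model name, token limit, and temperature are illustrative assumptions, not the exact settings used:

    import openai  # OpenAI's Python client, circa 2022

    openai.api_key = "YOUR_API_KEY"  # placeholder; keep real keys out of source code

    # Send the prompt quoted above to a GPT-3 completion model.
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3 model name
        prompt=(
            "Write a paragraph about what college students lose "
            "if they let AI write their papers."
        ),
        max_tokens=150,   # room for a short paragraph
        temperature=0.7,  # moderate randomness in word choice
    )

    print(response.choices[0].text.strip())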

Schools will no doubt add turning in machine-generated papers to a list of what counts as academic misconduct, along with plagiarism and buying papers on the Internet. But I’m more interested in how we’ll convince students that just because you can outsource your writing to a machine doesn’t mean you should.

Let’s start by making a distinction between the spelling and grammar checkers that tinker with a writer’s product and GPT-3, which is poised to take over the writing process. As writing instructors tell their students, the writing process matters because writing is a form of thinking; we figure out what we think when we try to write about it. If a machine is doing the writing, then we are not doing the thinking.

In writing courses like mine, students draft essays, share them with instructors and peers, and learn how to respond to counterarguments. More often than not, when I read a student paper draft, I’ll find the most interesting and important point in the conclusion; the student had to write the rest of those paragraphs to figure out that point. When we write, we are working toward a deeper understanding of a text, a phenomenon, a problem, or a philosophical question. Our students won’t experience that by typing an assignment prompt into GPT-3. And we won’t benefit from the thinking that emerges as students try out and modify their ideas.

It only took me a few minutes of experimenting with GPT-3 before I was able to generate introductory paragraphs that mimic those my students might draft, on their own, today. When I asked GPT-3 to conjure up a thesis statement that contained an objection to an argument in Michael Sandel’s book “The Case Against Perfection,” it gave me this:

“One potential benefit of genetic engineering is that it could create a more unified and diverse community. By allowing individuals to choose their own physical and mental traits, genetic engineering could lead to a world in which people are not judged by their appearance or abilities. This would create a more tolerant and inclusive society.”

On the surface, this seems like a reasonable claim. When I assign my students an article by Sandel, we always have lively class discussions about whether society would be more equitable if everyone could choose their own traits. But if one of my students drafted this paragraph, I would have questions: I would ask the student why they think that no one would judge people for the traits they had selected. And I’d ask the student why they concluded that people, when given the choice, would opt for a diverse range of traits rather than choosing to look like TV or movie stars. The student might concede that being able to choose our traits wouldn’t necessarily lead to a less judgmental society. Or the student might argue that since there is so much societal pressure to look a certain way, it would be more equitable if everyone could look that way. No matter how the student answered those questions, they would have developed a clearer and more nuanced position on the topic.

I tell my students that writing — in the classroom, in your journal, in a memo at work — is a way of bringing order to our thinking or of breaking apart that order as we challenge our ideas. We look at the evidence around us. We consider ideas we disagree with. And we try to bring a shape to it all. Sometimes my students see the process differently. They see writing a paper as a hoop they are being asked to jump through, a way for me to evaluate them and pronounce them successful or not. In other words, they see writing solely as a product. If the end point rather than the process were indeed all that mattered, then there might be good reason to turn to GPT-3. But if, as I believe is the case, we write to make sense of the world, then the risks of turning that process over to AI are much greater.

There are many ominous science fiction stories about what might happen if we are defeated by our own machines. But the evidence suggests that rather than being conquered by machines run amok, we’re willingly outsourcing too many processes to them, including writing. And since GPT-3 isn’t actually thinking, no one will be thinking. Perhaps the most worrying outcome is that we will lose our commitment to the idea that we ought to believe what we say and write — an idea that is already under threat from disinformation campaigns and the speed at which social media moves.

Each semester, I tell my students about the magazine editor who, upon learning that I had not checked a fact in an article I was working on, said to me, “If you’re going to put your name on something, don’t you want to know that it’s true?” I tell them there’s no point in writing a paper unless writing it helps you understand why you think what you think.

Would it matter if we stopped believing what we write? I asked GPT-3.

It gave me this answer: “No it does not matter if we believe what we write.”

We’ve reached the point where we can’t easily distinguish machine writing from human writing, but we shouldn’t lose sight of the huge difference between them.

Jane Rosenzweig is director of the Harvard College Writing Center and author of the Writing Hacks newsletter. Follow her on Twitter @RosenzweigJane.