
Harvard student’s ChatGPT experiment reveals disruption in store for higher education

Students in professor Wesley J. Wildman’s Data, Society, and Ethics class at Boston University developed a policy for using artificial intelligence, including ChatGPT, this semester.Jonathan Wiggs/Globe Staff

Maya Bodnick’s first year at Harvard University was marked by the proliferation of generative artificial intelligence tools, most notably ChatGPT, which was quickly adopted by college students in recent months to assist with, or even cheat on, homework assignments.

Witnessing her peers reach for ChatGPT to study or to help with essays alarmed the rising second-year student, who worried about the potential loss of critical thinking and writing skills if students rely too heavily on the technology to do their work. That concern led her to create an experiment that involved finding eight professors and teaching assistants willing to grade essays written by the tool (the graders were told each essay was written either by Bodnick or by ChatGPT, without being told which).


The graded essays revealed a stunning reality: ChatGPT could pass Harvard classes.

“It got a 3.34 GPA, mostly As and Bs, one C,” Bodnick said in an interview. “That’s not a great GPA for Harvard, but it’s well above a passing grade.”

The experiment reveals a major challenge for educators scrambling to keep up with the fast-paced adoption of a tool that is hard to detect and capable of composing college-level prose. Professors are now reconfiguring course assignments and creating guidelines for students on when and how they can use ChatGPT, and when it crosses the line into cheating. Bodnick’s experiment also underscores the threat artificial intelligence poses to scores of entry-level jobs in consulting, banking, journalism, and software engineering that graduates of top-tier schools have prepared for and historically snatched up with ease.

“Ever-smarter AI tools are hitting our universities like a tsunami,” said Max Tegmark, a professor researching artificial intelligence at MIT. “Our greatest challenge isn’t testing our students, but figuring out what skills to teach them that will still be economically valuable by the time they graduate.”


Artificial intelligence “raises fundamental and existential questions about what an education looks like,” said Paul LeBlanc, president of Southern New Hampshire University. LeBlanc said that a large retailer recently told him that the company is not going to hire software engineers anymore, instead relying on artificial intelligence to produce code faster than humans could. Still, LeBlanc sees the looming disruption as an opportunity to reprioritize what he calls “human-centered jobs,” including teachers, counselors, social workers, and health care professionals.

“We just don’t like to pay for [these jobs] in America,” LeBlanc said, adding that the country must rebuild its mental health system, fix its criminal justice system, and create a robust structure for geriatric care. “One of the painful transitions we’ll go through as knowledge work gets displaced [by artificial intelligence] is to rethink what we pay and how we pay for [jobs to] fix huge, broken social systems in America.”

At Harvard, Bodnick, who is studying government and wrote about the experiment in the publication Slow Boring, worries about the prevalence of cheating with the use of ChatGPT, but said she recognizes universities must learn to work with the technology.

“AI is just going to exist in the real world and you can’t prevent companies from using it to automate jobs,” Bodnick said. “It definitely makes you rethink what we’re doing in college. ... You really do want to be able to apply to a lot of things you’re learning so I do wonder if pedagogies are going to shift a lot.”


As Harvard prepares for the upcoming semester, Amanda Claybaugh, Harvard’s dean of undergraduate education, shared guidance with faculty on Wednesday regarding generative artificial intelligence in the classroom, including a reminder to adopt a classroom policy for work done by ChatGPT.

Adriana Gutiérrez, a senior preceptor at Harvard, said in an interview that she realized last semester the need for a clearly stated policy regarding student usage. Gutiérrez said she appreciates that the university is leaving it up to instructors to determine how to word the policy needed for each class. Gutiérrez plans to state in her syllabi that students “will be able to use language learning models for brainstorming, for inspiration, and even for outlines.”

Gutiérrez said students will need to give appropriate acknowledgment of how they used AI in their work, but will not be allowed to use it for editing, translating, or writing an essay. She added that in her small Spanish classes, she has the advantage of learning each student’s writing style, strengths, and weaknesses, which makes it easier to detect when ChatGPT is used, especially when a student’s work suddenly uses “transitions and vocabulary that [they] could not possibly” know. Gutiérrez wants to emphasize to students that the process of writing is the most important part of her classes.

“It’s not just writing for the sake of writing,” Gutiérrez said. The assignments are meant to help students “organize their ideas and make connections on the topics that we discuss to making a really creative, interesting paper.”


Some professors are changing course assignments in response to ChatGPT’s emergence on campus. Economics professor Jason Furman tweeted in response to Bodnick’s experiment that he plans “to drop the essay [in Harvard’s introductory economics courses] this coming year, in part because ChatGPT has reduced the marginal net benefit that comes from the essay.”

In an interview, Furman said that he and his co-teacher brainstormed ways to revise the essay prompt, including having ChatGPT write an essay then asking students to write a critique of the essay. But Furman “asked ChatGPT to write an essay and write a critique and it did a great job of that as well.”

“In my course [the essay] was nice to have but not central to what we are trying to teach, so we decided the costs of it outweighed benefit,” Furman said.

Harvard said it will hold information sessions next month to help professors “AI-Proof” assignments.

“We can’t be sure yet what that impact will be, since these technologies are rapidly evolving, along with our understanding of the possibilities they afford and the challenges they pose,” Claybaugh wrote.

Hilary Burns can be reached at hilary.burns@globe.com. Follow her @Hilarysburns.