
Will AI be monitoring kids in their classrooms?

Students and teachers are experimenting with tutoring chatbots, but it’s much harder to evaluate more opaque aspects of the technology.

Students in a sixth-grade math class at First Avenue Elementary School in Newark, N.J., have been among the early testers of Khanmigo, a new AI-assisted tutoring bot. GABRIELA BHASKAR/NYT

With the current panic over how students are going to abuse AI, far too little attention has been paid to the ways that AI might harm them — including by surveilling them.

While schools are investing millions in AI detection software to try to figure out who is using AI tools and who isn’t, a quieter push is bringing chatbots into the classroom, and the implications might be even more troubling.

The highest-profile example of this comes from Khan Academy, which has grown from a repository of free instructional videos on YouTube to one of the largest online educators for K-12 students. Now, in a portent of what is likely to come in K-12 education, Khan Academy is offering a tutor bot called Khanmigo. It was first piloted at the private Khan Lab School in Palo Alto, Calif., but it is slated for expansion; it’s being tested in public schools in Newark, N.J. Khan Academy founder Sal Khan predicts that chatbots will give “every student in the United States . . . a world-class personal tutor.”

But there is a big difference between a human tutor and an AI assistant. And the stakes for children are particularly high. For one thing, educators are already raising the alarm that the technology might get things wrong or that it might just give students the right answer in a way that undermines the learning process. When The Washington Post interviewed Khanmigo’s Harriet Tubman character, the bot’s wooden recitation of Wikipedia-like facts was interspersed with quotes that are frequently misattributed to Tubman. It couldn’t go beyond a very narrow focus on the Underground Railroad, and the bot shut down when asked to comment on topics like reparations, no matter how crucial the idea was in Tubman’s time.


But while it’s easy for reporters, students, and teachers to test the limitations of the chatbot’s responses, it’s much harder to evaluate more opaque aspects of the technology.


The chatbot includes what Khan describes as “guardrails” — tools to monitor students for signs of self-harm. As a spokesperson for Khan Academy told us: “Our primary aim is to provide students with academic support. If, in the course of doing that, a student reveals something about harming themselves or others, we want to be able to flag that for the adults in their life.”

But if it’s questionable whether chatbots are giving students accurate information, why would we believe similar technology could be an accurate assessor of students’ mental health? The Khan Academy spokesperson didn’t specify the methodology used to identify such risk, but past tools based on language analysis don’t have a good track record.

Prior reporting by Education Week found that the school surveillance system Gaggle would routinely flag students simply for using the word “gay” in an email or file. Other students were flagged for sarcasm, since keyword searches couldn’t distinguish between students jokingly telling a friend to “kill yourself” and a genuine threat.

AI chatbots may be more sophisticated than simple keyword searches, but they can break in more sophisticated ways. AI tools reflect biases both in the data they are trained on and in the design decisions humans make in their construction. These are the same sorts of biases that have led to facial recognition algorithms that can be 100 times more error-prone for Black women than for white men. Mental health surveillance AI, like what’s probably being incorporated into the Khan Academy software, is even more likely to fail because it’s trying to predict the future rather than, say, simply recognizing a face. If a child types “this assignment is going to kill me” in a chat, for example, the software may misinterpret that expression of frustration as a plan for self-harm.


If the “guardrails” on an app like Khanmigo get it wrong, students might face police investigation, a psychological intervention, or worse. And for neurodivergent students who already face countless forms of human bias, a system like Khanmigo, trained on a data set of supposedly “normal” and “at risk” students, may treat their differences as dangers. Even worse, for those who are falsely flagged as a threat to themselves or others, there’s no way to prove a negative — no way to prove that they weren’t a threat.

The spokesperson for Khan Academy defended the AI project, claiming that the “large language model we use is strongly constrained to suit our pedagogical and safety principles” and that the organization is “keenly aware of the risks.” The spokesperson acknowledged that “AI makes mistakes” but claimed that “users have reported factual errors like hallucinations or incorrect arithmetic in less than 2 percent of interactions.” The representative also said Khan Academy agrees that tutoring software should not provide students with answers, so it is “addressing instances when Khanmigo erroneously does provide the answer.”


Khan Academy’s goals — supporting students and combating self-harm — are laudable, but the risks of rolling out bad AI are just too great. Until schools can be sure that these systems are effective and nondiscriminatory, everyone should press pause.

Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or S.T.O.P., a New York-based civil rights and privacy group; a Technology and Human Rights fellow at the Harvard Kennedy School’s Carr Center; and a visiting fellow at Yale Law School’s Information Society Project. Shruthi Sriram, an undergraduate at Boston College, is an advocacy intern at S.T.O.P.