The message from the White House—and, often, from tech companies and public schools—is that Figure 03 and its A.I. militia are irreversibly here, and belong everywhere, and we should feel terrified but also “empowered,” and that the more time and resources we hand over to them the less they will hurt us, hopefully, maybe. Last month, New York City’s Department of Education began soliciting public feedback on its preliminary guidelines for using A.I. in K-12 classrooms, which include this admonishment: “The question is not whether AI belongs in schools. The question is whether we will collectively build a system that governs AI to serve every student and every stakeholder.”
It’s quite the rhetorical suplex—opening a debate by declaring its central question off limits. But, as we know from hallucinating chatbots, saying something doesn’t make it so. Countless studies have sown doubt about the place of A.I. in pedagogical settings. “The integration of LLMs into learning environments,” a 2025 study out of M.I.T. cautioned, “may inadvertently contribute to cognitive atrophy.” (The authors appended an F.A.Q. to the paper with instructions on how to discuss its findings: “Please do not use the words like ‘stupid’, ‘dumb’, ‘brain rot’, ‘harm’, ‘damage’, ‘brain damage’, ‘passivity’, ‘trimming’ and so on.”)
More recently, Education Week published an analysis of data from some thirteen hundred U.S. school districts, which found that about one in five student interactions with generative A.I. “involved cheating, self-harm, bullying, and other problematic behaviors.” This month, a study by researchers from M.I.T., Carnegie Mellon, U.C.L.A., and the University of Oxford showed that people who used L.L.M.s to solve fraction problems and then lost access to A.I. assistance “perform significantly worse without AI and are more likely to give up. . . . These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.” (This research has not yet been peer-reviewed or published in a scientific journal.) And, at the start of the year, the Brookings Institution released a “premortem on AI and children’s education,” which paired analysis of about four hundred research studies with hundreds of interviews with students, parents, educators, and technologists, and concluded that A.I. tools “undermine children’s foundational development.”
The main arguments against the use of generative A.I. in children’s education are threefold. The first is that L.L.M.s encourage cognitive offloading before kids have done much cognitive onloading—that is, if these tools cause atrophy of thought in adults, then we can scarcely overestimate the potential effects on a brain that has not developed those cognitive muscles in the first place.
The second is that chatbots, which mimic emotional intimacy and tend toward sycophancy, warp how children forge their selfhood and relationships. Around age ten or eleven, kids are “suddenly developing more sophisticated relationships and social hierarchies,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, told me. “A lot of that can be traced back to surging oxytocin and dopamine receptors. Oxytocin makes us want to bond with peers, and dopamine makes it feel good when we get positive feedback.” When a fawning L.L.M. enters the chat, “it’s hijacking the biological tendency to want peer feedback,” Prinstein said. Tweens do a lot of mutual emotional disclosure in the normal course of growing up, he went on, “but if they’re going to a chatbot, they miss out on practicing skills that we use for the rest of our lives.”
The third complaint against the use of A.I. in schools is that it confuses ends and means, privileging the most efficient route to the correct answer, the crispest thesis statement, or the neatest drawing over the messier and less quantifiable process of building a thinking, feeling person. “We are potentially undermining complex thinking, changing the development of sociality, and mistaking the learning goal,” Mary Helen Immordino-Yang, who is a professor of education, psychology, and neuroscience at the University of Southern California, told me. “We are cutting off learning at the knees.”
Even some pro-A.I. education advocates concede that A.I. poses significant cognitive and social-emotional risks to young people. Amanda Bickerstaff is the co-founder and C.E.O. of the organization AI for Education, which provides training for educators and students on generative A.I. literacy. “Children should not be using chatbots under age ten,” Bickerstaff told me. “These tools require expertise and evaluation skills that even many adults don’t have.” Google’s decision to make Gemini available to all ages, she said, marked one of the few times in her career that she had lost sleep over a work-related matter; she recalled thinking, “They so clearly know that this is going to be bad for kids, and yet they’re still going to do it.” Bickerstaff went on, “I don’t think they’re asking really basic questions like, ‘If a kid can immediately make a picture instead of draw one, what will happen to that kid’s ability to think on their own and draw?’ ”
