Last week, the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered AI convened educators, researchers, technologists, policy experts, and more for the fourth annual AI+Education Summit. The day featured keynotes and panel discussions on the challenges and opportunities facing schools, teachers, and students as AI transforms the learning experience.
At the summit, several themes emerged: AI has created an assessment crisis – student projects no longer indicate a strong learning process; schools are awash with too many AI products and need better evaluations and sustainable adoption models; AI’s benefits aren’t equitable; AI literacy is a non-negotiable; human connection is irreplaceable. Read a few of the highlights from the Feb. 11, 2026 event, and watch the full conference on YouTube.
AI’s Inequitable Impact
AI amplifies whatever educational foundation already exists, said Wendy Kopp, founder of Teach for All. In mission-driven schools with strong pedagogy, AI becomes a powerful tool for teachers and learners. But without strong pedagogy and clear guidelines, the technology becomes a distraction.
Miriam Rivera of Ulu Ventures said a critical distinction emerges between consumption and creation of AI. In well-resourced schools, she said, students often learn to create with technology (3D printing, coding), while in less-resourced schools, students merely consume it.

"How do we make sure that it's our teachers and our educators, especially those working with the most marginalized students, who are at the forefront of driving how we utilize AI?" — Wendy Kopp, Teach for All founder
Dennis Wall, a Stanford School of Medicine professor and Accelerator faculty affiliate, illustrated one way this might look. His team is developing a gamified framework to support children struggling with social communication skills. His lab is co-designing these resources with the teachers, therapists, and parents who will use them, ensuring the tools are accessible, engaging, and informed by that firsthand perspective.
AI Literacy Is a Must-Have
Education has long assumed that strong products (homework, summative tests, problem sets) indicate strong learning processes, said Mehran Sahami, a Stanford School of Engineering professor. AI has broken this assumption. Students can now generate impressive products without engaging in meaningful learning. That shift pushes educators to assess and support the learning process itself rather than just evaluate end products.
"Our challenge with generative AI is not to consider it as a tool, but consider it as a topic, and build a curriculum in which we teach how to use it to foster things like creativity and deeper educational outcomes." — Mehran Sahami, School of Engineering professor
What’s more, we can’t treat AI solely as a tool. Students need a systematic curriculum on AI. Sahami proposed a progression: introduce what AI is; teach about hallucinations and bias; show how to verify AI outputs; teach advanced techniques like prompting. Without this structured approach, students teach themselves – and 70-80% use AI to short-circuit learning rather than enhance it.
Mike Taubman, a teacher at North Star Academy in Newark, N.J., developed an “AI driver’s license” curriculum that maps the adolescent rite of passage of getting a driver’s license onto AI literacy. The goal is to put students in the driver’s seat, not the passenger seat, when it comes to AI. The four-part curriculum includes choosing a destination (learning to ask what they want of AI); learning how to drive (seeing how the tools work and what it means to prompt, build agentic workflows, and so on); opening the hood (understanding the tools’ limitations and risks); and defining the rules of the road (deciding what AI should and shouldn’t do).
Understand AI’s Learning Harms
Guilherme Lichand, assistant professor at Stanford Graduate School of Education and faculty affiliate of the Accelerator, studied AI’s impact on creativity for middle school students in Brazil. He compared AI assistance with guardrails (if students asked for 10 words, the AI would give only three) against no assistance across a series of creativity tasks.
Students with AI assistance performed better on the task while they had the tool. But when assistance was removed within the same test, the advantage disappeared – suggesting no immediate positive transfer.
While that finding isn’t surprising, he said, the results from a follow-up creative task were more concerning:
- Students who never had AI performed best.
- Students with continued AI or new AI access performed slightly worse (not statistically significant).
- Students who lost AI access after having it performed dramatically worse – four times worse than their initial advantage.
This wasn’t just about missing the tool – students had less fun and began believing AI was more creative than they were, he said, suggesting AI damaged their creative self-concept.
A “Too Many Pilots” Problem
Today we have no shortage of AI products, said Stanford Graduate School of Business Professor and Accelerator Faculty Affiliate Susan Athey, but we lack effective implementation and adoption. Schools and districts are slow to adopt new tools because of historical software lock-in and the opportunity costs of training teachers on systems that may fail.
"The bottleneck is we have too many pilots actually, and still not enough implementations that are actually effective." — Susan Athey, Stanford Graduate School of Business professor and Accelerator faculty affiliate
Athey also noted a “teaching to the test” problem for developers. If teachers spend more time on an interface, does that mean it’s good and they’re deeply engaged, or does that mean it’s terrible and they’re spending time trying to make it work? Education tools need multifaceted measurement approaches: human review, AI “guinea pigs” (simulated students to test products before real children do), and careful evaluation of what’s actually being measured. She advocated for digital public goods like evaluation tools, testing frameworks, and validated AI student simulations that could be developed by universities and philanthropy to create robust measurement infrastructure the whole sector can use.
Never Replace Real Relationships
Nearly half of all generative AI users are under 25, said Amanda Bickerstaff, CEO of AI for Education; that’s more than 300 million monthly active users of ChatGPT alone who are under 25. Students use AI more for mental health and well-being – seeking connection, support, and understanding – than for schoolwork. Bickerstaff warned about cognitive offloading, mental health offloading, and even “belief offloading,” where AI fundamentally shapes how people think, with just four or five chatbot makers having outsized influence on billions of users.
Because of that, she said, we must equip people with knowledge, skills, and mindsets to understand when and how to use AI and, crucially, when not to use it.
The most vulnerable, according to new research from Pilyoung Kim, a Stanford professor of psychology and director of the Center for Brain, AI, and Child (BAIC), are young people lacking human connections. She asked more than 260 middle school students and their parents to compare two chatbot conversation styles and share their preferences: a “best friend” that was highly relational and would respond with comments like, “That must be so upsetting. Your ideas matter so much. I’m always here to listen,” and a more transparent version that set boundaries and reminded the user that it was an AI.
"If they have more unmet social needs, it is possible that they’re more drawn to an AI that provides social connections. That might put them in more vulnerable positions to overly rely on a relationship that is not real." — Pilyoung Kim, a Stanford professor of psychology and the director of the Center for Brain, Artificial Intelligence, and Child (BAIC)
More adolescents preferred the relational AI, and, notably, more than half of parents chose it for their teens, reasoning it would be more effective at supporting issues their children might not share with them directly. More important, children who chose the relational AI were also more likely to report feeling stressed or anxious, and they reported lower family relationship quality.
“If they have more unmet social needs, it is possible that they’re more drawn to an AI that provides social connections,” Kim said. “That might put them in more vulnerable positions to overly rely on a relationship that is not real.”
She emphasized the common thread of the day: AI should never replace human connection.
The AI+Education Summit is co-hosted by the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). This story was originally published on Stanford HAI's website.