
S.A.F.E. by Design: Policy, Research, and Practice Recommendations for AI Companions in Education
The emergence of consumer-facing AI chatbots in education has created a kind of "shadow" learning environment, where students increasingly rely on general-purpose AI tools that lack instructional and privacy safeguards. AI chatbots acting as "companions" are rapidly entering education spaces without sufficient guidance, oversight, or parental involvement, often on school-provided devices (Gaines, 2025). In a survey by Common Sense Media, about one in three students indicated that conversations with AI companions were as satisfying as, or more satisfying than, interactions with real-life friends, and more than 70% of teens have used these tools (general-purpose AI chatbots and purpose-built companions) at least once, with over half using them monthly (Robb & Mann, 2025). Most critically, the documented link between AI companion interactions and tragic instances of youth harm or suicide underscores the life-and-death stakes of deploying this technology without rigorous ethical and clinical oversight. Moreover, about a third (31%) of high school students surveyed reported engaging with AI systems for personal reasons (not for schoolwork) on a school-issued device or software (Laird et al., 2025). This statistic highlights a significant blurring of boundaries between institutional equipment and personal emotional use. While school-issued devices are intended for academic purposes, students are increasingly using them as gateways to the "social AI" market.
Convened by the EDSAFE AI Alliance, the SAFE AI Companions Task Force is a global workgroup of educators, technologists, policymakers, researchers, industry experts, and youth and civil rights advocates committed to promoting safe and effective use of AI companions in education, anchored in our SAFE Framework. Over the last four months, the task force has explored the use and impact of AI tools that present as friends, homework helpers, partners, or confidantes: tools that remember prior interactions, engage in ongoing, personal conversations, and encourage repeated engagement that can lead to unhealthy attachment. Together we grappled with a rapidly eroding boundary between general-purpose technology and specialized, purpose-built EdTech.
The distinction between a "companion" and a "tool" blurs when students use general-purpose LLMs as on-demand study partners. Therefore, EDSAFE guidance for safety and efficacy must be platform-agnostic, ensuring that wherever a student encounters AI—whether in a specialized app or a general chat interface—the interaction is grounded in safe, ethical, and educational best practices.
The SAFE AI Companions Task Force, convened by the EDSAFE AI Alliance, is focused on asking questions and quickly answering them through action-based research related to student use of AI companions on school-issued devices and in educational contexts. In this section, the task force outlines five research areas of opportunity: 1) mandated reporting and AI companions, 2) student privacy, 3) prosocial AI design and use, 4) learning sciences and effective pedagogy, and 5) AI benchmarking.