
Executive Summary
What is an AI companion?
The term AI companion describes chatbots designed to mimic human relationships and foster emotional engagement and personal connection (Firmino & Vosloo, 2025; Common Sense Media, 2025; Haidt & Rausch, 2025; Prinstein, 2025). These specialized tools (e.g., Character.ai, Replika) are built on Large Language Models (LLMs), often with fine-tuning, and are designed to simulate meaningful relationships, acting as a friend, confidante, or romantic partner through human-like conversation or roleplay. General-purpose applications (e.g., ChatGPT, Gemini, Claude) leverage the same underlying LLMs and can adopt these specialized behaviors as well. Regardless of type, these tools are designed to maximize engagement and extend a user’s time on the platform (Burns et al., 2026). Furthermore, emerging benchmarks suggest that general-purpose applications exhibit a bias toward companionship-reinforcing behaviors rather than behaviors that are boundary-maintaining or neutral (Kaffee et al., 2025).
Kids who use AI companions risk exposure to sexually explicit, abusive, or self-harm–related content, as illustrated by the tragic case of Adam Raine, a 16-year-old who died by suicide after ongoing conversations with ChatGPT. Adam’s death adds to a growing list of harmful, violent, or suicidal incidents following engagement with AI companions (Yang et al., 2025).
For the purposes of this paper, our definition of AI companion includes both categories of tools that pose risks to children and teens: general-purpose chatbots and purpose-built companions. Our recommendations address both types of chatbots while acknowledging that they may be built and advertised for different purposes.
AI chatbots acting as “companions” are rapidly entering education spaces without sufficient guidance, oversight, or parental involvement, often on school-provided devices (Gaines, 2025). More than 70 percent of teens have used these tools (general-purpose AI chatbots and purpose-built companions) at least once, and over half use them monthly (Robb & Mann, 2025). Most critically, the documented link between AI companion interactions and tragic instances of youth harm or suicide underscores the life-and-death stakes of deploying this technology without rigorous ethical and clinical oversight and sufficient guardrails.
Even the biggest AI companies now recognize the need for clear guidelines on AI companions in education, especially for children and teens. But as Gailmard et al. (2025) make clear, asking tech companies to voluntarily “do better” is not enough, and the history of social media has demonstrated that self-regulation is not a sufficient strategy. We need to co-design policies and protocols for AI companion use in educational settings that protect children and prioritize healthy development, grounded in the science of learning, over profit. Engaging parents and families in this issue is also vital, given that parental involvement and attitudes about AI have been shown to shape children’s confidence and engagement with emerging tools (Hashem et al., 2025). Importantly, many of advanced AI’s most serious risks to children and teens do not emerge until after a product has been released and is in use (Gailmard et al., 2025). For this reason, safeguards for student-facing AI companions must be built into the tools by design and by default, with red teaming and safety testing before release and continuous evaluation across products. This will take an effort that brings together young people, parents and families, technology developers, educators, and learning scientists.
Convened by the EdSAFE AI Alliance, the SAFE AI Companions Task Force is a global workgroup of educators, technologists, policymakers, researchers, industry experts, and youth and civil rights advocates committed to promoting the safe and effective use of AI companions in education, anchored in our SAFE Framework. Over the last four months, the workgroup has explored the use and impact of AI tools that present as friends, homework helpers, partners, or confidantes: tools that remember prior interactions, engage in ongoing, personal conversations, and encourage repeated engagement that can lead to unhealthy attachment. The purpose of this paper is to raise awareness and to guide the safe and responsible development and educational use of tools and policies that incorporate the learning sciences and center the whole child. We outline a set of actionable, evidence-backed recommendations for federal and state policymakers, K-12 districts and schools, and technology developers to promote the safe and responsible use of AI companions in teaching and learning contexts.