Future-Proofing Human Flourishing Task Force
The proliferation of generative artificial intelligence marks a seismic shift in the human experience. We recognize AI as an arrival technology, defined by its ability to disrupt existing systems and infrastructure. It does not simply ‘pass through’ the economy as a tool but instead ‘arrives’ as a permanent, foundational layer of human infrastructure that shapes the way we work, learn, live, and even relate to each other.
While the public conversation often focuses on administrative efficiency and automation, the Future-Proofing Human Flourishing Task Force is dedicated to a more profound North Star: educational AI must be intentionally designed to promote human flourishing, the optimal state of physical, mental, social, and spiritual health in which an individual possesses the agency to lead a purposeful, engaged, and self-directed life. As AI becomes more ubiquitous in our lives, it has never been more imperative to ensure that these tools support learning and development rather than cognitive offloading and atrophy.
Our goal is to develop a practical pedagogical yardstick (benchmark) for evaluating generative AI tools in education for their alignment with the Learning Sciences and the Science of Learning and Development (SoLD): evidence-based practices that draw on neuroscience, psychology, and related fields, grounded in the understanding that brain development is nurtured by relationships, environments, and lived experiences, so that all students can grow and thrive.


Future-Proofing Flourishing: The Convergence of AI, Education, and Industry Convening
In February 2026, the EDSAFE AI Alliance and the Mary Lou Fulton College for Teaching and Learning Innovation co-organized a high-impact convening of leaders at Arizona State University in Tempe, AZ. This gathering broke down traditional silos by bringing together 70 leading researchers, advocates, practitioners, and innovators across:
- K-12 Education
- Learning Science Researchers
- Workforce Development
- National Security
The insights from this convening form the foundation for the Task Force’s work, contributing to a White Paper, Engagement Strategy, and the development of the Learning Sciences Benchmark.
The Learning Sciences Benchmark
The centerpiece of this work is the development of a pedagogical yardstick to evaluate generative AI tools for alignment with the Learning Sciences and the Science of Learning and Development (SoLD). Unlike benchmarks currently used by industry and capital markets that define model success by accuracy, efficiency, and task replacement, this benchmark will evaluate tools based on their impact on human cognition, learning, and agency. The benchmark will be built according to three core pillars:
- Cognitive Foundations: Assessing the tool’s impact on human agency, executive functions, and social cognition to ensure it strengthens human relationships and cognitive development.
- Pedagogical Design: Evaluating the engineering of productive struggle and the tool’s ability to foster human-to-human interactions and workforce preparation.
- Measurement and Accountability: Establishing automatic detection of learning moments and longitudinal tracking to verify that the tools deliver on the promise of deep learning.
Key Deliverables
The Task Force is committed to creating a suite of resources designed to move the sector from using AI in education for task replacement to supporting cognitive development and learning:
- White Paper: Building on the Blueprint for Action: Comprehensive AI Literacy for All, this document will provide a clear framework for anchoring AI design, development, and efficacy evaluations in the Learning Sciences, with concrete recommendations for the education, workforce development, and national security sectors.
- Engagement Strategy: A multi-channel plan to drive adoption among state superintendents, procurement officers, federal agencies, industry leaders, and global partners.
- Learning Sciences Benchmark: An independent, third-party, open-source testing framework that evaluates generative AI models not on their factual knowledge but on their pedagogical capability. This initiative seeks to operationalize the Learning Sciences into automated code, ensuring AI tools in education are effective for learners, not just factually correct.
Advisory Board
To support this work, we have assembled an Advisory Board of international experts across K-12 education, learning sciences research, workforce development, and national security. The Board ensures that our work remains rooted in the learning sciences and equitably represents each sector’s input, playing a critical role in shaping the White Paper, Engagement Strategy, and Learning Sciences Benchmark.
The Advisory Board includes:
- Adam Ingle, LEGO Group
- Anchal Nagdev, PhD Student
- Anneke Buffone, CLARA
- Ben Pring
- Bethany Little, EducationCounsel
- Brent Parton, CareerWise
- Bridget Burns, University Innovation Alliance
- Brittany Stich
- Carole Basile, Mary Lou Fulton College for Teaching and Learning Innovation
- Chonghao Fu, Leading Educators
- Cristine Legare, University of Texas at Austin
- Damion Mannings, MIT
- Deborah Quazzo, GSV Ventures
- Devansh Tank, PhD Student
- Ellen Dollarhide McCoy, Ronald Reagan Presidential Foundation and Institute
- Emily Marshall, Pima Community College
- Erin Schulte, Arizona State University
- Emma Nothmann, Bridgespan
- Hari Subramonyam, Stanford University
- Horatio Blackman, Education Reform Now
- Janel White Taylor, Mary Lou Fulton College for Teaching and Learning Innovation
- Janice Mak, Mary Lou Fulton College for Teaching and Learning Innovation
- Joaquin Tomayo, Mary Lou Fulton College for Teaching and Learning Innovation
- Kaitlin Tiches, Digital Wellness Lab
- Karen Pittman, KP Catalysts
- Merita Irby, KP Catalysts
- Kelly Shiohira, App Inventor Foundation
- Kim Smith, LearnerStudio
- Kofi Wood, PhD Student
- Leigh Ann Delyser, SRI
- Lindsey McCaleb, Arizona State University
- Lisa Dawley, Jacobs Institute for Innovation in Education
- Lynne E. Parker, University of Tennessee, Knoxville
- Mary Wells, Bellwether
- Mathilde Cerioli, everyone.AI
- Matt Gee, Gates Foundation
- Matt Rascoff, Stanford University
- Michael Strambler, Yale School of Medicine
- Michelle Watt, Northern Arizona University
- Miriam Schneider, Google DeepMind
- Nishant Shah
- Paul Lekas, SIIA
- Peggy Yin, Stanford Institute for Human-Centered Artificial Intelligence
- Philip Steigman, McCourt School of Public Policy
- Punya Mishra, Mary Lou Fulton College for Teaching and Learning Innovation
- Rose Luckin, University College London
- Roy Pea, Stanford University
- Ryan Baker, Adelaide University
- Stephanie Wu, City Year
- Szymon Machajewski, University of Illinois at Chicago
- Tammy Wincup, Securly
- Tara Menghini, Chandler Unified School District
