EdSAFE Leadership
Where do we go from here?
EdSAFE's response to the OET report on AI and the Future of Teaching & Learning

At the EdSAFE AI Alliance (EdSAFE), we continue to share updates from the broader field in alignment with our mission to foster dialogue and collaboration across the diverse network of organizations and stakeholders involved in the rapidly evolving landscape of artificial intelligence and other emerging technologies.
One of our member organizations, Digital Promise, under the leadership of Jeremy Roschelle, has produced an extensive report, entitled Artificial Intelligence and the Future of Teaching and Learning, for the Office of Educational Technology (OET). Bernadette Adams, a Senior Policy Advisor at the OET, also serves on our advisory council, and discussions within the EdSAFE AI Alliance have benefited greatly from the insights and observations that she and her colleagues have shared as part of OET’s ongoing efforts to promote the safe, effective, and fair use of technology to improve learning.
EdSAFE’s leadership participated in listening sessions and contributed ideas about the scope and focus of the report. We are recognized in the document as a leading authority helping to inform and shape policy regarding the use of AI in education, both in the immediate term and as the field continues to evolve.
We are pleased to see that the report echoes the conversations our members have had about our shared values and purpose, which are grounded in principles such as safety, accountability, fairness, equity, and efficacy. Recent developments and shifts in the AI landscape have reinforced why we started this organization and why we value collaboration across our diverse network of participating organizations. We recognize that our diversity is a strength and that working together promotes a deeper understanding of what we can do to create a future where technology is leveraged for the greater good.
The OET report issues seven key recommendations, each of which reflects careful deliberation about the current state of AI and its real-world application to challenges in teaching and learning. EdSAFE supports these recommendations, and we pledge the engagement of our organization and membership in the important follow-up conversations ahead of us:
Humans-in-the-loop must be a top priority
Align AI models to a shared vision for education
Design using modern learning principles
Prioritize the important work of strengthening trust
Inform and involve educators
Focus research and development on addressing context and enhancing trust and safety
Develop education-specific guidelines and guardrails
In the paragraphs that follow, we provide some additional thoughts about each of the seven recommendations above.
Humans-in-the-loop: Machines and automated processes are better suited to consistent, repeatable tasks than to situations that are unique in their complexity or that rely on a nuanced understanding of social and emotional context. We therefore believe it is more appropriate to treat AI as a component of potentially powerful and useful tools that can augment and enhance the work of well-trained and supportive education professionals, rather than as a substitute for people. Each student presents a unique set of intellectual, physical, psychological, emotional, social, cultural, motivational, and other factors, associated with an equally complex life context. A person and the current, often temporary, context of their life are not reducible to a predetermined set of imperfectly specified variables in even the most sophisticated algorithm. Because decisions about motivation, engagement, and access to learning experiences and support are so consequential, it is important to ensure that humans remain in the decision-making loop and are able to bring additional dimensions of human understanding and judgment to bear.
Align AI models to a shared vision for education: As the report states, “Models are approximations of reality.” We must therefore keep in mind that all AI models reflect both the implicit and explicit assumptions and ideas about learners and learning held by their creators. Sometimes these ideas are stated explicitly, but often (at least as of today) they are not, which makes understanding algorithmic bias in AI tools absolutely critical in the field of education. We also believe it will become increasingly important for educators to know what particular AI models are optimized for, and how compute resources are allocated to support that optimization, as this may be one of the major sources of potential misalignment of goals or purpose. We encourage all those with an interest in leveraging technology to improve learning to read this section of the report with great care, and to engage in sustained reflection and dialogue with people outside of their organizations and areas of professional expertise. No single field, discipline, profession, or organization will have, on its own, all of the knowledge, insight, and capacity needed to design, test, and build the tools that our teachers and learners need.
Design using modern learning principles: Many learning tools and systems are described as being “informed by” insights from the learning sciences, yet even a cursory review will reveal that they are optimized for speed or efficiency in attaining a certain number of correct answers in a fixed period of time, rather than for ensuring that students achieve a deeper understanding and are more likely to recall the ideas or knowledge when they need them in the future. In addition, our increasing understanding of learner variability, and of the impact of context on learning, will require a more nuanced understanding of which tools and strategies might be most helpful for a particular student in a particular context. We need to get comfortable with the idea that we are closer to the beginning of this journey than to its middle, and that we will be more likely to reach our desired goals if we take the time now to ground our work in a shared understanding of learning principles.
Prioritize the important work of strengthening trust: In the very first EdSAFE meeting, one of our members remarked that “progress will be made at the speed of trust,” and another added that he regarded the assembled participants and organizations as “a circle of trust.” Earning and retaining public trust will require honesty, transparency, responsibility, and accountability in both the creation and the use of AI in edtech tools and systems. That, in turn, will require clear expectations for the governance and security of data, privacy and the permissible use of data, the explainability and inspectability of algorithms, and reliable processes for the rectification of errors, among other matters.
Inform and involve educators: The line between the haves and have-nots in our society runs right down the hallways and classrooms of our schools and colleges, and teachers are often the ones who can change the trajectory of a student’s engagement, motivation, and chances of success. As a friend of mine states with great passion, “if you want to help a student or community, help a teacher.” Our best teachers see and respond to students as whole people, and as another friend has said, “anyone who has achieved any type of success in life can remember the teacher or teachers who believed in them.” Teachers want and deserve a role in defining the priority challenges for which edtech innovators work to create solutions, and they deserve a seat at the table in the design process to ensure usability in real-world teaching and learning environments, which may involve constraints on access to technology or high-speed internet at home or at school, as well as consideration of the accessibility and language needs of diverse learners. To achieve positive impact and true success, educators and edtech must relate to one another more like two wings of one bird than like foxes and chickens warily eyeing one another in a henhouse. Well-designed AI tools should elevate teachers and teaching rather than relegate teachers to monitoring students’ use of technology.
Focus research and development on addressing context and enhancing trust and safety: As stated earlier, we are closer to the beginning of this evolving story than to its middle. As with any type of innovation, there will be important lessons to learn and major adjustments to make as we go forward. That means we should design in the capacity for good research from the beginning, both to enable us to focus attention on key priorities, such as the “long tail” of learner variability noted in the report, and to accelerate adoption of those tools and strategies that prove themselves in efficacy studies with representative samples of students and schools. This will require new, trust-based agreements among collaborators to ensure privacy and data security, as well as comparability, so that we can understand the relationship between efficacy, learner variability, and context. We should keep in mind that this work will be as complicated as it is important, and that it is worth doing well because it increases the likelihood of positive impacts on teachers, learners, and public trust.
Develop education-specific guidelines and guardrails: In recent months I have spoken with a number of people in the edtech field who remarked that they were surprised to learn how complicated the school, teaching, and learning space really is. That realization usually brings a smile to my face, as I used to hear similar things from college trustees about the complicated situations that required my attention as a college student affairs dean. I have come to think that because we have all had the experience of being students, we believe we know what it takes to be an effective teacher or school leader. In truth, our observations are limited to our own lived experiences, perhaps supplemented by insights shared by peers or an occasional teacher. At the same time, many of us have flown on a plane, yet I wonder how many of us would feel qualified to stick our heads into the cockpit and offer suggestions to the flight crew. Education is both complex, in having many components, and complicated, in requiring a great deal of effort to improve. It therefore needs guidelines and guardrails crafted to its unique needs and challenges, especially as we figure out how to incorporate a transformational new technology into real-time activities where we have zero tolerance for harm to the individual students entrusted to us by their families and communities. “Move fast and break things” is not an option when so much is at stake, and when a bad experience can snowball and compound into long-term negative impacts for students.
Some closing thoughts:
We want to publicly thank Digital Promise and the Office of Educational Technology for providing such a comprehensive and insightful report. It gives education leaders, lawmakers, and developers a common text to draw on, and a shared language and framework for thinking about contemporary challenges and potential new opportunities.
In recent months we all experienced a paradigm shift as large tech companies released versions of their large language models (LLMs) into the mainstream. Much as they did during the pandemic, teachers and school leaders were forced to adapt and react quickly to these releases, seeing the imminent threat to assessments suddenly made vulnerable by these emerging technologies.
With the start of a new school year just months away, superintendents, curriculum directors, and chief technology officers are already working to reshape their local policies, and we are encouraged to see these school leaders considering the potential these innovations bring to the day-to-day work of teachers at the chalkface, though much work remains to ensure that we can protect all learners and teachers from another form of data exploitation and manipulation.
In addition to the tenets recently expressed in the EdSAFE Global Pledge, we are actively developing an international policy framework with the input of our advisory council and fellowship participants to encourage the inclusion of SAFE Benchmarks like those we published in 2022.
With educators, researchers, innovators, and policy experts working together, we can continue to believe that technology innovations can have a safe and positive impact on teaching and learning. Doing so will require striking the right balance in the relationship between policy, product, and purpose so that the entire field benefits. To that end, we invite your involvement so that your voice is likewise represented and heard.