
The Global Pledge
This pledge aims to bring diverse stakeholders together in an ongoing process to address key areas of urgent and emergent policy consideration globally.
A Global Network Supporting the Safe Use of AI in Education
In the face of global challenges such as large-scale migration, health events like the recent pandemic, drastic teacher shortages, and climate-related access issues, AI technologies will be required to help bridge gaps and support continued and equitable access to meaningful learning environments.
The deployment and implementation of AI in education must, however, be done in a way that supports necessary innovation, addresses the key issues and risks associated with the development and support of these tools, and ensures appropriate governance and adherence to defined principles and regulatory processes.
Ensuring that the use of AI in education is safe, accountable, fair, equitable, and effective will require the purposeful gathering and consideration of input from diverse stakeholders to address these concerns and find a mutual path forward.
We pledge to advocate for the following, supporting the creation and implementation of related AI and education standards that can be adhered to globally:
1. Transparency and explainability regarding how AI systems learn or are trained
A mechanism for establishing confidence in how AI systems learn and are trained. This can encompass a definition of what transparency and explainability must look like in relation to how AI systems are trained. It includes disclosure of training sources broad enough to ensure that training data is comparable, equitable, and appropriate to the intended purpose, and to identify and limit possible restrictions for specific cultural, racial, or learner-type communities. Because training and learning are persistent, transparency mechanisms must continue to apply throughout these processes and must be treated as ongoing.
2. Data usage
Transparency regarding how AI tools use, or intend to use, user data within their systems; the intended use of any data being gathered; and the specific outcomes for which the data will be used, including any external uses of that data.
3. Informed consent
Data usage agreements must be in place. Any related consent processes, particularly for minors, must be made accessible and clear. How data is incorporated within a system, and the degree to which users are further training AI systems, must be explained. There must be a commitment to disclose when users are interacting with an AI system.
4. Privacy
The inclusion of privacy by design in AI and education policy discussions and, relatedly, a degree of user influence over how and why personally identifiable data is used. This should be aligned with measures and processes that ensure a user's right to rectification and any other protections required under local governing laws.
5. Safety and security
AI systems that are reliable and do what they are intended to do without causing harm. The ability of systems to resist external threats and remain resilient to vulnerabilities that could compromise the integrity and confidentiality of personal data.
6. Bias and learning limitations
The definition of processes to identify, assess, and mitigate bias or (unintentional) limitations affecting specific user types or groups within AI education tools, with a focus on inclusiveness by design, non-discriminatory practices, fairness, and equity.
7. Accountability
A model for responsibility and accountability that covers both a core AI tool and any ecosystem of applications built upon it, and that can both identify where breaches of legislation or agreed policy have occurred and show clear paths to remediation.
8. Human-in-the-loop
The necessity of maintaining human feedback loops when AI is used in education. Humans must oversee key decisions and actions within their specific learning environments, spanning both the initial development and subsequent iterations of any AI model or component, to ensure its ethical and human-centered use within education and learning environments.
9. Verifiability
The ability to review exchanges between the AI system and users (particularly minors), including the system's responses and internal AI metrics, so that humans can assess them with confidence and initiate a review and audit process, with a path to providing schools, teachers, and parents with an oversight capability.
10. Stakeholder support
The establishment of processes through which educators, learners, and the broader education community are systematically supported in developing a deeper understanding of the opportunities and challenges of AI in learning environments, so they can be part of shaping the future. A commitment to being user-centered, collaborative, and consultative in the identification of education problems and the formulation of solutions.
The Pledge
Call to Action
Pledge signers agree to support collaborative work to summarize key issues in the use of AI in education, to contribute to the development of a roadmap for action, and to participate in urgent and emergent policy discussions that engage all relevant stakeholders, including but not limited to learners and their communities, educators, government, and industry.