Updated: Jul 25
AI technology accompanies us through most of our daily interactions, providing utility, ease, relief, and even amusement: the way search results are presented to us, which comments appear on social media, which films streaming services recommend, which items are suggested when we shop online, and the bots offering help or support on many websites. Usually, technology only catches our attention when it doesn't work as intended. Consider a dropped call, Bluetooth headphones that fail to connect, or data you have entered suddenly disappearing or not being saved. These relatively common technological failures are an annoyance but, for the most part, do not influence other systems or our future.
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. — Stephen Hawking
With the use of AI in education still largely unregulated, undefined and unstandardised, the potential for great risk exists, one which, so long as the systems function well, may not be visible, tangible or even perceivable to the users and administrators of these tools and services. These risks include AI technologies harvesting data for a company's own gain, ransomware holding school district data hostage, or automatic profiling limiting access to resources and assessments. Cyber attacks, data security breaches, and even hacked gradebooks and report cards take on a new dimension when adaptive software with malicious intent can be aimed at student information systems and assessment data warehouses. These lines stand to blur even further as next-generation technologies become more intelligent and precise and take over tasks that were previously manual.
Such challenges require deliberate conversations with growing urgency, frameworks and guidelines both for development and implementation, and involvement from stakeholders at all levels of the education ecosystem in order to provide a safe and equitable learning environment and support educators in their teaching and administration processes.
Consider the following categories of risk:
Bias & Equity
Lack of Standardisation for EdTech Vendors
Thorough & Reasonable Regulation
Learning opportunities can be limited, inadvertently or otherwise, by inherent or intentional biases in the development and modelling of AI tools, restricting students' progression and further entrenching bias against certain groups or learning types.
EdTech vendors bringing existing AI technologies into their product roadmaps without updated standards can cause data interoperability issues, or errors in high-stakes student assessments and transcripts that impede progress. The current lack of AI standards means there are no checks an AI tool must pass to be deemed compliant for specific learning scenarios.
Current regulatory practice focuses strongly on restrictions. It is just as important to ensure that newer technologies can still innovate, and that decisions allow education to reap their benefits within regulated, safe and standardised structures. Regulation can also hold such innovation accountable for aligning with improved teaching and learning outcomes.
Students who do not receive reliable support for interacting with AI technology are not only vulnerable in myriad ways; they can also contribute to harming others or be swayed by deliberate misinformation.
Benchmarks & Standards
With increasing numbers of AI-capable educational tools being introduced into the market, it is necessary to establish guidelines, benchmarks and standards to help both consumers discern the quality and reliability of these new technologies, and regulatory bodies navigate these new challenges. It will be imperative to ensure that the education ecosystem can rely on standards which keep learning and teaching safe and equitable. To accommodate the complexity of AI in education, it is critical that benchmarks and standards encompass multiple layers and topics. The EdSAFE AI Alliance suggests the S.A.F.E. criteria model:
Mitigate Risks, Promote Opportunity.
With the SAFE Benchmark Framework and the right participation from various stakeholder groups, the Alliance is optimistic that the conversation will shift towards the gains, with the knowledge that risks and threats are being kept at bay.
Join the discussion and become part of the EdSAFE AI Alliance as we develop frameworks, benchmarks and standards to ensure the safe use of AI in education and equitable access to learning and teaching ecosystems!