Eric Nentrup
AI is here. More is coming. And we have to get used to it.
Updated: Apr 6
Republished with permission from EdTechDigest.com

The volatility of AI in big technology news is continually brimming, if not boiling over. For all that is difficult or too technical to comprehend, the need to adjust our disposition isn't. Large Language Models (LLMs) in particular have entered the mainstream since OpenAI released the ChatGPT interface, which lets anyone interact with an AI in a very human-like, conversational manner. For those who have experimented, it's akin to acquiring your first droid, a C-3PO without the golden chrome plating or Anthony Daniels's voice.
Billions of dollars have been invested to get us to this tipping point with these LLMs. Then, within a span of four months, GPT-3 (the backend for ChatGPT) gave way to GPT-4, which has already passed a simulated bar exam, while Stanford unveiled how it had been coaxing Meta's LLaMA into cheaply reproducing the capabilities of such models. They named the offspring "Alpaca 7B." The effort cost $600. Within a week, Stanford pulled the plug on Alpaca, citing safety concerns, but that presumes pulling the plug stops anything. The alpaca is already out of the barn and the barnyard.
On the same day as GPT-4's release, Sal Khan announced that Khan Academy was including GPT-4 in its platform, and the announcement signaled something unique in our field. Khan is as much of a celebrity in education as one can imagine, and his work is praiseworthy for making education accessible to anyone with interest and an internet connection. His explanation of how GPT-4 sits under the hood of Khan Academy's new personal education assistant, "Khanmigo," suggests that the team had been anticipating this release for some time, perhaps working closely with OpenAI to prepare for the moment.
Since the announcement, scrutiny has emerged, and that is exactly what we need to see in this space: asking the edtech leaders we've entrusted with our students and our edtech budgets the right questions about how they're building their products ethically and responsibly. We deserve transparency to protect that trust.
Bill Gates recently posted a memorandum declaring that "The Age of AI has begun." In it, Gates draws on his career of building technology and investing in the innovations of others to offer insights into how AI will impact all of our lives, especially in service of improving the profession and outcomes in global health and education. He envisions and prioritizes productivity gains and personalized learning with AI-enabled edtech, while acknowledging that prior investments in edtech have failed to deliver a return.
On a recent podcast, edtech expert and writer Tony Wan suggested using ChatGPT to vet your instructional design before releasing it to students, so that you can iterate and make your writing prompts and other materials more focused on critical thinking and less on gathering facts off the internet.
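That kind of vetting can even be scripted. What follows is a minimal sketch of the idea, not Wan's exact method: it assumes OpenAI's Python client and an OPENAI_API_KEY environment variable, and the model name and rubric wording are illustrative.

```python
# A hedged sketch: ask the model to critique a draft assignment prompt
# before students ever see it. Assumes the `openai` Python package
# (v1.x) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The rubric below is hypothetical, written to capture Wan's suggestion:
# surface where a prompt rewards fact-gathering over critical thinking.
RUBRIC = (
    "You are reviewing a draft writing prompt for a secondary classroom. "
    "Point out where a student could answer it by simply copying facts "
    "from the internet, and suggest revisions that require analysis, "
    "evaluation, or original argument instead."
)

def vet_prompt(draft_prompt: str) -> str:
    """Return the model's critique of a draft assignment prompt."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; substitute whatever model you use
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": draft_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(vet_prompt("Describe the causes of the French Revolution."))
```

The code matters less than the workflow it encodes: draft, critique, revise, and only then release to students.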
What's yet to be addressed is the need for new dispositions: for adapting quickly to these innovations, and for the expectation that AI will touch every metric in our field. As educators, we are purveyors of change in service to student growth, both academic and non-academic. Thus we lead by example, especially during a paradigm shift like the one we're witnessing. More on navigating paradigm shifts in a moment, but first, let's focus on the dispositional shift educators need, to make what follows more accessible.
When we automate the mundane, we liberate the people.
I first heard a variant of that phrase from the leader of a local IT company that hosted our high school interns interested in careers in computer science. It has stayed with me throughout my career working in and with edtech, yet now it holds new meaning: the distance between the keyboard-and-Google-Docs combo I'm using at this moment and an intelligent assistant that can generate 750 words of cogent text in seconds is a canyon wider than the one between the washboard and the washing machine.
By now, we've all likely encountered a story where the author or producer confesses that the content you just read or listened to was written by ChatGPT, or one of countless other examples where ChatGPT, coupled with a voice synthesizer, manufactures the illusion that you're hearing from the late Steve Jobs.
Uncanny and jarring as such examples can be, many others demonstrate real utility. Have a look at what one Redditor posted recently: the list shows how "humans in the loop" can take these sophisticated tools and yield an unfathomably diverse array of outcomes and responses. Such innovation, if done safely, should excite and empower the entire field of educators. And who knows what next week will bring. As another Redditor commented in the same post, "Eventually, the only thing capable of keeping up with advancements in AI will be AI."
In the span of four months, this class of technology has succeeded and failed in epic fashion.
This is what we should become accustomed to: the more that becomes possible, the less impressed we will be by it.
At the EdSAFE AI Alliance (EdSAFE), we yearn for safe innovation in our edtech. What drives that yearning depends on your experience. Educators are in crisis, and the pandemic only accelerated a mass exodus from the profession that was already underway. We can (and likely should) use this paradigm shift to accomplish two things:
Figure out how to pay educators commensurately with their peers in other fields (and thus attract a different caliber of talent to boot).
Employ emerging technologies such as LLMs, and AI more broadly, to reduce human burdens in the field of education and reinvest the recovered productivity in human connection.
While we're focused on making space for everyone affected, from policymakers to edtech (and mainstream) developers, we need more supports that build capacity for the frontline educators directly serving students and their families, so that their interests are protected.
To put the necessary guidelines and guardrails in place effectively, policymakers and edtech vendors need accountability from their end users. To support that disposition shift, we will continue to publish and share resources, host events, speak at conferences, and work to shape policymakers' thinking on behalf of the field. Here are some questions that educators in every role, from district and building-level administrators through classroom teachers, could be asking:
How are we investing efforts to boost AI literacy across all roles in our schools and districts?
How are we updating our STEM curriculum to include interdisciplinary study of AI?
Are we putting systems and practices in place to properly evaluate current and forthcoming edtech purchases?
Do we have a keen understanding of all the resources available to protect student data privacy—even from the developers themselves using it to train subsequent AI models?