Eric Nentrup
The Week in Review: School's Out, but Educators Continue Working on AI in Their Schools

As schools around the world wrap up their year, the EdSAFE Fellowship pilot also concluded this week. Our final call with the fellows was a wonderfully encouraging dialogue about how they are engaging with AI topics in their work and regions. It has been truly exciting to see what these education and technology professionals are working on, to say nothing of getting to know their hearts and minds over the past six months.
The Fellowship, as you can imagine, was designed before anyone could have anticipated the ChatGPT Effect of this past autumn. And so we close our formal time together with new colleagues and friends, wishing them the best in their endeavors and promising to continue the collaboration. EdSAFE is eager to support their initiatives as much as it is able, with all parties aware of an evolving landscape in which AI continues to permeate our attention and work.
Here are a few takeaways from the final fellowship meeting:
Leandro Folgar, President of Ceibal in Montevideo, Uruguay, brought up how superstition is a theme in the current conversation, and that it’s important to make AI more understandable and packageable, aligned with other topics. From his position leading Ceibal, he suggests an adjustment to our disposition towards AI, saying, “...even though you may not understand it, you do know how to position it in contact with various governing agencies, curriculum for AI in schools, resource development, and beyond.” He continued that what’s “…more scary than AI itself is when the humans leading the strategies don’t have rational and informed approaches.” Folgar encourages others to help government officers become more aware, offering the approach that Ceibal has taken in Uruguay as a model.
Julianne Robar, Director of Metadata & Product Interoperability at Renaissance Learning, reflected on her own work and on an enthusiasm for AI in education that isn’t reckless. She noted that the field may still be waiting to understand what compliance looks like for AI, and that guidance is needed to keep education-specific humans in the loop as a priority; she is encouraged that most people are on board with that value.
Nneka McGee, Chief Academic Officer of San Benito Consolidated Independent School District, shared how she has been looking at existing policy structures to see how they can accommodate new needs, rather than creating something from scratch for her colleagues and students. She noted that data privacy (and literacy) considerations are a critical issue for her teachers, who need to amend their edtech adoption habits to protect themselves and their students.
Jeff Billings, a veteran computer science teacher and administrator from PVSD in Arizona, told the fellowship about his district’s curricular design and professional development work around the use of AI-enabled technology. Their acceptable use policy and other strategic updates are “heavy in terms of SAFE, responsible AI.” He continued, “We like the flavor of what Microsoft has done in some of the space down to standards.”
In a spirit of generosity and collegiality, these education experts offered to share their work products as examples for those trying to revise their own local policies and curriculum maps.
All told, the fellowship was a wonderful community of practice—not just among the fellows, but for the EdSAFE leadership as well. Their contributions are invaluable to our work in policy and awareness, and a bellwether for future cohorts as they expand their ability to respond professionally to the coming paradigm shift of a post-AI world and profession as educators.
Now, with their comments and positive mindsets in mind, take the rather different tenor of this week’s most interesting headlines with a grain of salt:
‘Emergent Abilities’: When AI LLMs Learn Stuff They Shouldn’t Know – Virtualization Review
OpenAI CEO’s threat to quit EU draws lawmaker backlash | Reuters
EU, US move to quickly draft AI code of conduct, as experts warn of ‘extinction’ risk - ABC News
AI means everyone can now be a programmer, Nvidia chief says | Reuters
Nvidia unveils new kind of Ethernet for AI, Grace Hopper ‘Superchip’ in full production | ZDNET
The ‘Don’t Look Up’ Thinking That Could Doom Us With AI | Time
ChatGPT Is Cutting Non-English Languages Out of the AI Revolution | WIRED
11 NLP Use Cases: Putting the Language Comprehension Tech to Work - Grit Daily News
AI Poses ‘Risk of Extinction’ on Par With Pandemics and Nuclear War, Tech Executives Warn - WSJ
ChatGPT takes center stage as students ditch tutors in favor of AI-powered learning | VentureBeat
AI chatbots are coming to web browsers in a big way - The Verge
The same artificial intelligence that could revolutionize medicine can also be weaponized