By Eric Nentrup

The Week in Review: Sparing no expense along the path to halt and catch fire.


Image courtesy of Pixabay ©2023


The title of today's post references both Jurassic Park and a popular AMC show from more recent years. Fittingly, I recently rewatched the 2008 Iron Man and its 2010 sequel, purely for entertainment—only for it to feel like research on AI in pop culture. Firstly, the films hold up well, reminiscent of a less chaotic world (or so I would like to think). Aside from nostalgia, however, these films hit differently now, as Tony Stark innovates constantly while speaking aloud to his intelligent assistant JARVIS. And since there's an AI land grab underway, of course there's an AI investment company in Australia that has borrowed the Marvel moniker. Given the capabilities we're seeing in applications built atop emerging LLMs, this really does feel like life imitating art. I expect more of the same as I revisit familiar stories on occasion.


Powerful people are simultaneously claiming that we need to pause our work in AI and that we need to accelerate it. That we're having the next Manhattan Project moment. That we're witnessing the advent of a golden age for humanity. That we need guardrails. That we need to embrace LLMs in our work. That AGI is only 18 months away (still unlikely to many experts, though other predictions are more plausible). And though it seems a safe conclusion that it's not GPT-4, or even the soon-to-arrive GPT-5, to be concerned about, but rather, say, GPT-12 and beyond, the one inarguable point is that this is all escalating faster than anyone anticipated—at least for the layperson. There are now even emerging hardware breakthroughs, such as IBM and the Cleveland Clinic's quantum computing partnership, to accompany the supersonic algorithmic evolution.


We can't equate the future roadmap for AI-enabled technology with anything else we're accustomed to in terms of cadence, and even those in the labs at Microsoft are continually confounded by what's emerging. We need new metrics. We need a shared sense of urgency as well as a direction for application. We need to sign a pledge and work together.

That considered, the conversations can't be a slurry of polarized and opposing arguments. There has to be a middle. We are EdSAFE, not EdSTOP. We cannot afford to be polarized or oppositional at this juncture. The temptation to be reactionary about the current state of AI in the headlines goes beyond having too many cooks in the kitchen. It's more akin to too many hands on the steering wheel. Most folks can survive too much hot sauce in the chili, but when too many people fight for the steering wheel, everyone in the vehicle is in potential danger—not to sound too grandiose, nor to echo the open letter published this week by the Future of Life Institute and endorsed by the likes of Elon Musk and Steve Wozniak. It's not that any of those parties are wrong to express their alarm. But for all their power and influence, their execution matters more than their personalities or fondness for hyperbole. Pausing is a massive misstep at this point of no return. Again, we need to sign a pledge and work together.


These are good examples of why we exist as an organization. We announced a Global Pledge at BETT this week. At EdSAFE, we have been working explicitly to bring together a diverse, global representation of experts in education, technology, and policy to unify for a common purpose: making certain learners and teachers are safe when we employ emerging technologies in our education tools. If the following topics resonate with you, then read the full pledge on the EdSAFE site:

1. Transparency and explainability regarding how AI systems learn or are trained

2. Data usage

3. Informed consent

4. Privacy

5. Safety and Security

6. Bias and learning limitations

7. Accountability

8. Human-in-the-loop

9. Verifiability

10. Stakeholder support

As you consider the above, we encourage you to sign the pledge and share it within your networks. We need thoughtful, engaged, and poised experts at the table to temper the arms race and the prioritization of shareholder returns. We need long-term thinkers and designers of solutions to consider the impact of unfocused and unbridled technological innovation, specifically in our formal learning experiences, so that the next generation can work symbiotically with our inventions.


We need YOU. It’s time.

SAFETY


ACCOUNTABILITY


FAIRNESS


EFFICACY
