Eric Nentrup

The Public Trust in a Crucible: Responding to the US Senate hearing on AI

Updated: May 26


[Image: The EdSAFE Shield, standing for safety in AI]

There is strong consensus that the development of AI should be regulated, though opinions differ on how regulations should be applied. That’s the takeaway from this week’s testimony by OpenAI co-founder and CEO Sam Altman, IBM’s Christina Montgomery, and NYU scholar Gary Marcus before a bipartisan US Senate committee. While his motives for OpenAI’s strategy are often questioned, Altman has been asking for regulatory oversight for some time. This week he called for a government agency, safety standards, and instruments for auditing the work done by his own company and the other tech giants rushing their products to market in an attempt to avoid obsolescence. Yet such competitive moves carry implications with AI that weren’t a factor when shipping other software and hardware products built these same companies’ valuations. Haste is the enemy, and the technologists are saying so openly while continuing to expand and escalate their product roadmaps.


Altman said this could be a printing-press sort of moment, one that fundamentally changes the way we interact with each other across all domains. But where the printing press gave people the ability not only to share their own thoughts but to learn about divergent ideas, and eventually led to the definition of new rights to privacy, the AI era is being ushered in by tech companies amid concern over losing what is left of our privacy after nearly 30 years of online culture. Data privacy, security, and ownership have been persistent topics for over a decade, but the playing field has now tilted even more drastically as the general public comes to terms with how our online lives generate the data used to train the algorithms that undergird all AI models.


IBM’s Christina Montgomery offered surprising pushback, stating that she did not support a new regulatory body and instead advocating for the approach described in the EU AI Act, which suggests “different rules for different risks”. This risk-based type of regulation, however, has been strongly criticized in the EU parliamentary process because it relies on the assumption that all AI is “equal and equally comparable”, without a nuanced approach. The flaw in that assumption was reflected in comments by both Prof. Gary Marcus and Sam Altman, who asked lawmakers to remember that innovation comes in all sizes, and that requiring the same regulatory practice from organizations of every size and stage would stifle innovation and be counterproductive.


As if to highlight how dynamic and rapidly changing the world of AI is, in the same news cycle in which Mr. Altman pleaded with the US Congress for regulation, OpenAI also removed transparency around the training data used by its systems, essentially going against the strong recommendations of not only Ms. Montgomery and Prof. Marcus but also our community, the EdSAFE AI Alliance, which views transparency as one of the key demands for the use of AI in education practice. This came alongside OpenAI’s announcement that the Microsoft-backed organization is about to release an open-source version of ChatGPT.

While many lawmakers and journalists alike continue publishing demonstrations of ChatGPT as a sleight-of-hand writing and public-speaking gimmick, the technologists on the bleeding edge of AI development are begging for regulation and global perspectives. Their testimonies consistently point toward the need not only to lead with sound regulation, but also to ensure these practices are seen as global benchmarks and lead to the introduction of global standards. As Altman said, it doesn’t matter where the technology is developed; it can affect us anywhere. A key difference between legislators and engineers is that the latter became accustomed to the uncanny responses of large language models (LLMs) well before ChatGPT was released in November of 2022. The backend for that model has already been updated from GPT-3 to GPT-3.5 for free accounts, with the far more powerful GPT-4 and access to a valuable plug-in architecture available to paid users. With a native iOS app in the wild and an Android one to follow soon, the user base of 100 million is set to grow dramatically in the weeks ahead.


Tellingly, and despite the pause proposal garnering over 30,000 signatures from supporters, including Gary Marcus, Altman pushed back strongly on the relatively recent idea of a pause in AI development, testifying that OpenAI isn’t currently training GPT-5 and wouldn’t be launching anything new on the backend in the near term, while also questioning whether a pause would simply hand global competitors an edge. Such mixed signals can be confounding for experts and the general public alike. While a pause may not be ideal for the OpenAI roadmap, it appears to reflect the desire we have seen in school districts and countries globally to slow down and prepare for the careful, planned implementation of AI in schools. Pausing implementation or development can be a legitimate response if that time is used to develop comprehensive action plans, regulations, and communication. If, however, the pause merely reflects an overwhelmed company with no clear plan for using the time, the same issues will still be waiting at the other end.


Therefore, we should all expect gimmicks from the lesser-initiated to continue, with crude demonstrations of both the promise and the peril of AI in the short term. But if, as with the similar warnings about social media a scant 10 to 15 years ago, lawmakers don’t follow through with regulatory interventions, the outcomes will be even more dire and more difficult to manage for the “humans in the loop”. Altman made that point in his testimony, and it was well presented in the 2020 documentary “The Social Dilemma”. That film, along with “The Great Hack” and countless think pieces, scholarly papers, and more recent research findings, should raise the urgency of putting in place the guardrails that Altman and company are asking for.


Urging lawmakers to provide this sort of oversight is the work of EdSAFE, our fellows, members, and partner organizations. In responding to the US Senate hearing on AI, we see parallels to the top 10 policy issues we have identified as part of the EdSAFE Global Pledge; check back soon for our upcoming Policy Guidelines Framework.
