Lauren Lopez
The AI Powerplay: The trillion dollar race to be the operating system of the future
Dan Ingvarson, Senior Technical Policy Consultant
In parts one and two we explored why LLMs will become commonplace, how their training data is assembled, how LLMs work, and the contexts within which they function. From this, we proposed a conceptual shift from policy aimed at a single AI to policy points applied throughout entire AI ecosystems.

“There are two options: adapt or die” - Andy Grove, Intel.
The trillion dollar ecosystem
The current Large Language Model (LLM) landscape is only the first step on the path towards an ecosystem approach, which will dominate the applications we use and the way that they are built. Leading LLMs are already starting the race to essentially become a new operating system: the platform upon which multiple applications and products are built. There will be fierce competition to own this future platform and, as their commercialisation occurs, we predict that LLM providers will follow the money and try to build their own app stores or 'ecosystems' to keep new apps as customers and create a new type of 'walled garden'.
These new operating system platforms will wield immense power and directly influence what information is available to all the apps that are built on or connected to them. The way that the LLMs are trained, the bias built into them structurally, conceptually or linguistically, and the potential influence from individuals wanting to design their own realities then become critical issues for policy.
Building on this new ecosystem:
But what does creating an application on this new platform look like? We can make some educated guesses based on past approaches. There will be groups of software which work together, linking LLMs into tools currently in use. This will likely involve three components, each of which will have policy implications:
A Core LLM
The Specialisation Context (fine tuning)
New Applications
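The three components above can be sketched as layers of software. The sketch below is purely illustrative: the class names (`CoreLLM`, `SpecialisationContext`, `Application`) are assumptions for this article, not a real API, and the core layer is stubbed where a real deployment would call a hosted model endpoint.

```python
class CoreLLM:
    """Layer 1: the probabilistic base model (e.g. a GPT-style LLM)."""

    def complete(self, prompt: str) -> str:
        # Stubbed for illustration; a real implementation would query the model.
        return f"[completion for: {prompt}]"


class SpecialisationContext:
    """Layer 2: the specialisation context that adapts the core model to a domain."""

    def __init__(self, core: CoreLLM, domain_instructions: str):
        self.core = core
        self.domain_instructions = domain_instructions

    def ask(self, user_input: str) -> str:
        # Domain instructions are prepended, specialising the general core model.
        return self.core.complete(f"{self.domain_instructions}\n{user_input}")


class Application:
    """Layer 3: an end-user app built on top of the specialised context."""

    def __init__(self, context: SpecialisationContext):
        self.context = context

    def handle(self, request: str) -> str:
        return self.context.ask(request)


# A hypothetical education app reusing the same core model via its own context.
maths_tutor = Application(
    SpecialisationContext(CoreLLM(), "You are a secondary-school maths tutor.")
)
print(maths_tutor.handle("Explain fractions."))
```

The point of the layering is that many different applications can share one core model, each adding only a thin specialisation layer, which is why policy needs to apply at all three points rather than only at the base model.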

The above is an oversimplified example and will not be accurate in all circumstances. ChatGPT, for example, is both the context and an application, and apps can reuse the core platform without an additional context.
In this example, the core platform is the probabilistic Large Language Model (GPT-3), the ecosystem which makes the LLM usable within a context is ChatGPT, and the applications which employ either of these are using this new ecosystem or operating system. There are already over 300 applications using GPT-3. A specific education app, Math GPT, is an example of a tool which extends ChatGPT. In order to effectively regulate these new ecosystems, it will be necessary to have policy enforcement 'flowing through' the systems and thus being applied at all three points of this model.
A survey of 1,000 companies using ChatGPT found that 48% were already seeing savings of $50,000 since they started using it. In a major update (and example of a specific context), it is also now possible to query an LLM which then "decides" to undertake actions with other systems. This is shown in Adept AI, a solution that enables the LLM to control your computer and browser to update things like Salesforce, Google tools, etc. The potential impact on work when standard computer applications can be instructed by an LLM, the projected savings, and the multitude of new uses being built on top of LLMs make this ecosystem extremely valuable, to the tune of trillions of dollars. And the race is on to become the leader in this space.
The LLM AI will be the platform upon which others build their tools
New generative AI backends (or LLMs) should now be viewed as platforms: places where companies can create new software applications and where users get their work done. These platforms will initially have their greatest impact on knowledge and services-based workers, providing the function that factories did in the industrial revolution. They portend a huge streamlining of routine work within our services and information-based systems, with automation of tasks that previously required high levels of expertise and time.
New AI services will be deployed in everyday applications, and LLMs will sit behind almost every tool used in the services and professional industries. For example, an admin form or issue-tracking system currently needs a human to review the contents of a message and then sort and categorise it for further action. LLMs will be able to streamline this by being the first technology that can interpret what is being said, run an automatic categorisation, and update the admin system via an API with no human interaction necessary.
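The triage workflow just described can be sketched in a few lines. This is a hedged illustration, not a production design: the keyword classifier stands in for the LLM call, and `update_ticket` is a hypothetical admin-system endpoint that here only builds the API payload rather than sending it.

```python
import json

# Stand-in categories an LLM might be asked to choose between.
CATEGORIES = {
    "enrolment": ["enrol", "admission"],
    "facilities": ["leak", "broken", "repair"],
    "payroll": ["salary", "payslip"],
}


def categorise(message: str) -> str:
    """Stand-in for an LLM call that returns a single category label."""
    text = message.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general"


def update_ticket(ticket_id: int, category: str) -> str:
    """Build the JSON payload the admin system's API would receive (not sent)."""
    return json.dumps({"ticket": ticket_id, "category": category})


# Interpret an incoming message, then update the admin system: no human needed.
payload = update_ticket(101, categorise("The tap in room 12 has a leak"))
print(payload)  # {"ticket": 101, "category": "facilities"}
```

In a real deployment the `categorise` step would be a call to a hosted LLM, which is precisely what makes this the first technology able to handle the interpretation step that previously required a person.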
What that means is that fewer people will be required to do certain tasks that are currently normal within education administration. Bill Gates has voiced deep concern about the destruction of white-collar jobs. The profound changes to the educational ecosystem, including to edtech companies, mean we need to start planning now to ensure their use is equitable and safe. Some fundamental decisions will need to be made within the education industry determining whether our future is one where we are more efficient or more effective. We need to proactively build policy that drives development towards more effective education, because inaction will create a market-driven commodification.
Few-Shot Learning:
The concept of few-shot learning will further drastically reduce the time and cost of training and creating new AI services. By 'training' the AI system with extremely small concept sets related to very specific contexts to extend an existing LLM, new and varied services with their own characteristics and customisations can spring up quickly from the same LLM base.
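At its simplest, few-shot learning can mean nothing more than placing a handful of labelled examples in front of the model's input, specialising a general LLM for a narrow task with no retraining at all. The sketch below shows one common way such a prompt is assembled; the example messages, labels, and format are assumptions for illustration, not any particular vendor's API.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate labelled examples, then the new input for the LLM to label."""
    shots = "\n".join(
        f"Message: {text}\nCategory: {label}" for text, label in examples
    )
    # The trailing "Category:" invites the model to complete the label.
    return f"{shots}\nMessage: {query}\nCategory:"


# A tiny, context-specific concept set extends the general-purpose LLM.
examples = [
    ("When is the next staff payday?", "payroll"),
    ("The projector in room 4 is broken.", "facilities"),
]
prompt = build_few_shot_prompt(examples, "How do I enrol my daughter?")
print(prompt)
```

Because the specialisation lives entirely in the prompt, many distinct services can be spun up from the same base model simply by swapping in a different example set.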
First Major Impacts in Education
The current media focus on ways students can use or abuse the automation of tools like ChatGPT in their education environments, for example to write essays, is only one part of the story. There are many quality discussions underway around the use of ChatGPT and the access considerations for its use in schools, with even the IB opening the door to its use as a source for assessments, which can be important and useful.

Student-facing tools, however, don't actually make up the largest segment of systems used and needed in education environments. In fact, some of the largest effects of AI in the education ecosystem will be felt on the workforce and administration sides of education. The true impact of AI tools in education will be in the so-called knowledge workforce, where developments are more in line with efficiencies seen in other sectors. This knowledge workforce comprises central offices with administrative or HR functions, building facilities management, internal communications, briefings, meetings, project work and related content creation. All of these are more similar to general industry-wide functions and, therefore, more likely to have specific contexts created sooner than complex curricula with year/subject/location-based educational requirements.
Streamlining some of the more menial administrative tasks could result in significant resource savings across education, which in turn could see a larger investment into quality education resources and systems than is currently possible.
A three-phased approach for implementation:
Education authorities need policies that separate out issues between students and the different roles in their workforces, with appropriate product selections, to adequately address the enormous challenge LLMs present. To do this, we suggest a three-phased approach.
In the first phase, risk management systems should immediately be designed. This can include pausing the use of these technologies within schools or permitting their use only with targeted oversight.
In phase two, policies are set and tested, and new practices to harness LLMs for our purpose (rather than accept what is given to us) are developed. In this phase, progressive adoption and LLM improvements can be deliberately deployed as they mature and our policies catch up, providing there are enabling conditions for testing.
Phase three is where the innovations enabled are weighed against the technical options for implementation and protection. Long-term controls, which can enable jurisdictional decision making, will include the development of sovereign LLMs under jurisdictional control as well as new market-driven applications.

It is important to ensure real-world investigation of impact, with deliberate trialling of LLMs in certain areas, to help address existing and long-term systemic issues. Testing of approaches to implementation and enforcement of measures which keep their use safe, accountable and fair should include procurement as well as regulatory and policy guidelines. The results of phase two should assist both market-based entrants and education authority policy to balance the benefits of progress with risks for both the education workforce and learners.
Education authorities have the responsibility to shape the future and not just respond to what the market provides.
There are good reasons why we have a curriculum and have protective fences or security systems in schools: this is an area where, both politically and practically, we want to progress with careful consideration and an understanding of the measures we are introducing. The responsibility of education authorities also encompasses new technologies. Although relatively new in the education space, LLMs are here, are real, are not going away, and are being adopted faster than any previous technology. They have already been responsible for job changes in over 50% of 1,000 businesses surveyed. As pointed out in part one, we must act now to answer key questions.
Transparency, flow-through policy and meaningful implementation
LLMs are ushering in the next era of the operating system, and the battle to become the leading platform will be fast and have major implications for access to information. This race to own the next stage of the education, knowledge and services ecosystem will have major implications for policy and will require a 'flow-through' policy approach. It will be imperative to focus initial policy discussions on where the new tools will have the most impact within the education landscape: on knowledge and service workers, and on processes where content is generated and systems can be streamlined.
Policy needs to be framed with a longer-term vision, linked to what society wants from the future rather than what LLMs can currently do, as we are only at the start and they will evolve. By taking a three-phased approach to the implementation of AI in education, we have the chance to deploy and react in meaningful ways that allow for the safe and equitable use of newer technologies. This also gives us the time to understand the implications of the new operating system war and to demand the transparency necessary to limit bias and restrictions on access to information.
The EdSAFE AI Alliance is developing a policy framework, which aims to support current and future policy discussions. Make sure to follow along for updates.