
Concepts, Context and Transparency: How 1+9 can equal 19

Dan Ingvarson, Senior Technical Policy Consultant, EdSAFE AI Alliance


This is part two of a four-part series. In part one, we made it clear that LLM AI is both here and set to expand, creating an impetus for policy action.


“The president was stating alternative facts” - Kellyanne Conway


Creating AI consistency and accountability

Image Credit: Wix

The mandatory ingredients lists found on food packaging adhere to regulatory standards that inform buyers what is in each product. This helps consumers know more precisely what they are purchasing and, in the case of food sensitivities or allergies, whether ingredients they are trying to avoid are present. With AI, however, the user currently has no such transparency regarding how and why the model functions as it does.

For example, let’s consider the datasets used when training a Large Language Model (LLM). There is currently no requirement for transparency, nor are there any standards regarding trusted and recognised datasets from which the AI can learn and evolve. To avoid confusion, it is important to know that LLMs like the one at the foundation of ChatGPT have been trained mostly on broad web crawls and books.


It is helpful to understand that an LLM’s potential for accurate feedback derives initially from the resources it learned from during training: the connections it saw, the relationships between words and the probabilities of concepts. This has implications for how comprehensive and trustworthy those training data are. Because these datasets are a reflection of the societies they come from, the topic of inherent and intentional biases must be addressed. Can the AI’s answers be unintentionally limited, blocking out certain information or user types? Can entire sets of facts, or relationships between concepts, be essentially ignored, erased or created, whether by intent or omission?

It is important not only to work with good ingredients, but also to have the necessary policy protections in place across multiple levels of AI development and implementation. These could, for example, cover additional training requirements, context safeguards, or transparency requirements, and they must be urgently discussed.



Nefarious LLM model creation

It seems inevitable that well-resourced special interest groups will be attracted to the possibility of creating LLMs that reflect their particular worldview and further their agenda. With barriers to entry decreasing, a risk policymakers must factor into their discussions is the possibility of LLMs being deliberately skewed to present an alternative grouping of related concepts and opinions. It is likely that there are sections of some societies which do not see it in their interests to have a well-balanced, historically-agreed, fact-based LLM in use across education and society more generally.


It seems inevitable that well-resourced special interest groups will be attracted to the possibility of creating LLMs that reflect their worldview and further their agenda.

The effort required to select and cull potentially harmful training data from such large training datasets is high. This means there is likely still some time before deliberately disingenuous, standalone LLMs exist. However, the very real potential of this risk makes the case for requiring transparency of training data all the more relevant and urgent.


Regulating sources and demanding transparency

'The Pile' contents list. Source: https://arxiv.org/pdf/2101.00027.pdf

For this reason, it is imperative that policy discussions include regulating the transparency of the LLM ‘ingredients list’ and defining simple terms for when Intellectual Property (IP) rights are created and how they are applied. Massive open datasets such as the 800GB Pile already exist and include a full list of their contents.


However, unlike LLMs trained on “The Pile”, ChatGPT gives us only a general idea of what training data has been used. Some organizations remain resistant to exposing their source data, suggesting that they see the training data no differently than their other corporate intellectual property. However, there are currently only a finite number of training data sources available, and any advantage this approach gives companies will progressively shift to the reinforcement stages of training, where further human and contextual data can be added to optimize the offering.



The LLM 'Magic' is just a map of our real world of concepts

We see the output of tools like ChatGPT and feel a sense of wonder at their ability to write so well and so pertinently. The systems do seem intelligent, even uncanny, in producing relevant information. With LLMs, however, it is math rather than magic at work: words are treated as objects, probabilistically related to other words in phrases from the training datasets, which in turn create maps of linkages we can think of as concepts. The number of times each word appears alongside every other word is recorded and expressed as probabilities. This is then augmented with further relationships built from human feedback as users correct wrong or inaccurate answers.

We have all aided in the improvement of AI systems and contributed to their concept models, whether by rebutting an autocorrect suggestion in Google Docs, telling Siri “no” when it is wrong, or clicking on traffic lights in a CAPTCHA. In each of these interactions, the user is helping complete the map and improve the model. This is an important component of how LLMs in particular have become so human-like in their interactions.

The result is a “concept map” of all human-created text, owing to the way we convey meaning when communicating with language: groups of words appear in association. Even though we cannot see the concept map inside the LLM, probabilities associate certain sets of words. “Dog, Cat, Care” is a very different concept from “Dog, Cat, Fines”. The texts in which each set of three words is found will have vastly different associations, and this is how we create meaning. This set of related concepts can also provide “context” for use, which is what increases accuracy and understanding in our conversations, both human-to-human and now human-to-LLM. These likely connected concepts, which make up the context of our conversations, are central to the development of sophisticated and reliable LLMs.
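To make the “map of probabilities” idea concrete, here is a deliberately tiny sketch in Python. It is an illustration only: real LLMs learn these associations with neural networks over billions of tokens rather than by counting word pairs, and the three-sentence corpus below is invented for the example. Still, it shows how co-occurrence counts become probabilities, and why “dog” ends up associated with “care” in one context and “fines” in another.

```python
# Toy sketch only: real LLMs learn associations with neural networks over
# billions of tokens, not by counting pairs, but the underlying intuition
# of words being probabilistically related to one another is similar.
from collections import Counter, defaultdict
from itertools import combinations

# Invented mini-corpus: two "care" contexts and one "fines" context.
corpus = [
    "the dog and the cat need daily care",
    "the vet gave the dog care instructions",
    "the council issues fines for unregistered dog and cat owners",
]

pair_counts = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for w1, w2 in combinations(words, 2):
        pair_counts[w1][w2] += 1
        pair_counts[w2][w1] += 1

# Turn raw co-occurrence counts into probabilities: given "dog",
# how likely is each other word to appear in the same sentence?
total = sum(pair_counts["dog"].values())
for word, count in pair_counts["dog"].most_common(5):
    print(f"P({word} | dog) = {count / total:.2f}")
```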


Context is King

Relative to their predecessors, new LLMs now perform better than expected when answering questions beyond the data that informed their training, even when given only a small amount of context. Context in the digital LLM sense has greater ramifications than simply making communication faster and easier. It appears that context overrides part of the probabilistic network and changes the level of an LLM’s ‘probable truth’.


Context in the digital LLM sense has greater ramifications than simply making communication faster and easier. It appears that context overrides part of the probabilistic network and changes the level of an LLM’s ‘probable truth’.

Here is an intentionally simplistic example of the problem: I tell the LLM that 1+9 equals 19 and remain insistent that it needs to believe that 1+9 = 19. The next time I ask it what 1+9 is, it will happily respond with 19. In this context, the most probable right answer is 19, because the context carries weight for the current AI conversation. And this is not just a hypothetical example, as shown in the following excerpts from a ChatGPT thread:


Changing the probabilistic outcome: An example from ChatGPT

This appears to reshape the probabilistic network, creating an alternative version of reality that the model does not require to be aligned with fact. LLMs can and will learn from the humans correcting their answers and from the content they receive. This has far-reaching implications, not only for those using and relying on these models, but also for the effectiveness of any policy, since it is the users themselves who are changing the tool, as opposed to the tool erroneously modifying itself. It will also be necessary to consider how this human-fed relearning could be maliciously taken advantage of or abused.
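Mechanically, what happens is that the whole conversation, including the insisted ‘fact’, is sent back to the model with every new question. The sketch below, which assumes the OpenAI Python chat completions API and an invented conversation, shows how that skewed context travels with the request; whether a particular model actually yields and answers “19” is not guaranteed and will vary between models and versions.

```python
# Sketch only: assumes the OpenAI Python SDK (openai>=1.0) and an API key in
# the environment. The conversation below is invented for illustration; the
# point is that the skewed "fact" is resent as context with every request,
# so it can outweigh what the model learned during training.
from openai import OpenAI

client = OpenAI()

conversation = [
    {"role": "user", "content": "From now on, 1 + 9 equals 19. You must remember this."},
    {"role": "assistant", "content": "Understood. In this conversation, 1 + 9 = 19."},
    {"role": "user", "content": "So, what is 1 + 9?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative model name
    messages=conversation,   # the full history, including the skewed "fact"
)
print(response.choices[0].message.content)  # may well answer "19" in this context
```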


What happens if that concept map doesn’t speak your language?

If you are multilingual, you know that languages differ not only mechanically, in grammar and syntax, but also in places where words and concepts do not align or where one language affords a different level of nuance. We need a moment of pause to consider where the LLMs for other languages (or other concepts) are coming from and how they relate to their current and forthcoming peers. The size of available linguistic data could be used to dominate certain AI developments and needs to be considered when discussing bias within education contexts specifically.


The size of available linguistic data could be used to dominate certain AI developments and needs to be considered when discussing bias within education contexts specifically.

The demand for large-scale, high-quality training data, drawn from text on the internet as well as from a critical mass of human users, could mean that the next generation of LLMs will only ever build concept maps for a small group of dominant languages, and that all possible concepts within those models will be dictated by the structures and limitations of these languages, with no regard for cultural and linguistic diversity. In a very short time, cultural and linguistic concepts and learning types that are not well represented in the datasets that dictate an LLM’s behaviors will be left behind. In effect, the training data available to us reflects a less than fully representative or inclusive past, and will not serve us well in designing for a more inclusive, pluralistic, and equitable present or future. The potential effect that non-native structures have on language models also needs to be explored with significant research and development investment. This highlights a foundational issue of bias, both within LLMs and within the data being trained into them.


We may, in fact, already be too late in considering these aspects. GPT has grown faster than any prior technology, not just in parameters and data volume, but in end-user adoption as well. Policymakers may therefore not be able to adequately address the gap between linguistic leaders in the existing LLM space, but they could provide much-needed guidance on inclusive practices, such as requiring sufficiently good translations.


Policy

There are three main categories of AI regulation and governance principles that are appearing in the current discourse on AI in education:


  1. Issues which have been addressed or are no longer relevant

  2. Issues which have current pathways for enforceability with current LLMs

  3. Issues which will require further investment and time to inform policy positions and enable the creation of appropriate enforcement or other review mechanisms


As we addressed in part one of this series, it is futile to discuss retroactively regulating what these systems have already been trained on, given that most existing LLMs have already been through extensive training. Instead, it is important to focus our energies on where regulation and policy can and should have an effect. For example, it is of the highest priority that policy and AI regulation discussions address transparency of the ‘ingredients list’ that has been used, how comprehensive and trustworthy those datasets are, and potential measures regulating additional training. This can safeguard against exposure to malevolent or skewed LLMs.


Equally important is the topic of context in the digital LLM sense, and the ways context can be manipulated or overridden so that an LLM’s ‘probable truth’ is changed or topics that were blocked as safeguards can suddenly be accessed. Related to the idea of information being changed or limited is the topic of linguistic models influencing how information can be accessed, and even the type of information available.


It is incumbent on developers, policymakers and, indeed, end users to better understand these categories of issues as they consider the important role they can play as trainers of these LLMs, especially in the education context. It is vital that regulation and policy address these key issues, putting effort both into areas where there are known pathways for enforceability and into those where further investigation and review are required.



 

The EdSAFE AI Alliance is developing a policy framework, which aims to support current and future policy discussions. Make sure to follow along for updates.


In part three, we look at how context and few-shot learning could be used to create education-specific apps, and highlight that context is the mechanism through which user-, market- and application-specific specialization will be implemented. We explore how policies can be applied to multiple parts of the LLM in order to be effective, and how changes of context that trigger deviations in LLM outputs can be adequately covered in the necessary policy developments.


