Part 1: Towards a market of trustworthy AI foundation models

Peter Wells · Published in Writing by IF · Jun 8, 2023

Foundation models, such as OpenAI’s GPT-4 and Google’s LaMDA, underpin many recent AI services. These models, which include generative artificial intelligence and large language models, make services like OpenAI’s ChatGPT and Google’s Bard possible.

The UK Competition and Markets Authority (CMA) is carrying out an initial review of AI foundation models, focusing on potential competition and consumer protection considerations.

There are many possible intervention points where new measures could reduce consumer harm. In our response to the CMA we argue that the most effective measure will be to require model providers to deliver trustworthy foundation models, including supporting documentation and resources, designed to help product teams build better services.

IF’s Responsible Technology by Design framework

This will ensure a market of trustworthy foundation models that can be readily adopted by service providers and deliver benefits for people, organisations and society.

Foundation models will be used to power an increasing number of services

Some organisations, such as OpenAI, Google and Meta, will deploy their own foundation models within their products and services. However, most organisations are likely to buy and use foundation models from specialist providers. This could take several forms, such as plug-in ecosystems, open source deployments, or APIs provided by SaaS (software-as-a-service) providers.
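For most organisations, the API route will feel familiar. The sketch below shows what calling a hosted foundation model over a SaaS API might look like in practice; the endpoint, authentication scheme and field names are illustrative assumptions for this post, not any specific provider’s real interface.

```python
import requests

# Hypothetical example: calling a hosted foundation model via a SaaS API.
# The endpoint, auth scheme and field names are illustrative assumptions,
# not any specific provider's real interface.
API_URL = "https://api.example-model-provider.com/v1/generate"
API_KEY = "YOUR_API_KEY"

def generate(prompt: str) -> str:
    """Send a prompt to the hosted model and return its text output."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 200, "temperature": 0.7},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]

if __name__ == "__main__":
    print(generate("Summarise the CMA's review of AI foundation models."))
```

In this model the service provider never sees the model’s weights or training data: everything they know about the model, and every control they have over it, is mediated by what the provider chooses to expose.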

Foundation models themselves create a host of trust challenges

In the US, regulatory authorities, including the FTC, have started describing these challenges and setting out which authority is taking responsibility for tackling them.

They include the risks that can dominate attention today, such as:

  • Bias and discrimination. Foundation models can produce biased or discriminatory outputs because of their training data, the learning techniques used, or the choices of the people who develop them.
  • Privacy. Feeding personal data into a model’s learning loop can breach data protection laws, and can lead to privacy harms if the foundation model later outputs that personal data.
  • Errors. Foundation models can produce incorrect outputs, or ‘hallucinations’, because of the methods that are used to develop them.

But there are also new or increased design challenges such as:

  • Uncertainty. Because foundation models are probabilistic, they can give different answers to the same query from the same person at different times.
  • Automation bias. The human tendency to trust information from computers is a known challenge. It may be amplified by the errors and bias in foundation model outputs, and by the natural language interfaces, such as chatbots, that foundation models support.
  • Latency, or delays. Foundation models can take months to train and deploy, rather than the minutes it could take to alter a more traditional computer algorithm.
  • Skills fade. Foundation models are intended to accomplish tasks that humans can accomplish. Badly designed implementations may cause some human skills to fade while reliance on the foundation model increases. This can lead to further issues, such as errors going unnoticed or a growing need for the service to be resilient.

This technology is just one part of a service

Meanwhile, the risk embedded in how product teams implement foundation models is often underestimated or overlooked.

Foundation models are just one infrastructural component in the full stack that delivers a service. There are many other risks, and the design challenges extend through the stack.

A diagram by Mark Hurrell of full stack service design for Sarah Drummond.

At IF, we believe that reducing these issues begins with a market of trustworthy foundation models

Many service providers want to bring better services to their customers. But the design of foundation models, including supporting documentation and resources, dictates how well product teams can overcome these challenges.

By requiring providers to deliver trustworthy foundation models that help service providers bring better services to market, the CMA and other regulators can ease the path to adoption while also better protecting consumers.

A trustworthy foundation model will help service providers overcome these design challenges and implement features, such as:

  • Transparency that caters to the needs of users of AI-powered services
  • Participation that empowers users to drive improvements to AI services and models
  • Verifiability that enables users to check claims in real-time

Making trustworthy foundation models a reality will require a multi-stranded approach

While there is some good existing work on how to build trustworthy foundation models, such as model cards and data cards, there is more work to be done.
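To give a flavour of what that existing work covers, a model card is structured documentation published alongside a model. The sketch below shows the kind of information one might capture; the field names are an assumption for illustration, as real model card schemas vary between providers.

```python
# Illustrative sketch of the information a model card might document.
# Field names are assumptions for illustration; real model card schemas vary.
model_card = {
    "model_details": {
        "name": "example-foundation-model",
        "version": "1.0",
        "developed_by": "Example Provider",
    },
    "intended_use": "General-purpose text generation for consumer-facing services.",
    "out_of_scope_uses": ["medical advice", "legal advice"],
    "training_data": "Summary of data sources, collection dates and known gaps.",
    "evaluation": {"benchmarks": ["bias", "toxicity", "factual accuracy"]},
    "limitations": [
        "May produce incorrect or 'hallucinated' outputs.",
        "May reflect biases present in training data.",
    ],
    "ethical_considerations": "Outputs should be reviewed before use in high-stakes decisions.",
}
```

Documentation like this helps product teams judge whether a model is suitable for their service, but on its own it does not give people using that service the transparency, participation or verifiability described above.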

There are ways we can begin to frame and address these important challenges, such as:

  • using people’s lived experiences, attitudes and behaviours to see the real-world impact and risks of these technologies
  • prototyping possible futures in order to better understand the effects of new technology, new services, and new regulations on society

There are also many things regulators and the government can do to help foundation model providers comply with requirements, such as:

  • developing guidance through consultations with industry and civil society
  • prototyping implementations of features and enablers to help providers build the necessary capabilities into their foundation models.

In the next post we will use our Responsible Technology by Design framework to explore transparency, participation and verifiability features in more detail, and show how to design for trustworthy foundation models.

Acknowledgements: This blog post has been co-written and edited with Imogen Meborn-Hubbard and Sarah Gold, with the help of the team at IF.

Interested in working with us? Know someone else we should be speaking to? Get in touch at hello@projectsbyif.com.
