Part 2: Designing for trustworthy foundation models

Peter Wells
Writing by IF
Published Jun 8, 2023

In our last post, we argued the CMA should regulate for a market of trustworthy foundation models that are designed to help product teams build better services for consumers. But how do we design for trustworthy foundation models?

At IF, we help organisations like Google, Meta, DeepMind, Mozilla, Citizens Advice, Blue Cross Blue Shield and many more design trustworthy products and services using emerging technologies such as foundation models.

In this post, we demonstrate how our Responsible Technology by Design framework can be used to understand the core characteristics of trustworthy experiences, which in turn can define the requirements of technology infrastructure like foundation models.

Foundation models will be used to power an increasing number of services

Many service providers want to use AI to bring better services to their customers. But the upstream design of foundation models, including supporting documentation and resources, dictates how effectively and responsibly product teams can use them in their services.

While there is some good, existing work on how to do this — such as model cards and data cards — we believe there is more to do.

Designing for trust is key because the trust decisions people make every day — about which relationships to nurture or avoid — apply to services, brands and organisations too. One of the methods IF uses to help our clients is the Responsible Technology by Design framework, which outlines the core characteristics of trustworthy services.

IF’s Responsible Technology by Design framework

The following three examples — transparency, participation, and verifiability — show how these principles can be used to surface important considerations for those designing with AI in their services, as well as define criteria for those providing the foundation models themselves. The examples use fictional prototypes based on needs we have seen first-hand in work for our clients.

Example 1. Transparency needs to cater to different information needs

Today, solutions such as model cards can provide transparency information about foundation models. These are used by product teams to decide which ones to use. However, there is a lack of transparency information tailored to end-users.

Transparency needs to be provided at point of use, in a way that is appropriate to a user’s understanding and context. Current solutions, such as dedicated transparency portals, or repurposed specialist-level information, do not meet these needs.

A key prerequisite for better experiences would be for foundation model providers to themselves provide transparency information. This could allow product teams to surface accurate information at various points in their services, in ways that are tailored to a user’s context and needs.

For example, trusting users may need little or no transparency information, whereas less trusting users may need to progressively explore more detailed information.

These considerations point to the need for foundation model providers to offer accurate, flexible transparency information upstream, so that product teams can serve transparency information to users how, when and where they need it.

This (fictional) service automatically switches your mobile provider. The service caters to different transparency needs by progressively disclosing information about switches and the reasoning behind them. This enables a high level of transparency without overwhelming users.
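As a sketch of the idea above, upstream transparency information could be published as structured data that product teams query and progressively disclose at point of use. The field names, detail levels and content below are illustrative assumptions, not a real provider API.

```python
# Hypothetical transparency metadata a foundation model provider might
# publish, with progressive disclosure by detail level. All names and
# values here are assumptions for illustration only.

TRANSPARENCY_INFO = {
    "summary": "This suggestion was generated by an AI model.",
    "detail": "The model was trained on licensed tariff data up to 2023.",
    "full": (
        "Model family: example-fm-1. Training data: licensed tariff feeds "
        "and public price lists. Known limitation: prices may lag provider "
        "updates by up to 24 hours."
    ),
}

LEVELS = ["summary", "detail", "full"]

def disclose(level: str) -> list[str]:
    """Return transparency text up to and including the requested level,
    so a user can progressively explore more detailed information."""
    if level not in LEVELS:
        raise ValueError(f"unknown level: {level}")
    idx = LEVELS.index(level)
    return [TRANSPARENCY_INFO[lvl] for lvl in LEVELS[: idx + 1]]
```

A trusting user might only ever see the `summary` entry, while a less trusting user could request `detail` or `full` without leaving the service.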

Example 2. Participation needs to empower users to drive improvements

We have seen participation become an increasing need in our work — especially for people in vulnerable situations.

People need to be able to participate in decisions about how services, and underlying technologies like foundation models, are designed and used. They need to be able to do so in near real-time, as well as on longer timescales.

If someone is using a service that includes a foundation model, they need to be able to provide feedback about an error. And in order to effect real change, this feedback needs to reach the foundation model provider.

Similarly, while using the service, they need to be able to discover how to participate in longer-term and more deliberative processes, such as co-creation activities or data governance committees. This brings into play considerations as to what might incentivise this deeper form of participation.

Foundation model providers could facilitate meaningful participation by providing APIs to capture structured feedback from end-users. They could also establish more involved ways for end-users to participate in deciding how the foundation model should be designed and used.

Coordination between foundation model providers and the product teams designing services could ensure that meaningful feedback loops — ones that pass on feedback to those capable of making the change — are built directly into the service.
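To make the idea of a structured feedback loop concrete, a feedback record might carry enough context for the provider to trace the original request. This is a minimal sketch under assumed field names; no real provider API is implied.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical structured feedback record a service could forward to a
# foundation model provider's feedback API. Field names are assumptions
# chosen for illustration.

@dataclass
class FeedbackRecord:
    service_id: str     # the consumer-facing service reporting the issue
    model_version: str  # which foundation model version produced the output
    request_id: str     # lets the provider trace the original request
    category: str       # e.g. "factual_error", "harmful_output"
    description: str    # free-text detail from the end-user

def to_payload(record: FeedbackRecord) -> str:
    """Serialise feedback so it can be sent to the provider's API."""
    return json.dumps(asdict(record))
```

Including a request identifier is what closes the loop: it lets the provider connect end-user feedback to the exact model output that caused it.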

This (fictional) service is for participants in longitudinal health research. The service lets participants discover more about how data about them is used. This is based on research which showed a key incentive for ongoing participation is understanding the real-world impact of your contributions.

Example 3. Verifiability requires checking claims in real-time

Verifiability means that claims can be checked or demonstrated to be true, accurate, or justified.

Foundation model providers make various claims: about inputs (how data provided by end-users is used), about outputs (that the information produced is reasonably truthful), and about how requests are processed.

We have seen first-hand in our research that the need to verify these claims is growing in prominence. It is expressed most strongly by people whose consumer protection and other rights are most at risk.

To support this need, foundation models will need to provide users with the ability to verify outputs. This could include solutions like watermarks, explanations of how an output was produced, or logs of how data was processed, shared and used.

Providing the highest level of verifiability to end-users will require technologies like tamper-evident logs (currently used to secure website certificates through Certificate Transparency, and by services such as Sigstore). These are audited by independent third parties, and their widespread use would be transformative across sectors, especially those where governance and accountability are critical.

A fictional prototype of a hospital discharge letter, that does not overwhelm patients with information but gives them the option to see how data about them was used by accessing a log. The log provides a path to verify the claims it makes.
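The tamper-evident logs mentioned above can be illustrated with a simple hash chain, where each entry commits to everything before it. Production systems like Certificate Transparency use Merkle trees and signed tree heads; this sketch only shows the core property that altering any recorded entry breaks verification.

```python
import hashlib

# Minimal hash-chained log: each record's hash covers the previous hash
# plus the new entry, so any later tampering is detectable. This is an
# illustrative simplification, not a production design.

def append(log: list[dict], entry: str) -> None:
    """Append an entry, chaining its hash to the previous record."""
    prev = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append({"entry": entry, "hash": digest})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any altered entry breaks every later hash."""
    prev = "0" * 64
    for record in log:
        expected = hashlib.sha256((prev + record["entry"]).encode()).hexdigest()
        if record["hash"] != expected:
            return False
        prev = record["hash"]
    return True
```

In the discharge-letter scenario, each record could note how data about a patient was processed or shared, and an independent auditor could re-verify the chain without trusting the service that wrote it.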

Making trustworthy foundation models and AI-powered services a reality

There is a lot of work to be done before features like these become tangible in the products and services we use every day. But we remain optimistic that, through the sustained application of research, design and discussion, we can harness foundation models in ways that provide better outcomes for business, people and society.

Acknowledgements: This blog post has been co-written and edited with Imogen Meborn-Hubbard and Sarah Gold, with the help of the team at IF.

Interested in working with us? Know someone else we should be speaking to? Get in touch at hello@projectsbyif.com.
