Earn trust by proving what you say

Sarah Gold
Published in Writing by IF · 5 min read · Jul 3, 2024

We believe that making organisations and services prove they can be trusted is the best way to build better futures, particularly as the world becomes more complex, from AI to security breaches.

Apple thinks so too. In its latest marketing and product development, Apple links consumer trust to its ability to retain market share in the AI race…

Provable claims are going mainstream

At IF we believe provable systems will become mainstream, because:

  • People increasingly expect this kind of verifiability
  • Regulation is taking us there
  • It’s a brand differentiator

We’ve been helping teams understand, prototype and build provable systems since our inception roughly eight years ago.

One of the challenges, however, is that this stuff gets very technical very quickly. And there have been few examples in the public domain that we could point to.

Apple’s provable privacy promise shines a light on provable systems

But in early June, when Apple announced their “Apple Intelligence”, they also shared their “verifiable privacy promise”.

The software that enables their privacy promise is an application of the kind of provable system we like to work on.

Photograph of the Apple Intelligence launch, showing a screen that says “verifiable privacy promise”
Apple Launches ‘Private Cloud Compute’.

So I thought this was the perfect opportunity to shed some light on the way we like to practically demonstrate trustworthiness in products and services…

Provable systems complement regular audits

Let me quickly set some context.

The most established approach to creating a provable system is auditing: a trusted third-party organisation is given access to the underlying technology and organisational layers of a service. For example, under TikTok’s Project Clover programme, a third party audits TikTok to confirm that data derived in the EU does not go to China.

This kind of auditing is necessary, but insufficient, especially for more complex services where people’s expectations, or the technology, change.

Spot audits only tell you what’s happening at one moment in time. That’s not ideal when you are working with very sensitive or complex data, or with a system that changes over time.

Provable systems create real-time, tamper-proof logs

There is a complementary, emerging approach to audit, where teams build support for it directly into the systems being audited.

Think of it as software for automated audit that works in real time, continually monitoring the performance of a system. It helps create transparency that is verifiable.

The software creates tamper-proof logs that can be used to independently verify that the system is operating as it should. You can design these logs to record things like data usage or access claims, depending on your user needs and organisational context.
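
To make that concrete, here is a minimal sketch of one common construction for tamper-proof logging: a hash chain, where each entry commits to the hash of the entry before it. This is an illustration only, not Apple’s design or any production system we’ve built; the event and actor fields are hypothetical placeholders.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    # Hash a canonical JSON encoding, so the same entry always
    # produces the same digest.
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

def append(log: list, event: str, actor: str) -> None:
    # Each entry commits to the hash of the previous one, so changing
    # or removing any earlier record breaks every later link.
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "event": event,   # e.g. "record_accessed" (hypothetical event name)
        "actor": actor,   # e.g. a service or staff identifier
        "prev_hash": prev,
    }
    entry["hash"] = entry_hash(entry)
    log.append(entry)
```

Because every entry includes the previous entry’s hash, editing or deleting any record invalidates everything that comes after it. Production systems typically use Merkle trees rather than a flat chain, so individual entries can be verified efficiently without reading the whole log.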

Over the years we’ve worked with Google and others on several projects in this space, contributing to the product development and strategy for this kind of software.

Diagram describing how hashes within log entries are automatically checked, and that someone is sent a notification if the log fails validation.
Software automatically validates if entries in the log have been changed or removed

One of the many useful things about these logs is that, combined with service design, they can alert people or systems when something does not look right.
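
Continuing the hash-chain sketch above, validation can be wired straight into that kind of alerting. The check below re-derives every hash and calls a notify function, a hypothetical stand-in for email, a dashboard or an incident queue, the moment the chain breaks:

```python
def verify(log: list, notify) -> bool:
    # Walk the chain, re-deriving every hash and checking each link.
    prev = "0" * 64
    for i, entry in enumerate(log):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(body):
            # Tampering (or corruption) detected: alert a person or system.
            notify(f"Log failed validation at entry {i}")
            return False
        prev = entry["hash"]
    return True

# Reusing append() and entry_hash() from the sketch above:
audit_log = []
append(audit_log, "record_accessed", "clinician-42")
append(audit_log, "record_accessed", "results-service")
audit_log[0]["actor"] = "someone-else"   # simulate tampering
verify(audit_log, notify=print)          # -> Log failed validation at entry 0
```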

“In a world where fines are as big as they are for getting stuff wrong, where the organisational reputational risk from breaches is even further beyond that, having something [organisations] can rely on that will tell [them] almost as soon as it happens what you don’t want to happen, it will be hugely valuable.” — Data & Analytics specialist and Assurance Leader, Auditing Firm

At IF we like to use those logs so that services better meet user needs

We used provable claims in our work with DeepMind Health to rebuild trust in their healthcare service. We collaborated with the engineering team to decide what information the logs should record, and how that information could be used within the user experience to earn trust.

We did this through:

  • Design research to uncover the user needs for trust
  • Interviews with staff to understand the organisational needs of DeepMind, the NHS and the regulator
  • Prototyping to test our assumptions about the technology itself
Photograph of a series of pink post it notes with user needs for transparency written on them.
A selection of user needs for transparency information.

We used the logs as a new material to design with. We revealed information from the logs at the point of need, within the experience, without getting in the way of clinicians or patients.

A photograph of a fictional service on a mobile phone. The service is for Eastbridge Hospital, and it shows information that is logged as part of a tamper proof history.
An example prototype that we used to learn where and how verifiable claims could best meet user needs.

Take a look at this film that shows how transparency information from a log forms part of the user experience.

Our work with DeepMind Health enabled them to launch their service in 5 hospital sites in the UK. The service helped clinicians diagnose and treat patients more effectively, saving a life in its very first week.

Trust is earned through reputation, communities and experts too

Designing with transparency logs means that information can be surfaced in different ways to meet the needs of many different user groups, from end users and professionals to regulators and civil society.

A mockup of a service for information governance officers that notifies them if log entries change.
A fictional service that sends a message to an information governance officer whenever a check is performed, to let them know the results instantly.

And the needs of those users can be met by a range or combination of providers, because parts of the logs are publicly available. That helps to increase overall trustworthiness because multiple providers can corroborate the status of a system at any one time. A consequence of this is that auditing becomes cheaper to run.
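
As a sketch of what that corroboration might look like with the hash chain above: because the head of the log (the latest entry’s hash) summarises everything before it, independent observers only need to compare the heads they each see. In deployed transparency systems such as Certificate Transparency, heads are signed and exchanged between observers; this toy check shows just the comparison itself:

```python
def heads_agree(observed_heads: list) -> bool:
    # If any two observers report different heads, the log operator may
    # be showing different versions of the log to different audiences
    # (a "split view"), which is itself a signal worth investigating.
    return len(set(observed_heads)) == 1

# e.g. heads independently observed by a regulator, a civil-society
# group and the service provider itself:
heads = [audit_log[-1]["hash"] for _ in range(3)]
print(heads_agree(heads))  # True while everyone sees the same log
```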

People need support

The other benefit is that it shifts the burden of understanding from individuals to the wider ecosystem. That means more people have more opportunities to spot things that might not look quite right, ultimately helping to safeguard the system.

An active ecosystem of users is a great defence against misuse.

Consider making your claims provable

If you are running a service that uses sensitive data, is the equivalent of national infrastructure, or relies on advanced technology, consider making it provable.

Meet enhanced accountability standards, while addressing regulatory and public concerns about data use.

Book a call with us to talk through how you can design provable claims in ways that help earn and maintain trust.
