Earn trust by proving what you say
We believe that making organisations and services prove they can be trusted is the best way to build better futures, particularly as the world becomes more complex, from AI to security breaches.
Apple thinks so too. In its latest marketing and product development, Apple links consumer trust with its ability to retain market share in the AI race…
Provable claims are going mainstream
At IF we believe provable systems will become mainstream, because:
- People increasingly expect this kind of verifiability
- Regulation is taking us there
- It’s a brand differentiator
Helping teams understand, prototype and build provable systems is something that we’ve been doing since our inception ~8 years ago.
One of the challenges, however, is that this stuff gets very technical very quickly. And there have been few examples in the public domain that we could point to.
Apple’s provable privacy promise shines a light on provable systems
But in early June, when Apple announced their “Apple Intelligence”, they also shared their “verifiable privacy promise”.
The software that enables their privacy promise is an application of the kind of provable system we like to work on.
So I thought this was the perfect opportunity to shed some light on the way we like to practically demonstrate trustworthiness in products and services…
Provable systems complement regular audits
Let me quickly set some context.
The most established approach to creating a provable system is auditing. A trusted third party organisation is given access to the underlying technology and organisational layers of a service. For example, for TikTok’s Project Clover programme a third party is auditing TikTok to confirm that data derived in the EU does not go to China.
This kind of auditing is necessary, but insufficient, especially for more complex services where people’s expectations, or the technology, change.
Spot audits only tell you what’s happening at one moment in time. That’s not ideal when you are working with very sensitive or complex data, or with a system that changes over time.
Provable systems create real-time, tamper-proof logs
There is a complementary, emerging approach to auditing, where teams build support for it directly into the systems being audited.
Think of it like software for automated audit that works in real time, to continually monitor the performance of a system. It helps create transparency that is verifiable.
The software creates tamper-proof logs that can be used to independently verify that the system is operating as it should. You can design these logs to record things like data usage or access claims, depending on your user needs and organisational context.
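To make the idea of a tamper-proof log concrete, here is a minimal sketch in Python of an append-only log where each entry commits to the hash of the previous one, so any later edit to an earlier record is detectable. This is an illustration of the principle only, not IF’s or Apple’s implementation; production transparency logs typically use Merkle trees (as in Certificate Transparency) rather than a simple hash chain, and the record fields shown are hypothetical.

```python
import hashlib
import json


class TamperEvidentLog:
    """A minimal append-only log: each entry's hash covers the previous
    entry's hash, so modifying any past record breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain from the start; any mismatch means tampering."""
        prev_hash = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if e["prev"] != prev_hash or e["hash"] != expected:
                return False
            prev_hash = e["hash"]
        return True


# Hypothetical records of the kind such a log might hold.
log = TamperEvidentLog()
log.append({"event": "data_access", "actor": "clinician_42", "purpose": "diagnosis"})
log.append({"event": "data_export", "actor": "service_a", "purpose": "audit"})
assert log.verify()

# Quietly rewriting an earlier record is now detectable.
log.entries[0]["record"]["actor"] = "someone_else"
assert not log.verify()
```

The key design property is that verification needs no trust in the log’s operator: anyone holding a copy of the entries can recompute the chain independently.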
Over the years we’ve worked with Google and others on several projects covering product development and strategy for this kind of software.
One useful property of these logs is that, combined with service design, they can alert people or systems when something does not look right.
“In a world where fines are as big as they are for getting stuff wrong, where the organisational reputational risk from breaches is even further beyond that, having something [organisations] can rely on that will tell [them] almost as soon as it happens what you don’t want to happen, it will be hugely valuable.” — Data & Analytics specialist in Assurance Leader, Auditing Firm
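A simple way to picture that kind of alerting is a rule that scans log entries against a declared policy. The sketch below is hypothetical (the field names and approved-purpose list are invented for illustration), but it shows the shape of the idea: the log records what happened, and a small monitor flags anything outside what the organisation said it would do.

```python
# Hypothetical policy: the service has publicly declared these purposes.
APPROVED_PURPOSES = {"diagnosis", "treatment", "audit"}


def flag_unexpected(entries):
    """Return log entries whose recorded purpose is not on the approved list."""
    return [e for e in entries if e.get("purpose") not in APPROVED_PURPOSES]


entries = [
    {"actor": "clinician_42", "purpose": "diagnosis"},
    {"actor": "analytics_job", "purpose": "marketing"},  # not declared: flag it
]
alerts = flag_unexpected(entries)
```

In a real service these alerts would feed a human review process, so a breach of the declared policy is surfaced almost as soon as it happens rather than at the next audit.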
At IF we like to use those logs so that services better meet user needs
We used provable claims in our work with DeepMind Health to rebuild trust in their healthcare service. We collaborated with the engineering team to decide what information the logs should record, and how that information could be used within the user experience to earn trust.
We did this through:
- Design research to uncover the user needs for trust
- Interviews with staff to understand the organisational needs of DeepMind, the NHS and the regulator
- Prototyping to test our assumptions about the technology itself
We used the logs as a new material to design with. We revealed information from the logs at point of need, within the experience, without getting in the way of clinicians or patients.
Take a look at this film that shows how transparency information from a log forms part of the user experience.
Our work with DeepMind Health enabled them to launch their service in 5 hospital sites in the UK. The service helped clinicians diagnose and treat patients more effectively, saving a life in its very first week.
Trust is earned through reputation, communities and experts too
Designing with transparency logs means that information can be surfaced in different ways to meet the needs of many different user groups, from end users and professionals to regulators and civil society.
And the needs of those users can be met by a range or combination of providers, because parts of the logs are publicly available. That helps to increase overall trustworthiness because multiple providers can corroborate the status of a system at any one time. A consequence of this is that auditing becomes cheaper to run.
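Corroboration across providers can be sketched very simply. In transparency-log systems, independent observers each record the log’s current head hash; if everyone sees the same value, the log has not been forked or quietly rewritten for one audience. The observer names below are hypothetical, and real systems (such as Certificate Transparency) back this check with cryptographic consistency proofs rather than a bare comparison.

```python
def corroborate(observed_heads):
    """True if every independent observer reports the same log head hash,
    which is evidence the log has not been forked or rewritten."""
    return len(set(observed_heads)) == 1


# Three hypothetical observers (a regulator, a civil-society group and
# the service itself) each fetch the log's current head hash.
consistent = corroborate(["abc123", "abc123", "abc123"])
forked = corroborate(["abc123", "abc123", "def456"])
```

Because the check is cheap and anyone can run it, adding more observers strengthens the guarantee without adding much cost, which is part of why auditing becomes cheaper to run.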
People need support
The other benefit is that it shifts the burden of understanding from individuals, to the wider ecosystem. That means more people have more opportunities to spot things that might not look quite right, ultimately helping to safeguard the system.
An active ecosystem of users is a great defence against misuse.
Consider making your claims provable
If you are running a service that uses sensitive data, or is the equivalent of national infrastructure, or perhaps has some really advanced technology — consider making it provable.
Meet enhanced accountability standards, while addressing regulatory and public concerns about data use.
Book a call with us to talk through how you can design provable claims in ways that help earn and maintain trust.