AI adoption struggles when trust is missing
A couple of weeks ago, I spoke at the AI Fringe on a panel about trust and safety. One of the points I raised resonated strongly with the audience, so I wanted to write about it here.
This blog post explores why a lack of trust in AI creates adoption challenges for organisations and what to do about it. You can also watch a video of the panel online.
AI is everywhere, but trust isn’t
AI is increasingly everywhere. It’s in our mapping apps, fitness trackers, and customer service chatbots. It’s making decisions in hiring, finance, and healthcare. But most people don’t see how these systems work. And that’s a problem.
When users don’t trust AI, adoption stalls. People hesitate to use AI-powered services, avoid engaging with them, or abandon them altogether. At the same time, blind trust in AI can lead to over-reliance, poor decision-making, and significant risks. Without the right balance of trust, organisations face resistance, reputational damage, and even regulatory penalties.
We’ve seen this play out with our clients. One public sector organisation has been trialling generative AI to speed up complex processing tasks. They’ve developed AI-enabled tools that can deliver reliable, useful results. But now, they face a new challenge: adoption. Their expert back-office staff, who handle critical applications, don’t fully trust the AI. Some worry about job security, others fear overreliance on an imperfect system, and many are concerned about maintaining the quality of essential data. Without addressing these trust concerns, the AI risks underutilisation.
AI-enabled services are like icebergs, and that’s a problem
Most AI-enabled services are designed to be frictionless and seamless. But beneath the surface lies complexity: data sources, models, biases, and risks.
Companies have built AI-enabled services this way to make them effortless, following the mantra: “Don’t make me think!” But when AI is making high-stakes decisions, this approach backfires.
Hiding how AI works can lead to:
- Users overtrusting AI, leading to misuse or over-reliance.
- Users distrusting AI, creating hesitation, delays, and disengagement.
- Regulatory scrutiny and backlash when AI produces flawed or biased outcomes.
Take hiring tools. Some AI-powered recruitment systems were found to be biased. Companies using them had to backtrack, face public scrutiny, and rebuild their hiring processes. Similar issues are arising in finance, healthcare, and policing. Without transparency, these systems create risk for organisations and slow AI adoption.
To drive adoption, organisations must lower the waterline
For AI adoption to succeed, businesses need to rethink how AI-enabled services are designed. Instead of hiding complexity, they need to lower the waterline — exposing more of what’s happening beneath the surface.
To earn trust, we should follow the principle: “Make me think!” This means designing AI experiences where users can understand, evaluate, and trust the system at the point of use.
This matters even more as more complex uses of AI, like AI assistants, become widespread.
How to build AI people will use and trust
There are three key design principles to ensure AI adoption doesn’t stall due to a lack of trust (a short sketch after the list shows how they might surface in a service):
- Transparency — Clearly indicate when AI is being used, how it works, and what data it relies on.
- Participation — Give people ways to challenge AI decisions, provide feedback, and influence outcomes.
- Consent — Ensure users understand what they’re opting into and can change their choices over time.
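As a rough illustration of how these principles could translate into a service, here is a minimal sketch in TypeScript. The names (AIDisclosure, buildDisclosure) and field values are hypothetical, not drawn from any particular product or from our catalogue; the point is simply that transparency, participation, and consent can be carried as structured data alongside every AI-generated result, ready for the interface to surface.

```typescript
// Hypothetical sketch: structured "disclosure" metadata attached to every
// AI-generated result, so the interface can surface it at the point of use.

interface AIDisclosure {
  // Transparency: say that AI was used, what it is, and what data it drew on.
  transparency: {
    aiGenerated: boolean;
    modelDescription: string;   // plain language, not a version string
    dataSources: string[];
  };
  // Participation: give the user a route to challenge or correct the output.
  participation: {
    feedbackUrl: string;
    challengeUrl: string;       // escalates to a human reviewer
  };
  // Consent: record what the user opted into, and where to change it.
  consent: {
    purposes: string[];
    grantedAt: string;          // ISO 8601 timestamp
    manageUrl: string;
  };
}

// Example: a recommendation response that carries its disclosure with it.
interface RecommendationResponse {
  items: string[];
  disclosure: AIDisclosure;
}

function buildDisclosure(userConsentPurposes: string[]): AIDisclosure {
  return {
    transparency: {
      aiGenerated: true,
      modelDescription: "A recommendation model trained on past orders",
      dataSources: ["order history", "product catalogue"],
    },
    participation: {
      feedbackUrl: "/ai/feedback",
      challengeUrl: "/ai/challenge",
    },
    consent: {
      purposes: userConsentPurposes,
      grantedAt: new Date().toISOString(),
      manageUrl: "/account/ai-preferences",
    },
  };
}
```

The exact fields will vary from service to service. The design choice that matters is that the disclosure travels with the result, rather than living in a separate policy page.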
Practical steps to take
To encourage AI adoption, organisations should make AI easier to evaluate at the point of use. Here’s how (a short sketch follows the list):
- Expose decision-making — If an AI is recommending products, triaging customers, or approving loans, work towards making the reasoning clear to your users.
- Make AI visible — A chatbot that sounds like a human should state that it is a bot. Make it clear so that your users can apply their judgement to the chatbot’s responses.
- Create redress mechanisms — If the system makes a mistake, users should be able to challenge it and get human support.
- Move beyond pop-ups — People don’t read terms and conditions. AI transparency should be built into the experience, not hidden in fine print.
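To make the first three steps concrete, here is one possible sketch of an automated triage reply. Everything in it (the assessLoanApplication function, the affordability threshold, the /ai/challenge route) is invented for illustration; the pattern is simply that every reply states it came from a bot, explains its reasoning in plain language, and links to a human redress route.

```typescript
// Hypothetical sketch: a loan triage reply that exposes its reasoning,
// identifies itself as automated, and offers a route to a human.

interface TriageReply {
  fromBot: true;                 // "make AI visible": never pretend to be a person
  decision: "approve" | "refer"; // the bot never issues a final rejection on its own
  reasons: string[];             // "expose decision-making" in plain language
  challengeUrl: string;          // "create redress mechanisms"
  humanContact: string;
}

interface Application {
  monthlyIncome: number;
  monthlyRepayment: number;
}

// The 35% threshold is invented purely for illustration.
function assessLoanApplication(app: Application): TriageReply {
  const ratio = app.monthlyRepayment / app.monthlyIncome;
  const affordable = ratio <= 0.35;

  return {
    fromBot: true,
    decision: affordable ? "approve" : "refer",
    reasons: [
      `The repayment would use ${(ratio * 100).toFixed(0)}% of your monthly income.`,
      affordable
        ? "That is within the 35% affordability guideline we use."
        : "That is above the 35% affordability guideline, so a person will review it.",
    ],
    challengeUrl: "/ai/challenge",
    humanContact: "Reply 'agent' at any time to speak to a person.",
  };
}

// Example use:
// assessLoanApplication({ monthlyIncome: 2400, monthlyRepayment: 600 });
```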
There are lots of examples to get inspired by in our catalogue.
The advantage of AI people can trust
Organisations that prioritise trust will gain a competitive edge. They’ll avoid regulatory pitfalls, reduce risk, and build long-term user loyalty. More importantly, they’ll create AI-enabled services that people feel confident using.
At IF, we help organisations build AI that works for people. Trust isn’t automatic — it has to be earned and experienced.
We’re delivering talks and workshops to help teams rethink AI experiences. If you want to learn how to design AI services that drive adoption while reducing risk, get in touch.