The Trust Paradox: Why AI is Like That One Friend Who’s Always Late

Ratiomachina
Published in Brass For Brain
5 min read · Jun 21, 2024

Ever had that one friend who’s always late? You know, the one you trust to show up eventually, but you never quite know when. AI is kind of like that friend. We spend a lot of effort trying to trust it, but it’s tricky. Let’s unpack why this is and where we can go from here. Buckle up — this is going to be a wild ride through philosophy, psychology, and a bit of tech wizardry!

We’re pouring massive resources into figuring out how to make AI trustworthy, with the goal of boosting its adoption. Sounds simple, right? Spoiler alert: it’s not. Traditional trust involves people. Applying this to machines is like trying to hug a cactus — it’s awkward and doesn’t really work. We tend to believe that machines can be reliable, but trust? That’s a whole different beast.

So, how do we redefine trust and reliability for AI? Think of it as confidence in the system's ability to perform, knowing it's got a mind of its own (sort of). AI isn't predictable like your morning coffee; it's more like that espresso machine that occasionally shoots out a double shot when you wanted a single. We need standards that say, "Here's what AI can do, here's where it might throw a curveball." Of the three Musketeers of trustworthy AI (transparency, accountability, and explainability), only one may actually be key. There is little evidence that transparency and explainability are antecedents to trustworthiness, which leaves accountability as the least contestable of the three. Some researchers (Shah, 2018) argue that for machine learning models, transparency may be of limited value and may not even be necessary for accountability. The effect of explainable AI on trust also depends on social context: a high-impact AI system operating in a highly politicized environment is likely to generate conflict no matter how well it explains itself.

From a psychological perspective, we tend to trust others based on warmth (benevolence, good intentions, and interest in our well-being) and competence. This tendency carries over to virtual assistants: in some settings, we are more likely to trust an assistant that displays warmth over one that merely displays competence. There also seems to be a gender difference, with women forming stronger warmth-based trust relationships than men. All of this suggests we may be hardwired to form trust relationships with machines that exhibit human-like behavior. That wiring has good support in evolutionary psychology: it is easy to see how trusting warm, capable partners would increase collaboration and, ultimately, the survival of the species.

Addressing the Trust and Reliability Challenge in AI

Let's start by acknowledging the limits of traditional concepts. Philosophically, we need to distinguish between trust in human relationships and reliance on AI systems. AI and ML models are, by their nature, never entirely reliable: they are probabilistic systems whose outputs shift with their inputs, their training data, and a dose of randomness. So the reliability we need to focus on isn't the model's inherent reliability (it won't have much), but the fact that we are increasingly going to HAVE to rely on this technology, regardless of our initial trust perception.

Given this scenario, one possibility is that we have no choice but to rely on AI tools, irrespective of our perception of their reliability or any trust relations we may form. Interestingly, the empirical picture suggests a paradox: surveys find trust in AI declining even as adoption keeps climbing. This indicates that something other than trust is driving adoption. It is simply becoming hard to operate effectively without AI. If your competitor uses AI and becomes more productive, you don't want to fall behind. Social and competitive pressures thus compel you to use AI, regardless of your initial trust perceptions. Trust may be "earned" over time and become a competitive edge, but it doesn't seem to be the most critical factor driving AI adoption.

Consider earlier technologies such as electricity or computing power. In the beginning, the electricity supply was unstable and not available to everyone. Over time, market forces made a stable, cheaper supply the differentiating factor when people chose between energy providers. The same goes for AI: over time, reliability and trustworthiness will become competitive markers rather than the mechanisms that foster adoption in the first place.

Moving Forward

Understanding AI Adoption: Recognize that the increase in AI adoption is driven by necessity rather than trust. Businesses and individuals are compelled to adopt AI to remain competitive and efficient.

Enhancing Human-AI Interaction: AI systems need to exhibit warmth and competence, like a charming barista who knows your name and how you like your latte. User feedback loops? Essential. Think of it as letting your AI know when it's being a bit too much of a robot (a minimal sketch of such a loop follows this list).

Engaging the Public: Time to demystify AI for the public. Let’s educate people so they know what AI can and can’t do. Setting realistic expectations is crucial — no, your AI assistant won’t fix your love life, but it can schedule your appointments.

Ethical Considerations: Ethical considerations should be front and center. Imagine if AI were your Grandma — would it make her proud? If not, back to the drawing board. And we need rules to ensure AI behaves itself. Think of it as putting up a baby gate so the AI doesn’t wander off and start ordering pizza without permission.

Practical Steps: Start small with pilot programs. It’s like testing a new recipe before serving it at Thanksgiving dinner. Document what works (and what doesn’t) to build a case for AI’s reliability. Focus on real user needs. Design AI that actually helps, not just impresses with fancy tricks. It’s like building a Swiss Army knife for daily life — useful, reliable, and maybe a bit cool.
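To make the feedback-loop idea from "Enhancing Human-AI Interaction" concrete, here is a minimal sketch in Python. Every name in it (AssistantFeedbackLog, record, reliability_score) is hypothetical, invented purely for illustration: the point is just the shape of the loop, capturing a thumbs-up or thumbs-down on each AI response and tracking a rolling score you could hold up against whatever reliability standards you adopt.

```python
from collections import deque
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FeedbackEvent:
    """One user's verdict on one AI response."""
    response_id: str
    helpful: bool  # thumbs up / thumbs down
    timestamp: datetime


class AssistantFeedbackLog:
    """Hypothetical feedback loop: log verdicts, report a rolling score."""

    def __init__(self, window: int = 100):
        # Keep only the most recent `window` verdicts so the score
        # reflects current behavior, not ancient history.
        self.events: deque[FeedbackEvent] = deque(maxlen=window)

    def record(self, response_id: str, helpful: bool) -> None:
        """Capture one thumbs-up/thumbs-down verdict from a user."""
        self.events.append(
            FeedbackEvent(response_id, helpful, datetime.now(timezone.utc))
        )

    def reliability_score(self) -> float:
        """Fraction of recent responses users found helpful (0.0 to 1.0)."""
        if not self.events:
            return 0.0
        return sum(e.helpful for e in self.events) / len(self.events)


# Usage: the assistant calls record() whenever a user reacts,
# and the team watches reliability_score() drift over time.
log = AssistantFeedbackLog(window=50)
log.record("resp-001", helpful=True)
log.record("resp-002", helpful=False)
print(f"Rolling reliability: {log.reliability_score():.0%}")  # -> 50%
```

In practice you would persist these events and slice the score by task type, but even this toy version turns "letting the AI know it's being a robot" into a number a team can actually watch.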

Conclusion

So, where do we go from here? Let’s redefine what trust and reliability mean for AI. Develop robust standards, engage with users, and prioritize ethics. Remember, AI is like that unpredictable friend — it’s fascinating, occasionally frustrating, but potentially amazing if we get the relationship right.

And who knows? Maybe one day, we’ll trust AI as much as we trust that our friend will eventually show up — just maybe not on time.


Ratiomachina

AI Philosopher. My day job is advising clients on the safe, responsible adoption of emerging technologies such as AI.