Can trust in AI make us poorer?

Ratiomachina · Published in Brass For Brain · 7 min read · May 2, 2024

This is an experimental argument that builds systematically from definitions of interpersonal and technological trust, through the increasing integration of AI, to the consequences of that integration for trust dynamics and their economic implications. It links the philosophical concept of trust to the practical effects of technology on human relationships and economic structures, yielding an argument about the profound effects of AI on society.

Introduction

Trust has long been a central topic in philosophy due to its indispensable role in nearly every form of coordinated human activity — from politics and business to sports and scientific research. It is essential not only for the successful dissemination of knowledge but also for practical deliberation and planning that rely on information beyond our individual capacity to gather and verify. In essence, without trust, achieving our goals would be nearly impossible, and our knowledge would be severely limited. Yet, despite its crucial importance, there remains significant philosophical debate regarding the nature of trust, how it should be normatively constrained, and how it should be conceptualized in relation to other valued aspects of human life.

In the context of AI, discussions about trust often miss these nuances. For instance, a recent article in the Harvard Business Review argues that increasing trust in AI is necessary to further its adoption (see HBR’s AI’s Trust Problem). Such narratives frequently conflate trust with simpler notions of reliance or even risk management, potentially leading to misconceptions about the role of trust in technology adoption.

This confusion underscores a fundamental issue: the nature of trust itself, particularly whether technology can be trusted in the same way we trust other humans. Without a clear definition and understanding of these concepts, calls to “increase trust in AI,” “foster trustworthy AI,” or “build trust in AI” are likely to be ineffectual. For meaningful progress in AI adoption, it is critical to delineate and understand the distinct dynamics of trusting technology compared to interpersonal trust.

Argument

Premise 1 (P1): Trust is a hybrid of attitudes, such as optimism, hope, and belief, directed towards a trustee. This attitude involves a non-negligible vulnerability to betrayal on the truster’s part. In the paradigmatic case of interpersonal trust, it entails optimism that the trustee will handle entrusted matters responsibly and effectively.

Premise 2 (P2): Empirical research has established a causal relationship between high levels of interpersonal trust and economic growth, suggesting that trust enhances social cohesion and facilitates economic transactions and cooperative behaviors. (For an overview of the evidence see Algan, Y., & Cahuc, P. (2013). Trust and growth. Annu. Rev. Econ., 5(1), 521–549.)
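
To make the empirical claim concrete: studies in this literature typically estimate cross-country regressions of roughly the following form. This is a stylized sketch of the approach, not the exact specification from Algan and Cahuc (2013); the symbols are placeholders.

```latex
% Stylized cross-country trust-growth regression, illustrative only;
% alpha, beta, gamma, and the controls X_i are placeholders, not the
% exact specification estimated by Algan & Cahuc (2013).
\[
  g_i \;=\; \alpha \;+\; \beta\, T_i \;+\; \gamma' X_i \;+\; \varepsilon_i
\]
% g_i: growth of GDP per capita in country i
% T_i: measured trust, e.g. the share of respondents agreeing that
%      "most people can be trusted" in the World Values Survey
% X_i: controls such as initial income, education, and institutions
% The literature's headline finding is an estimated beta > 0.
```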

Premise 3 (P3): AI adoption is increasing across various domains of human activity, a trend that will continue as general trust in AI technology grows. (https://www.forbes.com/advisor/in/business/ai-statistics/)

An interesting reframing of trust comes from C. Thi Nguyen (https://academic.oup.com/book/44919/chapter-abstract/384783085?redirectedFrom=fulltext). To trust, as he explains it, is to settle one’s mind about something, to stop questioning it. To trust is to rely on a resource while suspending deliberation over its reliability. Trust sets up open pipelines between yourself and parts of the external world, allowing external resources to stand in much the same relationship to you as your own internal cognitive faculties.

Premise 4 (P4): Trust in AI, as described by Nguyen, is characterized as an unquestioning attitude towards the reliability and functionality of technology, which does not engage with moral or emotional dimensions but is based solely on performance and efficacy. To trust, in short, is to adopt an unquestioning attitude. (https://academic.oup.com/book/44919/chapter-abstract/384783085?redirectedFrom=fulltext)

Premise 5 (P5): Philosophers like Heidegger have described technology that becomes seamlessly integrated into daily life as being “ready-to-hand,” meaning it becomes an extension of our being and is used without conscious scrutiny — similar to Nguyen’s concept of trust in technology. (See https://www.futurelearn.com/info/courses/philosophy-of-technology/0/steps/26314 and https://medium.com/brass-for-brain/my-pacemaker-betrayed-me-4f4b41c5e384.)

Premise 6 (P6): As AI and robotics become more deeply integrated into human faculties and daily activities, they transition from being mere tools to becoming essential extensions of our personal and professional selves. This integration significantly influences human behavior and decision-making, prompting a societal shift in trust dynamics from traditional interpersonal trust to a form reliant on the functionality and reliability of technology, akin to Nguyen’s concept of an unquestioning attitude.

A classic example of this is Neuralink, founded with the intention of building an interface that allows humans to issue commands to devices directly from the brain.

According to Nguyen, trust can take various forms, including an unquestioning attitude and agency-oriented trust, which often interact but can also exist independently. For instance, you might trust someone unquestioningly because you believe they have goodwill towards you, demonstrating a combination of both forms. However, trust can also be one-dimensional, such as trusting the ground solely in an unquestioning way, without attributing any agency. These diverse forms of trust are grouped together because they all serve to extend our agency by incorporating elements of the external world into our lives. Both responsive cooperation and an unquestioning attitude are crucial mechanisms for this expansion and integration of agency.

This evolving relationship between humans and technology, where tools once external now form part of our identity and agency, brings complexities in discerning the origins of actions and decisions.

Premise 7 (P7): This shift leads to blurred boundaries between purely human agency and technology-assisted actions, creating uncertainty about the authenticity of intentions, emotions, and decisions. As technology becomes an integral part of how we express and conduct ourselves, it becomes increasingly challenging to determine whether actions are genuinely autonomous or significantly shaped by AI influences, which may obscure true human intentions and feelings.

As people increasingly use digital technologies to enhance and streamline their lives, they are also gradually delegating more decision-making and personal autonomy to these tools. Digital technology will become more embedded in our decision-making processes, offering vast amounts of information that help individuals explore options and access expertise as they make their way through the world.

Humans value convenience and will continue to allow black-box systems to make decisions for them.

AI technology’s scope, complexity, cost, and rapid evolution are simply too confusing and overwhelming for users to assert meaningful agency over it.

Premise 8 (P8): As AI technologies extend our agency, becoming deeply embedded in our decision-making processes and social interactions, there is a significant shift in the form of trust. This transformation leads individuals to rely increasingly on an unquestioning attitude towards both AI and, by extension, other people — due to the pervasive influence of AI in shaping human actions. Consequently, the traditional, richly normative interpersonal trust that depends on discerning and valuing human intentions, emotions, and capabilities is eroded. In its place, a form of trust emerges that is less concerned with the authenticity of agency and more focused on the reliability of outcomes, whether they are mediated by humans or technology.

Premise 9 (P9): The erosion of interpersonal trust, precipitated by growing uncertainties about human authenticity in a technology-driven society, undermines social cohesion and the cooperative behaviors that are foundational to economic prosperity. As individuals become less certain about the origins and authenticity of actions — questioning whether they are genuinely human or significantly influenced by AI — this lack of trust can stifle collaboration, innovation, and the free exchange of ideas, all of which are crucial for a dynamic and robust economy.

Conclusion: Given the established relationship between interpersonal trust and economic performance, a significant societal shift from interpersonal trust towards a predominantly technological trust, as outlined in Nguyen’s concept, could potentially undermine the social cohesion and cooperative behaviors that underpin economic prosperity. This transition might result in adverse impacts on GDP per capita. However, the actual extent of these impacts could vary depending on how deeply and widely these technologies are integrated into daily human activities and how societal norms adapt. Consequently, it is crucial to develop and implement strategies that maintain or even enhance trust dynamics to safeguard against potential economic downturns associated with these shifts in trust.
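
Schematically, the argument compresses into a short causal chain. The following is an informal paraphrase of the premises above rather than a formal derivation, with the ceteris paribus caveat made explicit:

```latex
% Informal skeleton of the argument (paraphrasing P2-P9), not a proof.
\begin{align*}
  \text{(P3, P6--P8)} \quad & \text{AI integration} \uparrow
      \;\Rightarrow\; \text{interpersonal trust} \downarrow \\
  \text{(P2, P9)} \quad & \text{interpersonal trust} \downarrow
      \;\Rightarrow\; \text{cohesion and cooperation} \downarrow
      \;\Rightarrow\; \text{growth} \downarrow \\
  \text{(Conclusion)} \quad & \text{AI integration} \uparrow
      \;\Rightarrow\; \text{growth} \downarrow
      \quad \text{(ceteris paribus)}
\end{align*}
```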

Balancing the Impacts

While the shift from interpersonal trust towards a predominantly technological trust, as discussed in Nguyen’s framework, poses risks to social cohesion and traditional economic structures, it is also crucial to recognize the substantial benefits AI integration can bring. Increased productivity, driven by AI, has the potential to significantly enhance GDP per capita by streamlining processes, reducing costs, and enabling innovation. Thus, society faces a critical balancing act:

Managing the erosion of interpersonal trust caused by deeper AI integration while leveraging the economic gains it offers. The challenge lies in maximizing these benefits while mitigating the risks to ensure that AI contributes positively to economic prosperity without undermining the foundational trust dynamics that sustain long-term economic and social stability.
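
One crude way to frame this balancing act quantitatively is as a net-effect expression. This is purely illustrative and not an established model; the gain and loss parameters are hypothetical quantities that the investigation below would need to estimate:

```latex
% Hypothetical net-effect framing, not an established model:
% pi_AI and lambda are placeholder parameters to be estimated.
\[
  \Delta\,\text{GDP per capita}
  \;\approx\;
  \underbrace{\pi_{\text{AI}}}_{\substack{\text{productivity gains} \\ \text{from AI adoption}}}
  \;-\;
  \underbrace{\lambda \cdot \Delta T}_{\substack{\text{losses from eroded} \\ \text{interpersonal trust}}}
\]
```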

Further Investigation and Consideration

This balance is not purely theoretical but requires empirical investigation to determine:

  • The extent of productivity gains from AI in various sectors and their direct impact on economic metrics.
  • The degree of erosion of traditional, thick, normatively laden interpersonal trust, and its measurable impact on social cohesion and economic transactions.
  • Comparative analysis of these factors in different economic contexts, sectors, and societies to ascertain overall trends and outcomes.


Ratiomachina
AI Philosopher. My day job is advising clients on the safe, responsible adoption of emerging technologies such as AI.