Navigating with Confidence: A Framework for Entrusting AI

LaSalle Browne
13 min read · Dec 21, 2023


“Trust is the foundation of all relationships, both personal and professional. But trust in technology is different than trust in people.” — Don Peppers

Trust Compass in AI, created by author in Midjourney

Short on time? Here are the high-level key takeaways:

  1. Emerging Challenges with AI Trust:
  • The increasing integration of AI in daily life raises critical questions about trust in AI and its creators. The ability to trust AI is essential for its adoption and beneficial use.
  2. Trust as a Foundational Element:
  • Trust is not just a philosophical idea but a core element underpinning society, economy, and technology. It's crucial for reducing uncertainty and enabling successful interactions, both with humans and machines.
  3. The Concept of 'Trust Tax':
  • In systems where trust is lacking, there is a 'trust tax': a metaphorical cost reflected in increased regulation, fear, resistance to adoption, and reduced innovation. This is evident in sectors like finance and nuclear energy.
  4. Trust's Unique Role in AI:
  • AI's deep integration in society makes trust in AI different from other technologies. Trust in AI is akin to a guiding compass, directing its architectural, policy, and developmental paths.
  5. Frameworks for Trust in AI:
  • The frameworks by Keng Siau, Weiyu Wang, and Woudenberg et al. provide a basis for understanding trust in AI, focusing on foundational and continuous aspects of trust and tackling autonomy, trust, and environmental complexity.
  6. Pillars of Trust in Human-AI Interactions:
  • Trust in AI is built on competence, authenticity, reliability, empathy, performance, process, predictability, and purpose. These factors are crucial for integrating AI into our lives.
  7. Moving from Trusting to Entrusting AI:
  • The focus shifts from merely trusting AI to entrusting AI, emphasizing proactive participation in AI's ethical development and operation. This entrustment involves actively shaping AI's role and ensuring it aligns with societal needs.
  8. The Importance of Entrustment in AI Development:
  • Entrusting AI goes beyond mere adoption; it involves taking an active role in shaping AI's future. This approach helps avoid complexities and costs in AI systems lacking in trust.
  9. Potential Risks of Inadequate Trust in AI:
  • Lack of trust in AI can lead to public backlash, underutilization, regulatory overreach, talent loss, reduced investment, ethical failures, geopolitical tensions, and competitive disadvantages.
  10. Future Focus on Trust in AI:
  • The significance of entrusting AI in shaping a future where AI is a proactive partner rather than a passive tool, highlighting the need for a deep, measurable level of trust and engagement in AI's evolution.
  • Upcoming work will further explore entrusting AI in more detail, focusing on directionality, optionality, and developing frameworks to quantify and measure trust.

As artificial intelligence (AI) systems grow more advanced and ubiquitous, embedded in ever more parts of everyday life (a trend only expected to intensify), a critical set of challenges emerges: Should humans trust AI? Should we trust the builders of AI? What are the implications for humanity if we can't trust AI or its builders? And if we are to trust AI, how can we learn to do so? These may be among the defining questions of our age. Without the ability to establish trust in AI, people are unlikely to adopt and benefit from AI technologies, no matter how capable or valuable to humanity at large they become. Trust, or its absence, is thus a potential stumbling block in the adoption, usage, progress, and de-risking of AI systems; establishing it is the key to overcoming the substantial uncertainty, ambiguity, and risk that pervade AI's development and deployment. This essay argues for reframing the role of trust in AI, emphasizing trust as a catalyst for overcoming these challenges and unlocking AI's potential.

The Essence of Trust in AI

In the evolving landscape of AI, humans oscillate between apprehension of the unfamiliar, fear of the unknown, and a vision of an enhanced future. This dichotomy points toward a singular bridge: trust. Trust isn't merely a philosophical notion; it sits at the very heart and foundation of our society, economy, and technology. Trust is a way to lessen uncertainty while increasing the chance of successful interactions with others, be they human or machine. It is an act of vulnerability, and its payoff is efficiency: when we trust someone or something, we expend fewer mental, physical, financial, and time resources on the interaction. Trust is a foundational concept, forming the bedrock of many of our societal, economic, scientific, religious, and technological structures.

You can think of trust as the currency of systems. Its absence creates a 'trust tax': the extra cost of operating without trust. This tax is hidden, typically taking the form of increased regulatory burden, overhead (legal, environmental, etc.), a high fear factor, resistance to adoption, and a marked decrease in innovation. Prime examples are the financial sector and the nuclear industry, where trust acts as a linchpin. In finance, trust plays a wide-ranging role, from markets and banking to investments and monetary policy; it is not just about numbers and transactions. The Global Financial Crisis (GFC) showed that without trust, regulation and safeguards are ineffective. Indeed, regulatory institutions' primary mission is to maintain trust in the system and to reestablish it when it is broken or breached.

As AI is integrated and embedded everywhere in society, its role becomes even more critical. Co-creation and co-evolution will continue to deeply embed AI in human life, and this deep integration will make our relationship to and with AI different from that with any other technology. Trust is a powerful force in society: it connects individuals, promotes shared goals, influences decisions, and defines roles and responsibilities. It also functions as a compass, directing actions and pathways, acting like a super GPS. Imagine it as the compass that guides AI's direction in architecture, policy, and system development, ensuring agility and flexibility while minimizing bottlenecks and friction. To better understand the role trust plays in AI, it helps to have a framework, like the one laid out in "Trust in Machine Learning and Robotics" by Keng Siau and Weiyu Wang (link here). They break trust down into an initial phase, when trust is established, and a second, continuous phase, the process by which trust is maintained in technological contexts, particularly AI. Siau and Wang offer further insights into trust as it applies to technology, arguing that trust in technology requires us to consider additional factors: human attributes (personality and capability), environmental elements (culture and institutional factors), and technological aspects (performance, process, and purpose), to which I would add, crucially, the trustworthiness of the technology's provider. See the figure below:

Trust Factors in Technology, by Siau & Wang
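To make these factor groups concrete, here is a minimal sketch of how a team might represent them in code when reviewing a system. It is illustrative only: the class and field names are my own paraphrase of Siau and Wang's figure, not a schema from their paper.

```python
from dataclasses import dataclass

# Illustrative structure for the trust factors discussed above.
# Class and field names paraphrase Siau & Wang's figure; they are
# not an official schema from their paper.

@dataclass
class HumanFactors:
    personality: str   # e.g., a user's disposition to trust
    capability: str    # the user's ability to evaluate the technology

@dataclass
class EnvironmentalFactors:
    culture: str       # cultural attitudes toward automation
    institutions: str  # regulatory and institutional safeguards

@dataclass
class TechnologyFactors:
    performance: str   # how well the system functions
    process: str       # how it operates and is governed
    purpose: str       # why it was built and deployed

@dataclass
class TrustContext:
    human: HumanFactors
    environment: EnvironmentalFactors
    technology: TechnologyFactors
    provider_trustworthiness: str  # the essay's added factor: who built it?
```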

Navigating Towards a More Complete Framework

To create a more comprehensive framework, we augment Siau and Wang's with the framework of Woudenberg, Deiches, Harasimowicz, and Shideler, "Frameworks for Integrated Design of Entrusted Systems" (see here). Their proposed framework directly relates autonomy, trust, and environmental complexity to trust risk, and gives system architects and developers a concrete way to design trust into AI systems. The holistic, system-based approach to building advocated by Woudenberg et al. is expanded in more detail in the book "Rebooting AI" by Gary Marcus and Ernie Davis, which links the development of trustworthy AI with addressing its challenges: transparency, safety, explainability, ethics, and building via a systematic, trust-focused process and design. Without such a process, trust will indeed become a stumbling block in the progress of AI. I would add that the real-world risks are tangible; one need only look at the cases of Tay, Uber, Cruise, Blender, and the mental health chatbot Tessa. Trust is key to ensuring the acceptance and continuing progress of artificial intelligence. Combining the two trust frameworks, we get:

New Combined Trust Framework

The Role of Trust in AI Development

A Trust network as envisioned by AI, created by author in Midjourney

To better understand trust and the role it will play in AI development, let us dissect its components. In human interactions, trust is built on four key pillars: competence (the ability to do something successfully), authenticity (being genuine and true), reliability (consistency in performance), and empathy (understanding and sharing another's feelings). In AI systems, trust revolves around performance (how well the system functions), process (its method of operation), predictability (its consistency and reliability), and purpose (the intention behind its creation and use). AI bridges these familiar categories, combining human-to-human and human-to-machine interaction at a higher level.

According to researchers, developers creating AI systems for human-to-AI interactions should consider two additional key factors: authenticity/honesty and empathy. Imagine the following scenario: you are interacting with your personal AI health bot, Doc. The bot plots your biometric trends and makes healthy-lifestyle recommendations. But today there is some anomalous data you can't make sense of, so you ask Doc what to do about it. Should the bot lie (i.e., hallucinate a plausible answer)? Or should it tell you that it doesn't know, but that it can help you ask questions to uncover a possible reason and solution, or connect you to a professional or resource that does know? The latter options would engender more trust, as they are authentic and honest responses. The bot could then proceed to express empathy (see the article on embedding empathy in AI: here) if you are suffering from pain, mental distress, etc. This kind of authenticity not only builds initial trust but ensures its longevity. Trust components like honesty and empathy become ever more crucial as AI grows more integrated into our lives. How can we use the dual trust framework to help creators and developers integrate trust into their AI systems and tools?
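One small piece of an answer is to bake honesty about uncertainty directly into response logic. Below is a minimal sketch of the policy the Doc scenario describes; the confidence threshold, function signature, and Doc itself are hypothetical assumptions for illustration, not a real health-bot API.

```python
# Hypothetical sketch of an "honest uncertainty" response policy.
# The 0.8 threshold and all names here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8  # below this, the bot should not assert an answer

def respond(answer: str, confidence: float, can_escalate: bool) -> str:
    """Prefer an honest 'I don't know' over a plausible-sounding guess."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    if can_escalate:
        # Authentic and honest: admit uncertainty, then route to a human expert.
        return ("I'm not confident about this anomaly. I can connect you "
                "with a clinician or resource who can help.")
    # Otherwise, help the user investigate rather than hallucinate.
    return ("I don't know yet. Let's work through some questions together "
            "to narrow down possible causes.")
```

The design choice mirrors the scenario: below the threshold the bot never asserts an answer; it either escalates to a professional or helps the user investigate, the two honest responses the essay argues engender lasting trust.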

Moving from Trusting AI to Entrusting AI

The research of Keng Siau and Weiyu Wang in "Trust in Machine Learning and Robotics" and of Woudenberg et al. in "Frameworks for Integrated Design of Entrusted Systems" provides the foundation for the framework on trust in AI. Siau and Wang break trust in technology down into two distinct phases: the foundational phase and the continuous phase. The foundational phase refers to the initial establishment of trust and includes aspects like representation, perception, and transparency. For example, when first introduced to an AI-driven assistant, a user might evaluate its trustworthiness based on its design, its responsiveness, and how transparently it processes requests. The continuous phase pertains to the maintenance and nurturing of trust over time, emphasizing factors like reliability, security, and privacy. A user's trust in a navigation app, for instance, would depend on its consistent ability to provide accurate routes and protect user data over time. Woudenberg et al. provide a way to capture environmental complexity and risk via system independence and system intelligence. They also introduce the concept of entrusting AI systems as the result of deliberate design and engineering decisions, using their framework as a blueprint. Designers and creators can use this blueprint to reconcile any misalignments or risks between environmental complexity (defined by the authors as "what is required") and trust (defined as "what is allowed"), creating a clear design space, as shown below:

Establishing an Entrusted Design Space (Source: Lockheed Martin)
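One way to read this design space in code is as a simple comparison between the autonomy the environment requires and the autonomy trust allows. The sketch below is my own simplification on an assumed 0-to-1 scale, not the formalism from Woudenberg et al.

```python
# Illustrative sketch: a design is "entrusted" when the autonomy the
# environment requires ("what is required") does not exceed the autonomy
# trust allows ("what is allowed"). Scales and names are assumptions.

def entrusted_design_gap(required_autonomy: float, allowed_autonomy: float) -> float:
    """Positive gap: the environment demands more autonomy than trust permits."""
    return required_autonomy - allowed_autonomy

def in_entrusted_design_space(required_autonomy: float, allowed_autonomy: float) -> bool:
    """The design sits inside the entrusted space when no gap remains."""
    return entrusted_design_gap(required_autonomy, allowed_autonomy) <= 0.0

# Example: a highly complex environment (high required autonomy) paired with
# only moderate allowed autonomy falls outside the space, flagging a
# misalignment the designer must reconcile before deployment.
print(in_entrusted_design_space(required_autonomy=0.9, allowed_autonomy=0.6))  # False
```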

To capture some of these real-world dynamics, we can further divide trust and safety issues into two categories: foundation issues and management issues. Foundation issues deal with the scope, speed, alignment, shape, and direction of change or progression. Management issues deal with control, power dynamics, cultural dynamics, and large-scale societal and cultural change. How these dynamics interact can play a critical role in trust. For instance, failure or breakdown in AI systems is often due to not addressing foundation and management issues early enough in the development process. These failures can compound over time, leading to volatile reactions such as overreacting or underreacting. This can break trust, which can lead to structural and systemic failures as well as unintended consequences. These unintended consequences are often referred to as X-factor or Y-factor risks, such as bias and inequity.

Trust is key in the networked and AI age, which adds a whole new dimension to it: trust now defines connection strength, interaction space, scale, and positioning. How can we account for these unique aspects? I am proposing the EnTrust scoring system, which seeks to quantify, to a degree, the different levels of trust in a system. The scoring system will help organizations assess trust levels and find areas to improve. It can be incorporated into the new entrust framework (see here) to guide creators, developers, policymakers, and others in focusing on trust, and it is a tool organizations can use to make navigation from the inception phase through the management phases a seamless process.
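Since the EnTrust scoring system is still in development, the sketch below shows only what such a score might look like. The dimensions, weights, and 0-to-1 scale are my assumptions for discussion, drawn from the trust pillars named earlier, not the actual EnTrust specification.

```python
# Hypothetical sketch of an EnTrust-style score. Dimensions come from the
# trust pillars discussed in this essay; the weights are assumed, not
# part of any published EnTrust specification.

TRUST_DIMENSIONS = {
    "performance": 0.25,     # how well the system functions
    "process": 0.15,         # its method of operation
    "predictability": 0.20,  # its consistency and reliability
    "purpose": 0.15,         # the intention behind its creation and use
    "honesty": 0.15,         # willingness to admit uncertainty
    "empathy": 0.10,         # sensitivity to the user's state
}

def entrust_score(ratings: dict[str, float]) -> float:
    """Weighted average of per-dimension ratings, each on a 0-to-1 scale."""
    return sum(weight * ratings.get(dim, 0.0)
               for dim, weight in TRUST_DIMENSIONS.items())

# Example: a system that performs well but is opaque about its process.
score = entrust_score({
    "performance": 0.9, "process": 0.3, "predictability": 0.8,
    "purpose": 0.7, "honesty": 0.6, "empathy": 0.5,
})
print(score)  # roughly 0.68: strong performance, dragged down by opacity
```

A score like this would let an organization compare systems, track trust over time, and pinpoint which dimension (here, process transparency) most needs improvement.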

The Link Between Trust and AI Safety

The absence of trust in AI systems causes misalignment and corruption: it weakens connections, destroys communication, increases risk and costs, and drives greater centralization, which in turn lowers perceived value and triggers a cascade of safety issues. Failure to build trusted AI systems from the start will increase the AI safety burden and risk by orders of magnitude. Building with trust in mind from the start helps avoid the safety issues and threats surrounding AI, allowing developers and creators to focus on addressing systemic risks with the right technical solutions rather than adding complexity or abstraction away from reality. (Refer to Carlos Perez's article on AI safety and leaky abstractions, here.)

Algorithmic supersystems (AI), trust, and safety are linked by another key factor: we have dealt with complex, superhuman algorithmic systems before. The precedent is culture. Religions, nations, and corporations are complex cultural systems that develop their own operating codices, onboarding, conversion/sales, and goal mechanisms. As we seek to build similar systems for AI, we should draw from this experience, while heeding one key warning: these algorithmic superorganisms often begin with good intentions, ideas, and moral motives, driven by real injustices, inequities, biases, and problems. Over time, they can drift as their values and operating processes fall out of alignment, spiraling out of control and leading to actions such as war, the Spanish Inquisition, or even bombing unauthorized datacenters, as suggested by some AI safety advocates. Humans can apply the wisdom of this rich history with complex algorithmic superorganisms to AI and intelligent systems. We should be cautious when building AI systems to ensure trust and safety: the systems and processes should not be overly complex or burdensome, nor should they necessitate greater control or centralization of power. The key is to find the right balance by establishing trust and safety early on, preventing many future problems.

Failures in Trust

Here are some potential consequences if trust is not prioritized in AI development:

Public backlash — Lack of trust could lead to public resistance and rejection of AI, setting the field back. People may protest or lobby for restrictive policies if they become fearful.

Underutilization — Distrust means society does not take advantage of beneficial AI applications. People avoid AI systems even when they could help solve problems. Valuable innovations go unused.

Regulation overreach — Governments often respond to public pressure. Without trust, regulators may implement excessive constraints on AI development that limit progress and applications.

Talent loss — Experts may leave the field due to perceived reputational risks of working on untrusted AI systems. Key talent could migrate to other technology areas.

Lower investment — Investors and companies may see AI as too risky and uncertain without strong public trust. Less funding slows research and product deployment.

Ethics failures — A lack of trust could signal that companies are not prioritizing ethics and safety, which can result in unethical or dangerous AI applications.

Intergovernmental tension — Distrust in another nation’s AI systems could exacerbate geopolitical tensions. International collaboration suffers as a result.

Competitive disadvantage — Regions that develop AI without building trust would lose ground economically as talent and investment flock to more trusted ecosystems.

Cultivating trust and transparency early in AI development helps avoid these pitfalls. Being proactive about ethics, safety, and governance prevents loss of public faith down the line. Prioritizing trust accelerates our path to AI as partner rather than AI as enemy.

Conclusion

In summation, trust can be regarded as a mechanism for reducing uncertainty while increasing the likelihood of a successful interaction with others in the environment. When we trust someone, we expend fewer cognitive, physiological, and economic resources dealing with that entity. Trust has been evolutionarily beneficial for humans (see the books The Secret of Our Success and The Social Leap), is a prerequisite for social interaction (The Social Leap and Trust: The Social Virtues and the Creation of Prosperity), serves as the "currency" of markets and legal systems, and is a defining part of the social and cultural fabric that forms human societies. Utilizing the EnTrust framework when developing AI will help yield successful interactions with AI, maximizing benefits while avoiding pitfalls and catastrophes. By entrusting AI, we become proactive participants in shaping our future with AI and intelligent systems, rather than reactive or passive ones.

The primary objective in the evolution of AI should be to foster a sense of entrustment, elevating our relationship with AI beyond mere trust. This shift towards entrustment means actively engaging in AI’s development, ensuring it meets the highest standards of capability, safety, privacy, security, and accessibility. Embracing an entrustment-focused mindset prepares us to pre-empt and resolve trust-related challenges that inherently add unnecessary complexity and cost to AI systems.

My goal with this essay has been to reshape our understanding of AI from a perspective of trust to one of entrustment. Entrusting AI implies that we, as a society, are not merely passive recipients of AI technology but active shapers of its ethical framework and its operational and risk boundaries. By entrusting AI, we decide the direction AI will move in and the hurdles that must be overcome before we entrust AI with key areas of human life. The forthcoming discussions in this series will delve further into the nuances of entrusting AI, examining its direction, flexibility, and the frameworks necessary for measuring and quantifying this deeper level of trust.


LaSalle Browne

Quantum thinker, entrepreneur, explorer, ever curious, always learning, traveler, lover of life. Opinions are my own and may change w/o notice