A Dance of Trust

A Deeper Look at AI for Legal Advice

Nicholaserlin
HCAI@AU
4 min read · Mar 18, 2024


Artificial Intelligence (AI) is not just transforming industries. It is reshaping the very fabric of our decision-making processes. In the legal arena, this transformation raises pressing questions about trust. How does one trust a machine with something as nuanced as legal advice? This question forms the crux of a compelling study published at ACM IUI 2023 by Patricia K. Kahr, Gerrit Rooks, Martijn C. Willemsen, and Chris C. P. Snijders.

The study's results, based on a meticulously designed experiment, reveal fascinating insights into how trust in AI develops over time, shaped by the AI model's accuracy and the kind of explanations it provides. Let's walk through the study's phases and findings, illustrated through the researchers' figures, which shed light on the trust dynamics of human-AI collaboration.

Figure: Experimental study procedure, with the main study task (Phase II) repeated 20 times.

The researchers set the stage with an introductory phase (see figure above). Here, participants are welcomed and walked through the AI system and the task procedure. This stage is critical for establishing initial trust: it is where participants first encounter the AI and form the initial impressions that can significantly shape how much trust they place in its subsequent recommendations.

At the heart of the experiment, participants make an initial jail-time estimate for a case, after which the AI provides its recommendation. That recommendation is experimentally manipulated along two factors: the model's accuracy (high vs. low) and the explanation type (human-like vs. abstract). This is the phase where participants engage directly with the AI, and where the trust-building process either solidifies or wavers.
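To make the round structure concrete, here is a toy sketch of a single round in Python. Everything here is an illustrative stand-in rather than the study's actual materials or model: the function names, the noise levels standing in for high and low accuracy, and the canned explanation strings are all hypothetical.

```python
import random

def ai_recommendation(true_sentence: float, high_accuracy: bool) -> float:
    """Toy advisor: its advice is noisier in the low-accuracy condition."""
    noise_sd = 2.0 if high_accuracy else 10.0  # hypothetical noise levels
    return true_sentence + random.gauss(0, noise_sd)

def explanation_text(human_like: bool) -> str:
    """Toy stand-ins for the two explanation styles manipulated in the study."""
    if human_like:
        return ("The defendant used physical violence and has a prior record, "
                "which weighs toward a longer sentence.")
    return "Keywords: physical abuse, prior record, severity: high"

def run_round(true_sentence: float, initial_estimate: float,
              high_accuracy: bool, human_like: bool) -> tuple[float, float]:
    """One Phase II round: initial estimate -> AI advice -> final estimate."""
    advice = ai_recommendation(true_sentence, high_accuracy)
    print(explanation_text(human_like))
    # A real participant would revise freely; averaging is just a placeholder.
    final = (initial_estimate + advice) / 2
    return advice, final

advice, final = run_round(true_sentence=36, initial_estimate=24,
                          high_accuracy=True, human_like=True)
print(f"advice={advice:.1f} months, final={final:.1f} months")
```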

The study records both a behavioral trust measure, Weight on Advice (how far participants shift their estimate toward the AI's recommendation), and a self-reported trust measure. These dual metrics capture the complexity of trust as a multi-dimensional construct that cannot be distilled into a single number.
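For readers who want the behavioral measure made concrete, Weight on Advice is conventionally computed as the fraction of the distance from the initial estimate to the advice that the participant actually moved. The helper below is a minimal sketch of that conventional formula, not code taken from the study.

```python
def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """Weight on Advice (WoA) as conventionally defined: the fraction of
    the initial-to-advice distance covered by the final estimate.

    0.0 means the advice was ignored; 1.0 means it was adopted fully.
    Analyses often clip values to [0, 1].
    """
    if advice == initial:       # advice matches the participant's own estimate,
        return float("nan")     # so the weight is undefined for this trial
    return (final - initial) / (advice - initial)

# Example: initial estimate 24 months, AI advises 36, final answer 30:
# the participant moved halfway toward the advice.
print(weight_on_advice(initial=24, advice=36, final=30))  # 0.5
```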

Upon concluding the AI interaction tasks, the study transitions into a post-experimental questionnaire and debriefing. This retrospective phase lets participants reflect on their experience, providing the researchers with insight into the longer-term trust implications.

The graphs (see figure below) portray the relationship between trust and the AI model's accuracy over time. The lines for the high-accuracy model hold steady or even rise, reinforcing the notion that performance is pivotal to trust in AI. Conversely, the flat or declining lines for the low-accuracy model illustrate eroding trust, underscoring how fragile trust is when performance fails to meet expectations.

The third image (see figure below) captures the study's second variable: the type of explanation. It contrasts the human-like explanations with the abstract ones in the context of a legal case involving physical abuse. The human-like explanation, nuanced and reasoned, is juxtaposed with the abstract explanation's stark list of keywords. This visual comparison is emblematic of the study's exploration of whether the nature of an explanation can sway trust.

The study delves deep into the dynamics of trust, demonstrating that while accuracy is a linchpin of trust, the type of explanation plays a more nuanced role. Interestingly, human-like explanations amplify trust in high-accuracy models, suggesting that when an AI’s competence is proven, the way it communicates its reasoning can further bolster trust.

As AI becomes increasingly intertwined with critical sectors like the legal system, understanding the intricate dance of trust is paramount. The study by Kahr et al. makes a significant contribution to this understanding, offering a window into how trust evolves and the factors that can either solidify or undermine it in AI-assisted legal decision-making.

With these visuals and findings, we are reminded of the delicate balance between technological sophistication and human perception — a balance that will define the future of our relationship with AI in the legal domain and beyond.

Reference

  • Kahr, P. K., Rooks, G., Willemsen, M. C., & Snijders, C. C. P. (2023). It Seems Smart but It Acts Stupid: Development of Trust in AI Advice in a Repeated Legal Decision-Making Task. In Proceedings of the International Conference on Intelligent User Interfaces (IUI ’23), March 27–31, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA. https://doi.org/10.1145/3581641.3584058
