Designing “Trustworthy” Interactions for Opening the Black Box of Autonomous Intelligent Agents

A Living Thesis Proposal

Meriç Dağlı
Designing for Trust
10 min readSep 6, 2017

--

I submitted this thesis proposal to our thesis coordinator originally in May 2017 and since that time, I’m constantly revising my research territory and questions. You will be able to find the latest revisions to my thesis proposal in the changelog section.

Thanks for stopping by!
If you are interested in contributing to my research, don’t hesitate to contact me from mericda@cmu.edu. I’m always interested in being distracted by learning about related works, research on human-machine trust. Nowadays, I’m exploring a lot on conversational interfaces, algorithmic experiences, and trust.

Changelog

v0.1 May 2017 — Submission without references

Abstract

Our lives are getting connected through technology, as we get used to living with the efficiency and ease-of-use that the artificial intelligence enables. The computerized decision-making processes powered by algorithms, computer programs for automated problem-solving tasks, are becoming even more ubiquitous in our everyday lives. As users, we often interact with these algorithms in the forms of autonomous intelligent agents such as a self-driving car, an instant-credit decision system or a connected health management system. When the amount of information that we provide to these agents and the complexity of the task they are trying to achieve increases, they transform into black boxes that we, users or even their creators don’t know how they make decisions on behalf of humans. Since we don’t know how they work, we often hesitate to trust these agents to use them. In my thesis, I am planning to explore the relationship between user’s trust and black box algorithms in the context of interaction design. By designing “trustworthy” interactions for a specific autonomous intelligent agent domain, my goal is to provide a design lens to the explanation of how algorithms works, and decide what is good for the user, through a research-through-design methodology.

Introduction

Artificial intelligence is everywhere. In the everyday life, a human may interact with many computerized artifacts that depend on a process or set of rules to perform automated problem-solving tasks such as calculations, algorithms[1]. One kind of these computerized artifacts is called autonomous intelligent agents. These agents use algorithms to perceive their environments and take actions autonomously to perform the task successfully. As they mimic their creators, these machines can understand and solve problems in a variety of scales. Their ability to solve highly complex problems that rest on various factors makes them an in-demand problem solver of modern decision-making systems. Many physical and digital experiences such as self-driving cars can be given as an example. These vehicles that can “drive” themselves by processing the data from their environmental sensors through a central computer, which runs many algorithms at once to understand what is happening in the world and decide “when to press the gas” or “step on to the brake”. Similar to cars, our many physical interactions are also becoming digitized through algorithms. As humans, we are now waiting in front of a display to learn our loan decision just for some seconds rather than waiting for a bank employee to go through our credit/payment history and make a decision for the amount of money that we are asking[2]. This convenience aimed to makes our lives faster and happier but that is not always the case.

If something goes wrong during or after our interaction with an autonomous intelligent agent, we would like to know what is wrong or why does the system thinking that way, just like a black box which we don’t know what is happening inside[3]. When we add this to the low amount of feedback that we receive from the system at the critical times, we often don’t trust these autonomous agents or lose our trust in them. Whether we are pilots, who rely the autopilot system in our aircraft, a white collar worker, who is having problem with “the system”, or an online social network user, who is seeing his personal data, which he put into the network, elsewhere, our relationship with these agents highly depends and get affected by the level of trust that we put in them. In other words, before sharing our personal (private) data or trusting them with “our life, we seek to learn about how these intelligent agents work[4].

Significance

In this research, I intend to study the inter-relationship between trust, artificial intelligence and user experience by several speculative interaction design concepts through a research-through-design methodology. In these design concepts, I will examine a trust spectrum between AI-powered agents and the user through terms such as “trust”, “mistrust”, “distrust”, and “trusting too much”. Although the current definition of “intelligent agent” can refer a large number of things such as autonomous cars, personal assistants, suggestive parts of a system such as a recommendation algorithm of a search engine, I will clarify and scope down this umbrella term into the focus areas in the early stages of the literature review.

Initiatives to underlay the opening of the black box algorithms are gaining more and more importance. As fields and techniques such as machine learning and artificial neural network rise, researchers who are working on the social side of AI are arriving at a consensus on deciphering how these artificial minds work but it is not an easy task[5]. Despite on-going research on how to make these techniques more easily understandable, as understanding how a human mind works, there hasn’t been a definite answer to solve the black box problem of the autonomous intelligent agents, yet.

By investigating the relationship of user’s trust with the interaction and user experience design of the autonomous intelligent agents, this research will provide a fresh lens to this complex problem by rethinking the touch points and interaction modalities of the autonomous intelligent agents from the perspective of user’s experience with the agent. The existing literature that investigates trust and user experience design in the domain of visual or editorial design of web applications or trust and user experience design in the domain of autonomous transportation systems offers an explanation for why gaining user’s trust is important for technology/product adoption but not specifically focused on how trust can be used as a facilitator to explain how a black box algorithm works and how trust can be used to clarify/show how an autonomous intelligent agent makes decisions.

Besides its research significance, the results of this research will also help design practitioners to design trustworthy interactions in their algorithm-driven products. In business sector, where sometimes profit depends on the active user base of a (complex) system, creating trustworthy interactions may increase the adoption rate of a system that they are designing.

Goals

In a personal level, this research helps me to continue my ongoing research on designing for trust domain. In my prior evaluative research on how trust can be a facilitator for smart product’s adoption, I have tried to map out the relationship between a smart product’s “smart” dimensions/features and user’s trust as well as how trust is related to other technology acceptance/adoption metrics such as usefulness, ease of use. More project details will be available once the research has published since it is protected by a non-disclosure agreement now. In general, this thesis research will enable me to extend my knowledge about the field of trust in the context of designing for interactions for digital experiences.

The goal of this research will be to explore the phenomena of trust in the field of interaction design to provide design principles for designing for trustworthy interactions in the context of autonomous intelligent agents. By following a research-through-design methodology, my aim is to develop several design experiments to measure the critical trust dimensions in certain everyday life scenarios, which are affected by intelligent agents, mostly. These everyday scenarios, as well as the specific kinds of intelligent agents, will be clarified before starting to the literature review. Answering the following questions will guide my scope down the process to find specific design case-studies or opportunity areas.

Pre-Study Questions:

· What are some of the most critical areas that an autonomous intelligent agent interacts with an user in future?

· What are some of the most ubiquitous intelligent agents that are being used by, which often create frustration at the end of the experience?

By exploring the trust between users and autonomous intelligent agents, I aim to scope down my case-areas by answering the pre-study questions and generate a discussion by answering the following primary and secondary questions:

Main Research Question: How does the information quality affect the perceived trust of an user to a computer system in the context of autonomous intelligent agents?

Secondary Research Questions

· What is a trustworthy interaction in the context of autonomous intelligent agents?

· What are some of the frustrations or concerns that make potential users not to trust an intelligent agent?

· What is the relationship between perceived intelligence, the level of feedback at interface level and the trust of the user in an agent?

· How do you design a trustworthy interaction in the context of autonomous intelligent agents?

· Which dimensions of an interaction do affect the perceived trust level of the user in an autonomous intelligent agent?

· How does trust effect the adoption of emerging technologies that are relatable with autonomous intelligent agents?

The outcome from the discussion of the research will be design guidelines on how to design for trustworthy interactions for the better user experience of autonomous intelligent agents in specific business markets. This will enable practitioners to benefit this research and provide insights for them to how to design for better user experiences through the user’s’ perceived trust.

Scope & Limits

Trust as a phenomenon is highly case-dependent and context-dependent. Therefore, I will not be able to generalize my findings on all autonomous intelligent agents in a form of framework or model on trust. Instead, I will explore what trust mean for the adoption of three specific autonomous intelligent agent scenarios and prototypes. Alternatively, I may also tackle the issue with different modes of a specific agent scenario, but it is important explore the similarities between several case studies to observe the trust/agent domain relationship. To foster conversations about the possible futures with trustworthy interactions, I will use interaction design. Although I will mainly use them as stimulants, they do not have to be necessarily finished, ready-to-be-implemented designs. I will decide on the final quality of these design artifacts such as Lo-Fi, Hi-Fi or “‘Wizard of Oz’ type of video prototypes” after the exploratory research phase. If my findings led me to a final combined prototype of previous three experiments, this study can also include a final interaction prototype.

Resources and Feasibility

Successful completion of this study will depend on various variables. For the exploratory research part, facilitating an extensive secondary research and supporting it with a number of in-depth interviews with design practitioners will be crucial. For the concept/artifact design steps, successful implementation of design artifacts and recruiting an acceptable sample for evaluating the artifacts will be decisive. All design artifacts will be developed with personal funds of the author and other additional small research funds through CMU resources. The proposed in-depth expert interviews will be carried out via online conference calls whenever possible.

Relevant Work

Trust has been studied in the fields of Autonomous Systems Research, Human-Robot Interaction, and Human-Computer Interaction as well as a visual design quality with the credibility-centered (design) research before. Trust has also been a phenomenon that researched by many other disciplines such as sociology, behavioral sciences*. In addition, trust has been gathering attention by the interaction and user experience designers for last three years due to the increasing applications of emerging technologies. By the recent increasing interest in the sub-fields of artificial intelligence such as machine learning, the black box algorithms are also being questioned about their possible future as well as what can be done to break the black box to foster the adoption of these algorithm-driven agents. To cover all these points, existing trust definitions and related concepts such as transparency, familiarity or reliance will be examined in relation to the autonomous intelligent agents.

Approach and Method

This research will consist of five rounds and follow the “research through design” method. (1) The research will start with an exploratory research, which consists of a literature review as well as in-depth interviews with practitioner designers of these complex systems. By using exploratory research findings, (2) (3) (4) three interaction design scenarios/artifacts will be developed. Although the fidelity of these scenarios hasn’t been clear yet, the idea is to designing small experimental artifacts to test quickly with a number of participants to test the proposed hypotheses. These artifacts will mostly base on digital experiences in one or more of these domains: autonomous transportation, conversational assistants, financial technology products or a social media website. For example, in a very traditional user experience sense, even testing the relationship between trust and certain dimensions such as familiarity through different interface modalities such as a mobile app can be possible. On the other side, I may also prototype a “deceiving” algorithm, which intentionally designed to show users’ how easy they trust a computer program with the help of the courses on machine learning and programming that I am planning to take. At the end of each artifact development session, both quantitative and qualitative evaluation methods will be used to get participants opinions through artifact testing workshops for validation. (5) At the final stage, all findings will be analyzed and documented in a thesis book as well as an artifact exhibition.

Thanks for stopping by!
If you are interested in contributing to my research, don’t hesitate to contact me from mericda@cmu.edu. I’m always interested in being distracted by learning about related works, research on human-machine trust, this time specifically conversational interfaces, algorithmic experiences, and trust.

Bibliography

[1] Oxford Living Dictionary, “Algorithm — Definition of Algorithm in English | Oxford Dictionaries,” accessed April 25, 2017, https://en.oxforddictionaries.com/definition/algorithm.

[2] Frank Pasquale, Black Box Society : The Secret Algorithms That Control Money and Information, n.d.

[3] Bennie Mols, “In Black Box Algorithms We Trust (or Do We?) | News | Communications of the ACM,” 2017, https://cacm.acm.org/news/214618-in-black-box-algorithms-we-trust-or-do-we/fulltext.

[4] Aaron Springer, Victoria Hollis, and Steve Whittaker, “Dice in the Black Box : User Experiences with an Inscrutable Algorithm,” 2017, 427–30.

[5] Will Knight, “The Dark Secret at the Heart of AI — MIT Technology Review,” accessed April 15, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

--

--