Why It Is Important to Design Artificial Intelligence (AI) Systems with Human Agency in Mind

Loopio · Published in Loopio Tech · Jan 4, 2023

One of the community-building activities we have at Loopio is called the Artificial Intelligence (AI) Guild. This is a quarterly meetup where Loopio employees (we refer to ourselves as Loopers) come together for energetic conversations on AI. Since AI is a hot topic in tech, the goal is to create space for enthusiasts to discuss and debate topics like:

  • The evolution of AI
  • Its underlying theories, algorithms, metrics, and best practices
  • Recent advances in the space
  • Ethical considerations when building and implementing AI
  • Societal consequences

The AI Guild recently hosted a conversation between Andrea Ong, Director of Product Design, and Suneil Shrivastav, Data Science Team Lead, with active attendee participation. The discussion centered on Designing AI-Powered Business-to-Business (B2B) Products with Humans in Mind.

Andrea came to Product Design through a business analyst background and as such, her design practice is biased towards systems thinking. Suneil’s first introduction to data through Excel sheets and pivot tables evolved into Data Science mastery by way of strategy and business consulting, business intelligence, and analytics. In this blog post, we distill some of the discussion points from this meetup.

What Exactly is Artificial Intelligence (AI)?

AI Encompasses Machine Learning (ML) and Deep Learning (DL)

The theories that power recent AI advances have actually been around for a long time. These were previously shelved due to insufficient data and the lack of computing power. With the internet boom, there came an explosion of content, at the same time that the cost of computational power was being reduced. This gave rise to the democratization of AI, and algorithms gained life.

The Wikipedia page on Deep Learning has a useful construct that can be used to describe AI.

  • AI can be defined as the ability of computers to mimic the behaviour and intelligence patterns of humans
  • ML is the subset of AI practice that looks at giant datasets and tries to discern patterns within that data. It tries to generalize from the past into the future without a complex set of hand-written rules; the approach is based on training a model on data
  • Within the inner circle lies Deep Learning (DL) — the use of neural networks as the specific model type being implemented. The ‘deep’ of DL refers to the number of layers within the architecture of the neural network

While there are distinctions and nuances, the overarching capability of ML and AI is to be very good at mimicry and pattern-matching.

Deep Learning and Machine Learning as subsets of AI (from the Wikipedia entry on Deep Learning)
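
A minimal sketch can make the distinction concrete. Below, a classic ML model and a small deep neural network are trained on the same synthetic data; scikit-learn is used purely for illustration, and the dataset and layer sizes are arbitrary choices, not anything from the discussion.

```python
# A minimal illustration of "shallow" ML vs. deep learning.
# scikit-learn, the synthetic dataset, and the layer sizes are arbitrary choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic stand-in for the "giant datasets" ML learns patterns from
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic ML: a single model trained on data, with no hidden layers
shallow = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep Learning: a neural network whose "depth" is its number of hidden layers
deep = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=1000,
                     random_state=0).fit(X_train, y_train)

print("shallow model accuracy:", shallow.score(X_test, y_test))
print("deep model accuracy:   ", deep.score(X_test, y_test))
```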

What is a Good Way of Contextualizing Design Work for AI?

Framing AI as Agentive vs Assistive Helps Contextualize Design Work

From a design perspective, it is helpful to think of AI as a spectrum, going from narrow or weak AI to artificial general intelligence (AGI) or strong AI, to artificial super intelligence. These can be defined as:

  • Narrow / weak AI: Help me do stuff that I tell you to do
  • AGI / strong AI: A state of AI that mimics human behaviour, uses historical datasets to inform its next actions or interactions, and learns from doing
  • Artificial super intelligence: The AI is smarter than you

This framework is helpful for designers because it frames the technology in terms of assistants vs. agents.

Assistants help us do things that we tell them to do. Agents make decisions and act on them themselves.

  • A vacuum cleaner is an assistant and helps me clean as I wield it around my living room
  • A Roomba is an agent that I ‘hire’ to clean for me — it learns the layout of the room and decides for itself on the best way to clean the space

Assistants sometimes fail while agents sometimes go rogue!

Agency is very important because, at the end of the day, we are using technology in the service of humans, not technology for its own sake.

Considering this, we then dove into the central question of our discussion:

Should Humans Always Have the Ability to Regain Control from AI?

Sometimes YES, sometimes NO: The Answer is Situation-Dependent

We considered areas where we might be willing to cede control. Perhaps when there is an element of decision fatigue fuelled by choice, we might be OK with AI mechanisms making decisions on our behalf. That's when we might want to give more control to assistive or agentive technologies. Perhaps it is only when the application space relates to critical and impactful decisions that we would want to retain control. As an example, while I am happy to let the robot vacuum learn the limits of my space and vacuum in whatever sequence it wishes, I would like to be able to adjust the distribution of my investments myself, even if AI could theoretically make 'better' decisions given the amount of data available to the underlying ML model.

However, the impact and importance of the level of autonomy is not always obvious right away. We would not have thought that curating and recommending personalized news stories or videos was very harmful, but this is what happens through social media, and it creates social bubbles that make it harder to communicate across communities. Additionally, the intention behind most of the AI designs we are exposed to is driven by commercial rather than altruistic motivations. Video-feed algorithms might seem harmless, but the intention is to hook you into watching more.

YES, Humans Should Always Retain Autonomy and Control

We ultimately have a sense of self-importance, that we as humans do know best. As the humans whose lives are being impacted by these AI systems, we want to be given some way to override these AI decisions if it does not actually facilitate the activity we’re engaged in. We additionally want there to be mechanisms of contestability and redress to challenge AI’s decisions, for example when related to healthcare chatbots or credit applications.

Human Agency is Not A Uniform Trait

Let's consider, though, what human agency is and what it means to have agency. If we decide that we are autonomous beings to whom things don't happen without our consent, then we can decide how much control to cede to algorithms.

However, human agency differs from person to person. We also need to think about our own design flaws as human beings: our own implicit biases in doing design in the first place, since we all come with different experiences.

Consider the question: is there a difference between a librarian's authority to curate for us in a public library and an online bookstore's authority to curate personalized recommendations? Perhaps it's the difference between the physical and digital space, where one can bump into new things at the library. However, digital browsing still allows random encounters, so maybe it's not so different.

We tend to think of human beings as benign creatures, but a look at the choices we make as people demonstrates that there is nothing benign about humanity. However, we can and should be part of the philosophical conversation in the world.

How Should We Approach Our Responsibility Towards Implementing AI?

It Is Important to Consider the Ethical Choices Present

Downstream algorithms are not moral agents that can decide what can or cannot happen based on user choices. It’s not unlike the responsibilities associated with teaching a child; we should be more introspective about what we are teaching AI to learn. Ultimately, a human being made the decision about what metric to optimize for. As the humans behind these decisions, we agreed that it is our obligation (as part of the design process), to optimize for real-world positive actions. Though time spent on site correlates with the likelihood of clicking on an ad, perhaps optimizing model outputs for social good can help overcome some of these negative effects.
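
As a toy illustration of how much the choice of metric matters, consider ranking content by a predicted engagement score alone versus blending it with a separate quality signal. The field names, weights, and numbers below are hypothetical; they only show that a human picks the objective, and are not anything Loopio or the panellists proposed.

```python
# Hypothetical sketch: the ranking a system produces depends entirely on which
# metric a human chose to optimize. Field names and weights are invented.
items = [
    {"id": "a", "predicted_watch_time": 9.0, "quality_score": 0.2},
    {"id": "b", "predicted_watch_time": 6.0, "quality_score": 0.9},
]

def engagement_only(item):
    # Optimize purely for time spent watching
    return item["predicted_watch_time"]

def blended(item, quality_weight=0.5):
    # Trade raw engagement off against a separately estimated quality signal
    return ((1 - quality_weight) * item["predicted_watch_time"]
            + quality_weight * 10 * item["quality_score"])

print(max(items, key=engagement_only)["id"])  # "a": the stickiest item wins
print(max(items, key=blended)["id"])          # "b": the higher-quality item wins
```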

Additionally, algorithms are ultimately biased by the data they have been exposed to. It is very important to consider our data sources and understand all the ways in which our data itself can be biased before using it to train algorithms.
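
As a small, hypothetical example of that kind of audit, a first pass can be as simple as checking how groups and outcomes are represented in the training data before any model sees it. The columns and values below are invented for illustration.

```python
# Hypothetical pre-training audit: how are groups and outcomes represented?
# The dataframe, columns, and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "region":   ["NA", "NA", "NA", "NA", "EU", "EU", "APAC"],
    "approved": [1,    1,    1,    0,    1,    0,    0],
})

# Is one group heavily over-represented in the data?
print(df["region"].value_counts(normalize=True))

# Do outcomes already differ sharply across groups before we train anything?
print(df.groupby("region")["approved"].mean())
```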

It is important to ask ourselves questions like:

  • What can we do individually to use AI in a way that squares with our moral compass?
  • How can we use this technology to make it easier for humans to do their work?
  • What does the human need that is hard for humans to do but easy for AI to do?
  • Can we create space for human ingenuity?

What Implications Does This Have for How We Implement This Technology at Loopio?

Considering all this, we strive to ensure that ML functionality is implemented in a thoughtful way within the Loopio platform. Last year, we unveiled a vision for the future of our platform and the future of RFP response technology, with the introduction of Response Intelligence™️ (RI). The idea behind RI is to allow sales professionals to work smarter, with AI-fuelled insights and recommendations that guide them toward the best, most impactful actions within their response processes. Functionalities include identifying subject matter experts (SMEs) to assign questions to and surfacing writing tips that have been found to improve writing quality.

While we are confident in the performance of the underlying algorithms, we still strive to respect our users' desire for autonomy. We do this by designing the software interface in a way that allows users to ignore the suggestions and make alternative selections counter to our recommendations.
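
Purely as an illustration of that design stance (and not Loopio's actual implementation), a recommendation can be modelled as a default that the user is always free to override:

```python
# Hypothetical sketch: the model's suggestion is only a default; the user's
# explicit choice always wins. Names and fields are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SmeSuggestion:
    question_id: str
    suggested_sme: str                 # the model's recommended expert
    chosen_sme: Optional[str] = None   # whatever the user actually selects

    def resolve(self, user_choice: Optional[str] = None) -> str:
        # The user's choice takes precedence; the suggestion only fills the gap
        self.chosen_sme = user_choice or self.suggested_sme
        return self.chosen_sme

suggestion = SmeSuggestion(question_id="q-42", suggested_sme="security-team")
print(suggestion.resolve())                     # user accepts the recommendation
print(suggestion.resolve(user_choice="legal"))  # user overrides the model
```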

Another aspect of respecting user agency is to understand the psychology behind user adoption of our AI features. As part of the initial research into RI, we came across a user who was angry that we would even think of suggesting content that could be better than what the user could write. We understand that people don't like the idea of being replaced by computers. There is the fear of the unknown, fear of loss, fear of things we don't understand and cannot trust. In this situation, we have to manage expectations. If we can explain to users what we're doing, we can replace fear of the unknown with a little bit of known. We need to help users understand that these features are there to aid them and not to replace them, freeing them to use their discretionary effort to innovate and optimize instead of being locked into mechanical, repetitive work.

Through the process of helping users understand, we build trust, and this will allow us to continue to develop helpful AI features in our platform.

Conclusion

As we continue to build and adopt AI as part of our daily infrastructure, it will remain important to keep active human agency in mind in order to strengthen fundamental rights in the digital age.

If you would like to be part of these conversations, check out the career opportunities available across Loopio’s Engineering, Product, and Design teams.

Article contributions from Andrea Ong (Director of Product Design), Suneil Shrivastav (Data Science Team Lead), Aida Rahim (Data Scientist), Anya Macleod (Senior Solutions Consultant), and Jenny Graham-Jones (Software Developer).
