Empathizing with Algorithms

from ACM Interactions / July–August 2016

Uday Gajendar
The Designer’s Speakeasy


Human-computer interaction, as implied in the phrasing, involves approaches for exploring, enabling, or optimizing the relationship between people & computational systems. There is a negotiation of intent between users and systems via discrete combinations of controls with fairly constrained yet recognizable behaviors: buttons, tabs, switches, dials, text fields, icons, and so forth — both pixel- and atom-based. Such elements make up the interface, helping users accomplish some task that may occur across a range of forms or screens. Sure, it all seems quite complex, yet it’s fairly direct and tangible. To put it bluntly, there is something there to interact with. It may not be entirely comprehensible, but it’s embodied and visceral, with some perceptible affordance and feedback.

Yet today we are speeding toward a new frontier for HCI, where the computational aspect is no longer quite so… there. It even seems to have a mind of its own that evolves over time, anticipating and predicting actions or desired outcomes, supposedly on our behalf. I am of course referring to the pervasive rise of AI: enhanced algorithms built via machine- and deep-learning systems that thrive on cloud-based networks, copious crowdsourced data, and momentary human interventions to help correct errors along the way. Whether embedded within self-driving cars or the much-hyped Internet of Things (IoT), conversational interfaces, voice-based virtual assistants, and predictive analytics are all profoundly altering the nature of computing.

Consider the expanding gamut of algorithmically driven experiences. Netflix’s and Amazon’s recommendation systems quietly started this shift, and then Siri exploded into pop-cultural consciousness. Before long we had the Nest learning thermostat adjusting your “comfort level,” Google Now suggesting when to leave for your appointment, Tesla pushing overnight updates for self-parking and lane-switching, and Amy the x.ai virtual assistant coordinating meetings with real people (or maybe their own bots?). This ain’t Clippy anymore, haughtily presuming you’re writing a letter to Mom! Things have certainly… evolved.

Indeed, these systems seem to function on their own via an opaque, hidden animating force. They go deeper into the sphere of interaction, following inscrutable rules of behavior whose limits or capabilities are rather vague to regular folks — yet seem to be changing as they “learn.” How does one make sense of and engage intelligently with these systems underlying so-called smart objects or interfaces? How do we handle them as HCI professionals, applying our core UX principles and practices? Indeed, what does it mean to design for the human-algorithmic experience? This strikes at the core of HCI’s role in this emerging world.

Perhaps there’s a clue in the uproar over Twitter changing its timeline stream to be algorithmic, away from the previous model where posts appeared in reverse chronological order (most recent items at the top of the screen). Facebook is totally algorithmically driven: we all complain about seeing the same five friends’ posts in our feed, despite having more than 250 friends, right? Interspersed with ads, ads, ads! All of which muddies (and sullies) the platform’s intent, seemingly driven by ad sales revenue, not “bringing the world closer.” The experience becomes a mystery. Instead of joyous serendipity, there’s some dubious manipulation, shifting the dynamic while eroding trust and respect for the product, or at least creating some cynical notion of online “friends.” Can a product placement or brand be a friend, really? Hmm…

Many users want Twitter to be a pure, unadulterated feed with transparent clarity in how it behaves, setting expectations for the behavioral model and thus how to interpret the content it presents. What the Facebook activity feed vs. Twitter timeline order suggests is this: The nuances of the user-product relationship have been subtly distorted. Algorithms are being woven into the fabric of daily life, influencing how we choose a movie, research medication, and book a flight. Who is in control, or controlling whom? Is the point to improve the human condition or to make a buck? This may lead to users trying out maneuvers for coping and compensating (i.e., gaming the system) to somehow circumvent or manipulate the algorithm, complicating the relationship even further. How the system behaves quietly yet powerfully shapes our responses and implicit values therein. Throw in some mysterious (and presumably IP-protected) “smarts” and things get messy, fast! That product relationship no longer feels natural. The seams are exposed; the fabric of digital life is fraying at the edges.
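
To make that difference in behavioral models concrete, here is a minimal Python sketch (my own hypothetical names and a made-up engagement score; not any platform’s actual ranking code) contrasting the two orderings:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        posted_at: datetime
        predicted_engagement: float  # hypothetical score from an opaque model

    def chronological_feed(posts):
        # The "pure, unadulterated" model: newest first, no hidden judgment.
        return sorted(posts, key=lambda p: p.posted_at, reverse=True)

    def algorithmic_feed(posts):
        # The opaque model: a hidden score reorders the feed, so the same
        # few high-scoring friends can crowd out the other 245.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

A user can form an accurate mental model of the first function from the interface alone; the second depends on a score they never see, which is precisely where the seams start to show.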

So, algorithms clearly add interesting dimensions to HCI practice. How do we judiciously shape such encounters to be more positive and clear for both the user and the algorithm, in a balanced dialogue where interaction creates value?

I would boldly suggest that to properly enable this, we must empathize with the algorithm! What does that mean? Let me clarify. I’m not saying HCI experts need to program these smart algorithms. We must leave that to the experts in machine-learning protocols (and to culturally and ethnically diverse experts, to broaden AI’s range). Yet we should try our best to understand the nature of such algorithms — their rule definitions as well as the evolving signals and contexts for learning — to form causal connections. For instance, how does the damn thing connect the dots to draw inferences and suggest the proper actions for users on their behalf? What are the governing conditions that make the algorithm function effectively, and toward what aim? What are all the data sources fed into the algorithm? And who generates that data? From where? How is that data being scrubbed (and against what rules or constraints) to ensure a suitably diverse dataset that broadens the algorithm’s awareness of meaningful possibilities (i.e., accounting for various gender, ethnic, or cultural differences)?
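
To ground those questions, consider a toy co-occurrence recommender (a deliberately simplistic Python sketch with hypothetical data; real systems are vastly more sophisticated). It shows mechanically why “who generates that data?” matters: the dots an algorithm can connect are bounded by the histories it is fed.

    from collections import Counter
    from itertools import combinations

    def train_cooccurrence(histories):
        """Count how often pairs of items appear together in user histories."""
        pair_counts = Counter()
        for history in histories:
            for a, b in combinations(sorted(set(history)), 2):
                pair_counts[(a, b)] += 1
        return pair_counts

    def recommend(item, pair_counts, top_n=3):
        """Suggest the items most often seen alongside the given one."""
        scores = Counter()
        for (a, b), count in pair_counts.items():
            if a == item:
                scores[b] += count
            elif b == item:
                scores[a] += count
        return [candidate for candidate, _ in scores.most_common(top_n)]

    # A narrow dataset yields narrow inferences: the algorithm can only
    # ever recommend what its training histories already contain.
    histories = [["thriller", "noir"],
                 ["thriller", "noir", "heist"],
                 ["thriller", "heist"]]
    print(recommend("thriller", train_cooccurrence(histories)))
    # -> ['noir', 'heist']

If the input histories skew toward one audience, so do the suggestions; scrubbing and diversifying the dataset changes what the algorithm is even capable of recommending.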

Through involvement in defining the algorithm, HCI professionals may contribute unapologetically humanistic or social insights, perhaps lost on an engineering team laser-focused on the feasibility of an algorithm’s computational qualities.

But how does one empathize with a smart algorithm? It’s a rather provocative notion, particularly if you’ve seen Blade Runner or Ex Machina, which are becoming cinematic prophecies, not mere fantasies. While applying an actual Voight-Kampff machine test might seem kinda cool, let’s remind ourselves that empathy is about understanding a situation from the other’s point of view, given their context (tasks, goals, values), so that we can create a meaningful encounter balancing everyone’s vested interests. This of course includes users and stakeholders, but we must also consider how this artificially constructed intelligence, with a programmed sense for learning and growing, sees a situation, from its “eyes.”

Here are some methods we might try, borrowed from the standard HCI canon:

  • Storyboard key scenes with the algorithm present and observing, taking in various kinds of data and drawing certain conclusions, conveyed back to the user in some communicable manner. What are those data elements? How do they relate to the user or physical space or state of motion?
  • Act out being an algorithm — body-storm it! Embody that sense of agency (the algorithm is now an actor in the scene, too) with a curious, unsuspecting user. What is the algorithm trying to do? What are the triggers/invitations to engage and the consequences of implied actions, and how are errors handled? How does a human interact with a misbehaving algorithm?
  • Write a play-like script for algorithmic dialogue with users; a small sketch of such a script follows this list. Consider the actual or implied conversation between user and system, mediated via the smart object/device/interface. Delve into the “call and response” repartee aspects, maybe as if Aaron Sorkin had written it (even with those dramatic “walk and talk” moments!). What parts would be physical reactions or verbal responses or text replies? Or something else? Also consider the users’ emotional states and the background details (time of day, level of noise, other people around, etc.).
  • Visually map out all the system touch-points between users and smart objects/interfaces. This might help clarify the overall scope of impact, the ripple effects beyond the momentary encounter. When does algorithmic (and also user) learning occur? What are the relationships of all the pieces in terms of data inputs, outputs, and so on? Consider the emotional valence of certain elements in that system (actors, messages, actions, consequences).
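
For the scripting method in particular, a lightweight structure keeps those questions honest. Here is a minimal Python sketch (a hypothetical Turn structure and scene of my own invention, purely illustrative) capturing one call-and-response exchange along with emotional states and background details:

    from dataclasses import dataclass

    @dataclass
    class Turn:
        actor: str            # "user" or "algorithm"
        modality: str         # "speech", "text", "gesture", "ambient"
        line: str             # what is said, shown, or done
        emotional_state: str = ""
        context: str = ""     # time of day, noise level, bystanders...

    script = [
        Turn("algorithm", "ambient", "dims the lights",
             context="7 p.m., quiet living room"),
        Turn("user", "speech", "Hey, why is it so dark in here?",
             emotional_state="mildly annoyed"),
        Turn("algorithm", "speech",
             "I dimmed the lights since you usually wind down now. Undo?"),
        Turn("user", "speech", "Yes, undo. I'm reading tonight.",
             emotional_state="placated"),
    ]

Walking through such a script, turn by turn, surfaces the triggers, the error handling, and the moments where the algorithm ought to explain itself.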

These approaches could help us move forward in sensing the world through a smart algorithm’s eyes, improving the user’s interactions with them and making them more natural and useful.

But why is this important at all? As computational devices are rapidly becoming intelligent systems that grow and adapt in support of human tasks, it is a primary duty of HCI/UX professionals to ensure these advancements are truly improving the human condition, enabling real relationships and supporting our daily activities. These systems should have triggers that notify users about what’s happening (and why) and allow them to intervene, change settings, and correct errors. They should offer a proper feedback loop to build users’ confidence and assure them that the algorithm is indeed learning and humming along nicely. Interactions should be extremely polite, helping make the human-algorithmic experience more dignified — in a word, charming, without being over the top.
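
As a sketch of what such a feedback loop might look like (hypothetical names throughout; a design probe, not a production pattern), each suggestion carries its own explanation and records whether the user accepted or corrected it:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Suggestion:
        """One algorithmic action, carrying its own explanation."""
        action: str
        because: str                      # surfaced to the user: what, and why
        accepted: Optional[bool] = None   # None until the user weighs in

    @dataclass
    class FeedbackLoop:
        history: list = field(default_factory=list)

        def propose(self, action, because):
            # Notify before acting, so the user can intervene or change settings.
            suggestion = Suggestion(action, because)
            print(f"May I {action}? (Because {because}.)")
            return suggestion

        def record(self, suggestion, accepted):
            # Every correction is a learning signal, and the visible history
            # reassures the user that the algorithm is actually listening.
            suggestion.accepted = accepted
            self.history.append(suggestion)

    loop = FeedbackLoop()
    s = loop.propose("lower the thermostat to 65F",
                     "you're usually asleep by 11 p.m.")
    loop.record(s, accepted=False)  # a correction to honor, not bury

Asking before acting is what makes it polite; the visible history of honored corrections is what makes it trustworthy.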

To help users cross the chasm and fully engage with algorithms in their devices and services, seeing how the algorithm perceives the world (or at least the user) is a big step toward enabling smooth, graceful, charming interactions that feel authentic. Ironically enough, one must in effect be the algorithm to deliver a human experience!

Copyright held by author. The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.

Originally published at https://interactions.acm.org.
