Whose Voice?

Indi Young · Published in Inclusive Software · Aug 24, 2020 · 5 min read

About those “well-meaning” notifications …

There’s a million apps that want to help you plan your time better, get fit, lose weight, pick up your kids, and remember to order the groceries. These apps all propose to make your life a little better. But the communications you hear and see from these apps contain mixed messages.

This morning I got a message from my Android phone saying that it would like to help me know in advance about traffic on my commute. My commute is a walking commute and takes about 10 seconds, because my house is small. Supposedly this perkily helpful little commute-traffic message went out to all the transit riders, all the night-shift workers, all the cancer patients and retired people and folks who live outside of urban areas, etc. If all these people add up to roughly 50% of Android users, and there are two billion users worldwide, and it took 2 seconds for each person to read, comprehend, and delete the notification, did Google just waste 23,000 days’ worth of time? Google might be able to tell if a person commutes in a vehicle based on the daily location information they’re reputed to be tracking. But they did not use that information to filter recipients — or more likely, they know that I have location turned off, and they’re trying to entice me to turn it on so they can collect more data.
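As a rough back-of-envelope check of that number (using this article’s own guesses, not any measured data), the arithmetic works out like this:

```python
# Back-of-envelope check of the wasted-time estimate above.
# Every input is a rough assumption from the paragraph, not measured data.
android_users = 2_000_000_000          # ~2 billion Android users worldwide
share_without_commute = 0.5            # guess: half have no driving commute
seconds_per_notification = 2           # time to read, comprehend, and delete

wasted_seconds = android_users * share_without_commute * seconds_per_notification
wasted_days = wasted_seconds / 86_400  # seconds in a day

print(f"{wasted_days:,.0f} days")      # ~23,148 days, roughly 63 years
```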

In any case, they wasted two seconds of my day, and probably two seconds each for the other people who also have location turned off. Phishing, in a way. Who are these people who decide to push these notifications? And who gets assigned to write the little messages? Have I met them at a conference or a workshop?

These messages follow an annoying pattern. Take any random fitness watch, and the experience begins with the assumption that you are trying to get fit by walking or running. Really, it’s the accelerometer, gyroscope, or other inertial sensor in the device that assumes you are walking. Fitness is implied. Behavior modification is assumed. I was having dinner one Sunday with a friend of mine, and he would periodically wave his arm around emphatically. I asked about this new habit, and he admitted he was on a team at work competing in a corporate step-count event. His arm-waving hack was brought on by this message:

Garmin fitness watch message: “Insight: We’re seeing a pattern on Sundays.”

As I read the message, a Jack Nicholson-like voice started repeating it in my mind. “We’re noticing a pattern on Sundays …” Chilling. Creepy! Wave your arms!!

I was telling my intern this story, and she laughed and told me about a road trip she was on with a friend whose watch wouldn’t stop reminding her that she needed to get up and move about.

A few months later, my friend Mike Kuniavsky tweeted something in the same vein, about his fitness app’s message sounding a little passive-aggressive.

Google Fit’s message: “Let’s adjust your goals. It looks like your goals are too high. Let’s adjust them so they fit you better.”

These two messages have something in common, possibly the root cause of the psychopathic tone: the grammar of the sentences. A simple sentence is made up of a subject + verb + object. In both of these messages, the subject is “we” or “us” (as in “let’s” — “let us”).

Who is “we”?

Maybe I’m the only one wondering this, but who is the voice of these messages? Is it the team who wrote the software, offering a helping hand to the user? Is it an unnamed set of coaches? It’s weird to say “we” or “us” without identifying any people or algorithms. And if it’s an algorithm, why the plural? I’m all for multiple algorithms working together, but please identify yourselves clearly. What’s with the secrecy? Is there a problem with admitting that it’s an algorithm making the messages appear? (Which leads to my eternal question: Why are so many people fascinated by the Turing Test? Why is that a good goal? The Turing Test assumes that we don’t want to be able to discern between a person and a ‘bot. That might be true for some thinking styles, but not for me.)

Other examples of messages that have “we” or “us” as the subject of a sentence include notices from a homeowners’ association (“we see that you haven’t mowed your lawn in a while”) or maybe an oversight or review committee (“while your application has merit, we think you could have done more research on the topic”). In these cases, the “we” group has defined the bar you must reach. In the case of the fitness notifications, I’m not sure you get to define the bar, except in terms of number of steps, so it feels like a similar situation. A group is defining what is “good” and what is “bad,” and you must follow along. I think this is where many people rebel, and also where the fitness apps stumble.

Instead of setting the subject of the sentence to “we,” what about setting the subject to “you,” the fitness-seeker? “You haven’t walked [your goal] the past four Sundays. Want to reset goals by day, snooze, skip today, or something else?” It makes a lot more sense grammatically, and it removes the passive-aggressive tone of the messages.
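If you happen to be the one implementing these notifications, the same grammatical shift can live in the message template itself. Here is a minimal sketch in Python (all names are hypothetical; no real fitness SDK or API is implied) of a notification whose subject is the wearer, and whose actions leave the bar in the wearer’s hands:

```python
# Hypothetical sketch: a goal-check notification with "you" as the subject.
# None of these names come from a real fitness SDK; they are illustrative only.
from dataclasses import dataclass

@dataclass
class GoalStatus:
    day_name: str       # e.g. "Sunday"
    missed_count: int   # consecutive misses of the goal on that day
    goal_steps: int     # the user's own step goal

def goal_check_message(status: GoalStatus) -> dict:
    """Build a notification that addresses the user directly instead of speaking as 'we'."""
    body = (
        f"You haven't walked {status.goal_steps:,} steps "
        f"the past {status.missed_count} {status.day_name}s."
    )
    # Offer choices rather than prescribing a correction.
    actions = ["Reset goals by day", "Snooze", "Skip today", "Something else"]
    return {"body": body, "actions": actions}

print(goal_check_message(GoalStatus("Sunday", 4, 10_000)))
```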

The time to use “I” as the subject of messages from an app is when that app has a name or a named function. Alexa, Siri, (heh) Clippy, for example. You can use “we” if your app matches the user with a group of people for feedback and motivation — but only if that group of people has really conferred with each other and written a group opinion. So, if you find yourself writing these kinds of messages and you’ve included “we” or “us,” then take a step back and see if you can discuss a different direction with your team.

Some references:
On RadioLab, there was an episode that touches on a similar theme, “More or Less Human” (17-May-2018). The episode explores the fascination with making an algorithm that mimics human conversation. There’s an associated YouTube video, Robert or Robot?, that includes a reference to Eliza, an algorithm written decades ago by Joseph Weizenbaum to explore natural-language conversation. (He was apparently upset at the degree to which people, notably his assistant, would tell Eliza their life’s woes, projecting human intention onto his algorithm, even when they knew it was an algorithm.) In this video, writer Brian Christian expresses the hope that humanity can eventually “use tech in a way that doesn’t distance us, but in a way that enables us to be more fully human.” (Beware the dream of the technologist, heh, against the tide of money and power.) I was intrigued by the VR program wherein the user plays the role of both the therapy client and the therapist; conducting a therapy session from both viewpoints allows the user to gain insights on their own thinking simply by role-playing in a convincing context.

Henriette Cramer — Assessing and Addressing Algorithmic Bias — But Before We Get There…
Sarah Mennicken — Challenges and Methods in Design of Domain-specific Voice Assistants
Afshin Mobramaein — Talk To Me About Pong: On Using Conversational Interfaces for Mixed-Initiative Game Design


Qualitative data scientist, helping digital clients find opportunities to support diversity; Time to Listen — https://amzn.to/3HPlESb www.indiyoung.com