Chatbots Can Support Needs, Too

Three Tips to Improve Your Scripts

Steven M. Ledbetter
Practical Motivation Science
5 min read · Jun 28, 2018


“The most ordinary conversations are fraught with life and all its meanings.” — John Haugeland, Having Thought

Habitry gets asked to help out with a lot of chatbot design. We got our start teaching health coaches how to have need-supportive conversations with their patients and clients, so there are a lot of obvious similarities. For example, both coaches and chatbots attempt to use words to influence human behavior. And despite good intentions, neither coaches nor chatbot designers have any control over the impact those words will actually have.

But here are three things we’ve learned that really up the odds that the intent you put into your chatbot scripts will match the impact they eventually have.

Be Interested

People project a lot onto chatbots. This phenomenon has been around since the first “chatterbot.” Joseph Weizenbaum created ELIZA in 1964 and programmed her with a script called DOCTOR. Weizenbaum wrote the script to parrot back basic reflective listening techniques he learned from watching the OG DOCTOR: Carl Rogers. The pairing of humanistic psychology and cutting-edge technology resulted in the first viral software sensation. Scientists all over the world smuggled copies of ELIZA onto their multi-million dollar IBM 7094s and began telling her their innermost thoughts via teletype. If you know anything about the power of reflective listening — it’s at the heart of all counseling modalities — then you can probably guess why ELIZA was so effective: she seemed genuinely interested, which supported people’s Basic Psychological Needs for autonomy, competence, and relatedness.

The Prime Directive: Support Basic Psychological Needs by being genuinely interested and letting that show up in the words your chatbot says.

Avoid Being an Accidental Asshole

Almost all of Habitry’s consulting engagements start with a content audit. We take a scien-tastic approach that involves SDT-based questionnaires and multiple trained raters, but at the end of the day a “content audit” is just a fancy term for “reading everything you’ve written, then giving feedback on it.” After our content audits, many of our clients are shocked by the feedback from our raters. “I didn’t mean it like that!” they say.

Coaches get immediate, visceral feedback when the impact of their words is different from their intent. Tears, for example, are immediate feedback. But sometimes coaches know even sooner — as soon as they hear the words coming out of their own mouths — that they have just become an Accidental Asshole.

We’ve noticed that chatbot designers do not have this immediate, visceral feedback, so Omar Ganai and I went back to our multi-billion dollar Habit Lab to come up with a foolproof solution for chatbot designers everywhere. And after years of research and only a few toxic explosions, we finally cracked it.

Read all your words out loud.

Yup, the fastest way to hear if your bot is an Accidental Asshole is to read the words it’s going to say out loud. You can even take it a step further by getting into character and talking to your coworkers as your chatbot. And yes, we actually do this in our workshops (sidebar to coaches: yes, it’s just role-playing, but many product teams don’t do it and need our help to learn how).

Only Offer Actual Choices

Many chatbot scripts are written as a “Happy Path” that designers want people using them to take, with edge cases written later that force people back onto the Happy Path. Sometimes the result of this approach is that the “choices” the bot offers don’t feel like actual choices.

Bot: “Would you like to learn more about our services?”

Human: “No.”

Bot: “OK, I’ll ask you again later.”

This exchange is not autonomy-supportive because it’s obvious the designers are going to pester you with this question until you say “yes” and then begrudgingly listen as the bot rattles off what it can do while you ignore it and check Instagram on your phone. The question the designers actually wanted to ask was, “would you like to hear about our services right now?” but they phrased it this way to give the illusion of a choice: to seem polite, while not actually giving the human any real control over their experience.
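To make the problem concrete, here is what that pseudochoice looks like sketched as script logic. This is a rough Python sketch with made-up function, field, and message names, not code from any real bot platform; notice that “no” never actually closes the topic, it just reschedules the pitch.

```python
# A rough sketch of the pseudochoice above. The function name, state dict,
# and messages are hypothetical, not taken from any real bot platform.

def handle_services_prompt(answer: str, state: dict) -> str:
    """'Would you like to learn more about our services?'"""
    if answer.strip().lower() == "yes":
        return "Great! Here is everything we can do..."  # the pitch
    # "No" doesn't close the topic; it just reschedules the same question.
    state["re_ask_about_services"] = True
    return "OK, I'll ask you again later."
```

Both branches eventually end at the pitch; the only “choice” on offer is when it happens.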

In Self-Determination Theory, these illusions of choice are a special category of phrases called “pseudochoices.” The phrases look like choices, except they are given in the context of a coercive relationship that precludes any actual autonomous action. Some other popular pseudochoices are:

  • “George W. Bush: Great President, or Greatest President?” — Stephen Colbert
  • “My way or the highway.” — Your Dad.
  • “You can always go get another job.” — Your boss.
  • “Love this country or get out.” — Your uncle.
  • “We aren’t forcing you to join the Army; you have the choice to go to jail instead.” — the Colonel who informed my Father-in-Law he was being drafted in 1972.

These do not feel like choices because they aren’t choices. Many of them are thinly veiled threats, reminders of a person’s lack of autonomy. That is why many chatbot designs that begin with a Happy Path end up feeling a little like that Stephen Colbert quote. These pseudochoices are a peek behind the veil at the Tristan Harris line: “if you control the menu, you control the choices.”

The way to avoid these icky-feeling pseudochoices is to ditch the Happy Path you want people to take and start instead by being genuinely interested in all the paths that people could take. What’s the problem you’re helping them solve? What are the choices these people need to make in order to manage or solve this problem? What information do they need and when do they need it in order to make those choices? These are the actual choices the people using your chatbot need to make.

When you map the answers to these questions, you will suddenly see that there is not one Happy Path, but that there are lots of little paths that your chatbot can help people navigate while maintaining their autonomy. And by focusing on actual choices instead of writing words that only sound like choices, you’ll be supporting their Basic Psychological Needs instead of accidentally thwarting them.
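In practice, that map can start out as something embarrassingly simple: write the choices down as data before you write a single line of bot copy. Here is a rough sketch; the structure and field names are ours for illustration, not any standard format.

```python
# A rough sketch of mapping actual choices before writing any bot copy.
# The structure and field names are illustrative, not a standard format.

choice_map = {
    "problem": "People only want help with this when it's relevant to them",
    "choices": [
        {
            "decision": "Hear about services now, later, or not at all",
            "info_needed": "A one-line summary of what the bot can do",
            "when": "First session, or whenever the person asks",
        },
        {
            "decision": "Accept or decline a reminder",
            "info_needed": "When the reminder will arrive and how to cancel it",
            "when": "Only after the person says they don't want it now",
        },
        {
            "decision": "Reopen the topic on their own terms",
            "info_needed": "The keyword that brings it back ('services')",
            "when": "Any time",
        },
    ],
}
```

Each entry in that map is a little path your script has to honor.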

With that in mind, let’s look at another way the above exchange could have gone.

Bot: “We offer lots of services designed to make your life easier. Would you like to hear about some of them now?”

Human: “No.”

Bot: “OK. Would you like me to remind you about them at a more convenient time?”

Human: “Sure.”

Bot: “OK, I’ll ask you again next week. And you can always ask me about them by saying, ‘services’ or ‘what can you do?’”

Even if that had gone differently, the designers could still support autonomy with a subtle shift.

Human: “No.”

Bot: “OK. You can always ask me about them by saying, ‘services’ or ‘what can you do?’”
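If it helps to see the branching spelled out, here is a rough Python sketch of that revised logic. The function names, state dict, and strings are hypothetical; the point is that every answer leads somewhere real instead of looping back to the pitch.

```python
# A rough sketch of the revised exchange. Every answer is honored: "yes"
# gets the tour, "no" gets a genuine alternative, and the person can
# always reopen the topic themselves with the 'services' keyword.

def handle_services_prompt(answer: str, state: dict) -> str:
    """'Would you like to hear about some of them now?'"""
    if answer.strip().lower() in ("yes", "sure"):
        return "Great! Here's a quick look at what I can do..."
    # "No" is a real answer; offer an optional follow-up, not the pitch.
    state["awaiting"] = "reminder_prompt"
    return "OK. Would you like me to remind you about them at a more convenient time?"

def handle_reminder_prompt(answer: str, state: dict) -> str:
    """'Would you like me to remind you about them at a more convenient time?'"""
    if answer.strip().lower() in ("yes", "sure"):
        state["remind_next_week"] = True
        return ("OK, I'll ask you again next week. And you can always ask me "
                "about them by saying 'services' or 'what can you do?'")
    # Declining the reminder is honored too; just leave the door open.
    state["remind_next_week"] = False
    return "OK. You can always ask me about them by saying 'services' or 'what can you do?'"
```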

Instead of designing a Happy Path that’s just “how can we force people to listen to an ad for our services?”, we gave the users actual choices about if and when they wanted that information.

“Do you want to hear about this now?” and “Can I remind you later?” are now actual choices instead of pseudochoices and our bot seems genuinely interested in the human’s experience because we took the extra time to be interested in them.
