A testing process to improve our EQAI coach

Edd Baldry
Byrd Run Club
5 min read · Apr 27, 2021

At Byrd we’re committed to keeping runners running. That means building the best software possible, and that means having a test process.

If you’d rather watch this as a video you’re in luck. It’s here: https://youtu.be/i_6bgy-FFYw

Background

We think it’s important to talk you through how we’re planning to test, especially if you’re one of the runners testing Byrd.

This is especially true given our experience of testing in September 2020. I’ve talked before about the failures there, but briefly: in our September testing we put all the responsibility on the person using the app. They had to recognise there was a problem and then write us a message. That meant we only learnt about the most critical problems and, worse, it made testing a rubbish experience.

Cutting corners like this is a bad long-term strategy. That’s especially true when we’re dealing with something as complex as our EQAI™️ coach and its one billion variables.

Overview

The tl;dr version of what we’re doing this time is:
Google Cloud + Zapier + Email + Typeform + Email

The longer version:
We talk in our whitepaper about ubiety — about creating the right run at the right time for each person — and testing needs to follow that pattern.

To that end we’ve built triggers within the product for the steps in the journey you’re likely to go on whilst using the product:

  • Sign-up
  • Registration
  • Importing historical data
  • Going for your first run
  • Whether that run was on- or off-plan
  • Etc.

Each of these triggers will fire a function within Google Cloud.
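As an illustration, here’s a minimal Python sketch of how one of those trigger events might be validated and shaped inside a Cloud Function before anything else happens. The trigger names and the payload fields (`runner_id`, `timestamp`) are illustrative, not our exact schema:

```python
# A minimal sketch of a trigger handler, e.g. the body of a Google
# Cloud Function. Trigger names and payload fields are illustrative.

JOURNEY_TRIGGERS = {
    "sign_up",
    "registration",
    "import_history",
    "first_run",
    "run_on_plan",
    "run_off_plan",
}

def handle_trigger(event: dict) -> dict:
    """Validate a journey event and shape the record we log downstream."""
    name = event.get("trigger")
    if name not in JOURNEY_TRIGGERS:
        raise ValueError(f"unknown trigger: {name}")
    return {
        "trigger": name,
        "runner_id": event.get("runner_id"),
        "timestamp": event.get("timestamp"),
    }
```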

Zapier and Typeform

We’re using these triggers for other parts of the product — they’re essential for creating stories for you about your running — but they’re also incredibly useful as sparks to ask questions about your experience with the product.

Because we’re on Google Cloud infrastructure it’s super simple for us to use the Google Sheets API to post to a spreadsheet. We’ve given Zapier — an automation service — access to that spreadsheet.
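Roughly, the posting step looks like this: each record becomes a row, appended with the Sheets API’s `values.append` method. The column order and sheet name here are our own illustrative conventions:

```python
def to_sheet_row(record: dict) -> list:
    # Column order (timestamp, trigger, runner id) is illustrative.
    return [record["timestamp"], record["trigger"], record["runner_id"]]

def append_row(service, spreadsheet_id: str, record: dict) -> None:
    # `service` is a Sheets API client built elsewhere with
    # googleapiclient's discovery.build("sheets", "v4", ...).
    service.spreadsheets().values().append(
        spreadsheetId=spreadsheet_id,
        range="Triggers!A:C",
        valueInputOption="RAW",
        body={"values": [to_sheet_row(record)]},
    ).execute()
```

Zapier then watches the spreadsheet for new rows, so the product side never has to know anything about the email step.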

We’re firing off an email when one of the triggers is detected. Depending on the trigger it’ll send a slightly different email but they’re all following a broadly similar pattern.
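In code terms, that step is little more than a lookup from trigger to email template, with a shared fallback. The subject lines here are made up for illustration:

```python
# Map each journey trigger to its email copy; subjects are illustrative.
EMAIL_SUBJECTS = {
    "sign_up": "Welcome to Byrd! How did signing up go?",
    "first_run": "You've done your first run with Byrd. How was it?",
}
DEFAULT_SUBJECT = "How is Byrd working out for you?"

def subject_for(trigger: str) -> str:
    # Fall back to a generic subject for triggers without their own copy.
    return EMAIL_SUBJECTS.get(trigger, DEFAULT_SUBJECT)
```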

We’re using an embedded Typeform within the email that is doing sentiment analysis using a Likert scale before asking any relevant follow-ups to better understand the context of the situation.

The questions are all either ranges or choices. We’ve deliberately avoided free-entry text forms because they dramatically increase your cognitive load, reduce the number of completions and tend to give data that’s complex to parse.
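The branching logic behind the form is simple: a Likert score at or below some threshold means we show the relevant follow-up questions. A sketch (the threshold of three is an assumption for the example):

```python
def needs_follow_up(likert: int, threshold: int = 3) -> bool:
    """Likert responses run 1 (bad) to 5 (great); at or below the
    threshold we ask the relevant follow-up questions."""
    if not 1 <= likert <= 5:
        raise ValueError("Likert responses must be between 1 and 5")
    return likert <= threshold
```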

Ending on email

Completing the form sparks another email.

If everything’s going OK with your experience of Byrd we’ll say thanks and let you get back to running.

If things aren’t going OK then we’ll be doing a couple of things.

One — and most importantly — we’ll give you the chance to book some time to talk with us. We’d like to hear in more detail about what the issue is and how we can make it better for you. We’re using Calendly to do that.

Two — there’ll be a bug report going into Github, which means that we can start working on the problem. It means that even if you don’t have time to catch up with us we’ll be able to start working on a fix.
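For the GitHub step, all that’s needed is to assemble a payload for GitHub’s create-issue endpoint from the form answers. Something like this sketch, where the title format and label are illustrative:

```python
def github_issue_payload(trigger: str, likert: int, answers: dict) -> dict:
    """Body for a POST to GitHub's create-issue REST endpoint."""
    lines = [f"Trigger: {trigger}", f"Sentiment: {likert}/5", ""]
    lines += [f"- {question}: {answer}" for question, answer in answers.items()]
    return {
        "title": f"Tester feedback: {trigger} scored {likert}/5",
        "body": "\n".join(lines),
        "labels": ["tester-feedback"],
    }
```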

Why email? Why Zapier?

We’ve been talking about the death of email for years. It’s clearly the least cool communication channel. But our experience of Slack, Facebook Messenger, WhatsApp, and SMS has been that they all bring their own problems. Email strikes the balance of being both immediate and easy to stay on top of. It’s the way most of you have been contacting us in the last year, so we’ll stick with it.

More importantly, why Zapier?

There’s an obvious downside to Zapier, which is that it costs money.

But the benefit of Zapier is that we’re avoiding falling into the Not Invented Here (https://en.wikipedia.org/wiki/Not_invented_here) trap.

We’re doing smart things with Byrd (if we do like to say so ourselves!) so there’s the temptation to do everything ourselves. But start-ups die when they lose focus. Building in a feedback system to Byrd is a distraction since it’s not a core piece of functionality. It won’t be the thing that keeps people running and we don’t want anything to distract us from that mission.

It also ties in to our principle that Byrd is more than just an app. It should play nicely with all your other products and be available wherever you are. Ensuring that our testing is interoperable — and simply sending an email at the right time — fits that principle.

A flaw in the process

There’s an obvious flaw in our testing approach that we should acknowledge.

Daniel Kahneman — the Nobel-prize-winning behavioural economist — talks about the remembering-self and the experiencing-self. You’re very different in these two states. It’s why, as designers, we tend to hold that observational data of someone in the act is substantially better than retrospective data.

Your memory of using Byrd will also be clouded by other events in between. Asking you after the fact is asking your remembering-self to recount the experience.

That’s obviously suboptimal, but is infinitely better than not asking the question.

We’re also trying to make the questions as smooth as possible to avoid your remembering-self thinking too much.

Using scales and options puts the least friction — so the least cognitive load — between you and the answer. In theory it means the response will be slightly closer to what your experiencing-self encountered. It’s easier to feel whether something was a two or a five than to put the experience into words and type it out.

On our end we’ll also be cautious with the results, cross-referencing them against the observational data available in our analytics. If you’re taking the time to help us with testing, we want to take the time to make sure we can use the data you’re sending back and improve the product to give you a better running experience. We’re looking forward to learning what we can improve.

Sign-up

If you haven’t already joined, sign up to the wait-list at byrd.run. If you answer the extra questions about testing you’ll have the chance to test Byrd before anyone else does. We’re excited to see how you experience Byrd!
