Defining a ‘medium-sized’ user research method for behaviour change interventions

Abbie McLeod
Etc.Health - Research & Design
8 min read · May 10, 2024

The healthcare space isn’t all about medication or surgical interventions. The biggest thing an individual can do to maintain or improve their health is to make positive lifestyle choices; choices that aren’t always the most straightforward or appealing options on the table. Would I rather walk to the train station in the cold, or drive there in my nice warm car? I know which I’d rather do in the moment, but I also know which would have better outcomes for my long-term health. This is where behaviour change comes in.

The discipline of behaviour change is all about methods and practices grounded in psychological science that aim to change people’s behaviour.

At Etc. Health, we’re using a number of these frameworks to design interventions to support our users in making lifestyle choices that will serve to make them feel good today, and live well as they age.

We took a collaborative approach to firstly define the target behaviour we wanted to influence, then diagnose the factors impacting on that behaviour, and finally select design interventions to enable and encourage it.

If you haven’t already, you can read about our process in this article by Product Designer, Vicky Onsea.

The Etc. Health team hard at work during an ideation workshop

The outcome of these collaborative sessions was two behaviour change techniques (BCTs) that would become the focus of our intervention design: reward approximation and commitment. From the 14 ideas generated during the ideation workshop, three top features were identified and taken forward to prototyping.

At this point, the baton was handed over to us — the research team. How could we evaluate these ideas to determine whether or not they have the potential to motivate our users to make positive lifestyle choices?

When I looked into tackling user research for behaviour change interventions, I found two options: usability testing or randomised controlled trials (RCTs).

But that didn’t feel quite right to me. Usability testing was a little too small. Sure, our feature needed to be usable and aesthetically pleasing, but could this method really give us an indication of whether the intervention had the sticking power to support the user in making lasting change?

An RCT, meanwhile, was too big! We wanted to make a call and move forward, and didn’t have the time or resources for something as substantial as that.

I needed to find a ‘medium-sized’ research method which would give us the best of both worlds, within a reasonable timescale and budget. Here’s the approach we took:

  1. Participatory Design
  2. Usability Testing
  3. Diary Study

Participatory Design

The ideation sessions we’d run within the team during the workshop phase generated a handful of proposals for interventions to help bolster users’ motivation to build healthy habits. But what should these look like? We knew there were similar ideas out there and amongst our team we had differing opinions about these. Our first step, then, before testing out the efficacy of something new, was to find out what works well (and not so well) about similar concepts already out in the wild.

We decided to run a participatory design workshop, to get folks sharing their thoughts and doing some hands-on creative work to help us really understand the good, the bad, and the ugly of motivational app features. We were working to a limited budget and timescale for this project, so we recruited colleagues from across the wider business to take part in two sessions, which we conducted at our Manchester HQ.

These sessions began with a group discussion about health behaviours and motivation, followed by showing participants a series of mood boards we’d compiled and asking them to post-it note their thoughts in pairs. The mood boards showed a variety of app screens embodying one of our chosen intervention methods, reward approximation: data visualisations, data ‘wraps’ (weekly, monthly and yearly), notifications, widgets and nudges.

Participants reflecting on the mood boards in our participatory design workshop

Finally, we asked participants to ‘collage’ their own ideal motivational app features. We provided them with print-outs of the same screens they’d seen in the mood boards to cut and stick, or they could choose to draw and/or annotate what they’d like to see on their apps.

We came away with some fantastic ideas and some clear themes, giving our product designers a starting point for what our features might look like.

Participants creating collages of their ideal motivational app features

Usability Testing

I know I said usability testing felt too small for behavioural research… But for part of our intervention proposal, it was just the ticket! You may have noticed that our participatory design workshops focused on reward approximation and didn’t involve our other chosen behaviour change technique: commitment. The reason for this was that we already had an idea for how it might work in our app. We decided to go ahead and test this using our UX research platform to get some quick insights into folks’ thoughts on our concepts.

We built a prototype of the user journey, taking the user from receiving a recommendation on how to improve their health, through setting and committing to a goal, to finally tracking completion of that goal in the app. We sought feedback from eight participants recruited to match our target audience.

This was just enough to give us some really great insight around improvements we could make to the overall experience. We also learned that commitment features like this are surprisingly divisive… One task involved participants long-pressing their thumb on the app to pledge their commitment, which certainly came across as a novel feature and which most of our participants were in favour of. But two were almost offended by it! This didn’t send us back to the drawing board by any means but we’re refining the idea so it’s more appealing to a wider group of users.

Whilst I still feel this study alone won’t tell us whether or not ‘committing’ to a goal in this way will actually change users’ behaviour, it’s an example of where usability testing can be useful as one step in the research process, helping to refine how users actually experience the BCT in a product or service.

Diary Study

Having prototyped what our reward approximation interventions could look like, we wanted to evaluate whether these were likely to help influence behaviour change amongst our users. To do this we needed to design an experiment that would have the user engaging with the feature in their own context and over a longer period of time, without going so far as a time-consuming and expensive randomised controlled trial.

To achieve this, we conducted a kind of RCT-’light’ in the form of a diary study. We recruited 13 participants for a four-week study: two 30-minute interviews, one before and one after a three-week tracking period, during which they submitted daily step counts and a once-weekly questionnaire via WhatsApp. Half of these participants formed a control group, given the same ask (meet a personalised step goal and track your steps daily) but not given the interventions. The rest formed the intervention group and did receive the behaviour change interventions we were testing.

This approach allowed us to approximate the experience of using the app (tracking data, receiving a goal, nudges, and our interventions) with a readily accessible metric (step count) in the user’s own context and with a familiar tool (WhatsApp).
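For anyone planning a similar comparison, here’s a minimal sketch (in Python) of how the daily diary data could be structured and rolled up into a percent-over-goal figure for each group. The participant IDs, goals and step counts below are invented for illustration, and this isn’t the tracker we actually used; it’s simply one way the numbers could be crunched.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DiaryEntry:
    participant_id: str
    group: str        # "control" or "intervention"
    daily_goal: int   # personalised step goal for this participant
    steps: int        # steps reported via WhatsApp for that day

def percent_vs_goal(entry: DiaryEntry) -> float:
    """How far above (positive) or below (negative) the goal this day landed."""
    return (entry.steps - entry.daily_goal) / entry.daily_goal * 100

def group_summary(entries: list[DiaryEntry]) -> dict[str, float]:
    """Average percent-over-goal per group across the tracking period."""
    by_group: dict[str, list[float]] = {}
    for entry in entries:
        by_group.setdefault(entry.group, []).append(percent_vs_goal(entry))
    return {group: round(mean(values), 1) for group, values in by_group.items()}

# Illustrative data only; not our participants' real numbers.
entries = [
    DiaryEntry("P01", "intervention", daily_goal=8000, steps=11200),
    DiaryEntry("P01", "intervention", daily_goal=8000, steps=9500),
    DiaryEntry("P07", "control", daily_goal=7000, steps=7400),
    DiaryEntry("P07", "control", daily_goal=7000, steps=6900),
]

print(group_summary(entries))  # {'intervention': 29.4, 'control': 2.1}
```

With a panel of 13 a spreadsheet does the same job, but framing the metric as ‘percent over a personalised goal’ is what lets you compare participants with very different baseline activity levels.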

Using the WhatsApp web platform allowed two researchers to be in communication with the participants, while a shared data tracker meant the design team had real-time access to the diary study data in order to construct the intervention assets. The research team messaged the participants daily with tracking reminders and motivational nudges, and the intervention assets were sent out twice a week on a Monday and Thursday.
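As a rough illustration of that cadence, here’s a hypothetical sketch of the daily checklist for a single participant. The function and message wording are made up; in practice the messages went out by hand over WhatsApp Web rather than through any code.

```python
from datetime import date

# Monday and Thursday (weekday() values 0 and 3): the days intervention assets went out.
ASSET_DAYS = {0, 3}

def todays_messages(day: date, group: str) -> list[str]:
    """Sketch of what one participant should receive on a given day (hypothetical wording)."""
    queue = [
        "Reminder: please share today's step count.",
        "Motivational nudge of the day.",
    ]
    if group == "intervention" and day.weekday() in ASSET_DAYS:
        queue.append("Reward-approximation asset prepared by the design team.")
    return queue

print(todays_messages(date(2024, 4, 4), "intervention"))  # a Thursday, so the asset is included
```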

Of course, this experiment was small in scale, so we knew going in that we wouldn’t have the statistical power to draw significant quantitative conclusions. However, the benefit of the approach was that we could get qualitative insight into the effectiveness, usability and acceptability of the interventions that we simply wouldn’t achieve through usability testing alone. This was a time-consuming, manual exercise, and I’m not sure with our small team we could have included many more participants: we sent and received over 1,053 WhatsApp messages over the course of the three weeks, as well as spending 13 hours interviewing participants and countless more analysing data and managing logistics. Most importantly though, we were really happy with the results we got from the experiment.

Here are some of the highlights:

  • Participants in the intervention group exceeded their step goal by an average of 42%, versus 8% in the control group. Tracking adherence (~99%) and self-reported motivation differed only marginally between the two groups.
  • We identified some key themes affecting users’ motivation which will help us to personalise the experience, including the weather, holidays and birthdays.
  • Participants felt it was motivating just having a goal and this made them more conscious of their activity levels. They also loved that there was a person there keeping them accountable; our challenge is how to replicate this in an app experience.
  • We were really heartened that every one of the participants stayed in the study through to the end, and that so many reported they’d found it really motivating. One of our participants told us he felt fitter than he had since his footballing years, whilst another was planning to enter a running race after several years out of the habit.

Summary

The whole journey, from the start of the behavioural design process, to the conclusion of the diary study, took eight weeks. The research phase spanned five of those.

We were fortunate that we already had a research panel in place, so recruitment was straightforward, and we already had a wealth of generative research to draw on to inform our initial design process.

This ‘medium-sized’ research approach enabled us to secure evaluative feedback at low cost and within a reasonable timescale. There were definitely some limitations which, if another researcher wanted to replicate this, I’d recommend adjusting for:

  • Our participatory design sessions were run with colleagues, which meant the participant group wasn’t hugely demographically or geographically diverse. The in-person format worked really well, but if we were repeating the process we’d likely recruit externally and run the sessions outside of the city centre.
  • We’d account for time to iterate and/or preference test some alternative examples of how the commitment BCT might work in-app.
  • We’re conscious of the potential for the Hawthorne effect in our diary study, as well as the fact that participants were being paid an incentive. In addition, we chose WhatsApp because it was a tool our participants were already familiar with. However, this meant that a key barrier to use of our app (simply building the habit of using a new tool) was not tested in this experiment. We’re looking forward to replicating the study once our interventions are in the app and being used in earnest.

In any case, we hope that this approach gives some inspiration to other researchers facing a similar dilemma about how to evaluate behaviour change approaches. We really enjoyed this journey and are excited to see how our features perform once they’re in the hands of our users. Stay tuned for an update as we roll them out over the next few months!
