Task/Reward Experiment: UX Case Study for Findmypast

www.findmypast.co.uk

Ryan Connaughton
9 min read · Aug 18, 2018

Introductory Context

Findmypast is a web-based family history product. You can primarily do three things on the site:

  1. Search a collection of 8 billion+ records for your British & Irish ancestors
  2. Build your family tree which in turn generates ‘Hints’ of possible ancestor record matches
  3. Combine 1 & 2 to make discoveries and build a picture of who your ancestors were

Now, in the case of this experiment, we'll be focusing on the user journey of number 2: building your family tree. To give you better context before going through my approach, here's what the current journey looks like:

Old journey

Problem Statement & Goal(s)

First things first, we needed to frame the problem and define our goals. To do that, I gathered a group of longstanding 'specialists' within the business with deep knowledge across different areas and disciplines. The outcome can be read below:

Our tree product is intended to acquire new beginner users within their first session.

We have observed that our tree product journey is overly complex and is at present largely suited towards intermediate users. This, we believe, is causing people less familiar with family history to drop off in high numbers.

How might we improve our tree product to better onboard and retain beginner users resulting in higher conversion/paid subscriptions?

Our deadline for this was just two weeks, so we had to move quickly.

Defining The Users

The next step was to bring the appropriate teams up to speed on what it means to be a 'beginner hobbyist'. Below you can find the pre-defined user profiles that we were working with.

Pre-defined user profile canvas based on extensive research (user pains/gains, etc.)

It's also worth mentioning that by this point I had been leading the UX of the tree journey for some time and had already conducted 30+ user interviews and tests. This helped guide my decision making on this project, and I also consulted the analytics team on any quantitative data questions we had.

Solution Ideation Workshop

With the scene set, I gathered a mixed-discipline in-house team in a meeting room, where I facilitated a design studio workshop. This is essentially a process for generating and bouncing many diverse ideas between participants extremely quickly, while also building shared understanding and acquiring stakeholder 'buy-in'.

I made it clear to the group that the goal of this workshop was not necessarily to create a single, final solution. Rather, they were simply helping to refine the overall design direction, which would continue to be explored after the design studio workshop was over.

Design studio workshop format

The format for this was three short rounds of sketching and individual presenting/critiquing. With quite a large group and a two-hour time window, I adapted a simplified version of the 'Six Thinking Hats' (https://en.wikipedia.org/wiki/Six_Thinking_Hats), assigning participants to one of two groups: 'green hat (the optimist)' or 'red hat (the pessimist)'. This meant they had to give their feedback from one of those two ways of thinking. I then had them rotate hats/roles each round.

Design Studio facilitation, featuring optimist/pessimist colour-coded 'thinking hats' and post-its (Note: these are not real hats and are superimposed for illustrative purposes only, although that would have made a nice touch, wouldn't it? :))

I found this new experimental hat method resulted in much more concise discussions, which really helped us overcome our time constraints.

In addition to successfully making the rounds much shorter, participants told me the hat method helped them focus their thinking while also making them much more comfortable giving 'negative feedback', something that had been a problem in similar workshops conducted previously.

Outcome

Finally, with colour-coded feedback post-its scattered across each pinned-up design around the room (which gave a visual indication of which ideas were more liked), we finished up by summarising the overall themes, which were:

  1. Task guidance onboarding
  2. ‘X’ free ‘Hint(s)’ (our usual model was to have them subscribe before access)
  3. Bite-sized education

Following each workshop I conduct, I send out a short three-question feedback survey for continuous improvement

Choosing Which Solutions to Test

Next up, having generated themes from the design studio workshop, I gathered a new group together for a whiteboarding session consisting of the product manager, a content strategist and a few of the software engineers on our team who would potentially be building this feature if our hypotheses were validated (this also allowed them to feed back, as and when needed, to the other engineers on the team who couldn't make it).

Even at this early stage in the process, it’s important for me to get a diverse range of inputs in order to maintain as much consistent shared understanding and team accountability as possible, while also gaining valuable insight into technical feasibility.

Together, after exploring many possible avenues, we settled on testing the following two hypotheses:

Hypothesis #1 — Task Onboarding
We believe beginner hobbyists within their first session (persona)
Will be more likely to engage in our tree product (behavioural success)
By guiding them with instructional tasks (feature)

Hypothesis #2 — Free Hint Reward
We believe beginner hobbyists within their first session (persona)
Will be more likely to complete tasks and experience value (behavioural success)
By rewarding them with free use of a hint after completing 3 tasks (feature)

Left: Key themes from workshop / Middle: Prioritised tasks (impact vs. difficulty) / Right: Hypotheses
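
To make Hypothesis #2 a little more concrete, here's a minimal sketch of the behaviour we were proposing: track which onboarding tasks a user has completed and unlock a single free hint once three are done. This is purely illustrative (as you'll see later, the feature was never built), and all the names and types below are hypothetical rather than anything from the Findmypast codebase.

```typescript
// Illustrative sketch of the Hypothesis #2 behaviour: reward a free hint
// after 3 completed onboarding tasks. Hypothetical names, not production code.

type TaskId = string;

interface OnboardingState {
  completedTasks: Set<TaskId>;
  freeHintUnlocked: boolean;
}

const TASKS_REQUIRED_FOR_REWARD = 3;

function completeTask(state: OnboardingState, task: TaskId): OnboardingState {
  const completedTasks = new Set(state.completedTasks).add(task);
  return {
    completedTasks,
    // Unlock the reward once the threshold is reached, and keep it unlocked.
    freeHintUnlocked:
      state.freeHintUnlocked || completedTasks.size >= TASKS_REQUIRED_FOR_REWARD,
  };
}

// Usage:
let state: OnboardingState = { completedTasks: new Set(), freeHintUnlocked: false };
state = completeTask(state, 'play-intro-video');
state = completeTask(state, 'add-parent');
state = completeTask(state, 'add-sibling');
console.log(state.freeHintUnlocked); // true
```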

Task selection & prioritisation

Before we could test these hypotheses, though, we needed to flesh the ideas out. We needed to decide which tasks we should assign to users and in what order.

First, using post-its, we listed out all the possible tasks that could be carried out on the tree. Then we mapped out which ones we felt brought the most value to the user, followed by how complex each task was based on the number of steps needed to complete it. We then ranked them all 'high', 'medium' or 'low' for impact and difficulty, which we used as a guide for prioritising their order.

Naturally, we thought it best to start with the easiest and most impactful tasks and work down from there. We decided on giving the 'free hint' reward after 3 successfully completed tasks.

It wasn’t perfect, but with such a short deadline at hand, we had no time to dwell. Our thinking was to get something in front of users as soon as possible to test, where we could then quickly iterate accordingly if our decisions were less than optimal.

Impact vs. difficulty task prioritisation method
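
If you wanted to reproduce this prioritisation step away from the whiteboard, a tiny script captures the idea: rate each candidate task high/medium/low for impact and difficulty, then order by highest impact and lowest difficulty first. The task names and ratings below are made-up examples for illustration, not our actual workshop output.

```typescript
// Illustrative sketch of the impact vs. difficulty prioritisation we did on post-its.
// Example tasks and ratings only.

type Rating = 'low' | 'medium' | 'high';
const score: Record<Rating, number> = { low: 1, medium: 2, high: 3 };

interface CandidateTask {
  name: string;
  impact: Rating;     // value we believed it brought to the user
  difficulty: Rating; // roughly, the number of steps needed to complete it
}

const candidates: CandidateTask[] = [
  { name: 'Play the introductory video', impact: 'medium', difficulty: 'low' },
  { name: 'Add a parent to your tree', impact: 'high', difficulty: 'low' },
  { name: 'Attach a record to an ancestor', impact: 'high', difficulty: 'high' },
  { name: 'Upload a profile photo', impact: 'low', difficulty: 'medium' },
];

// Easiest and most impactful first: sort by impact (descending), then difficulty (ascending).
const prioritised = [...candidates].sort(
  (a, b) =>
    score[b.impact] - score[a.impact] || score[a.difficulty] - score[b.difficulty]
);

prioritised.forEach((t, i) => console.log(`${i + 1}. ${t.name}`));
```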

Design & Testing (part 1): Low-fidelity

Now equipped with the content and clarity on task selection and order, I began sketching out a bunch of wireframes.

I tested these internally with different colleagues around the business. One of the key iterations made here was the introduction of a permanently viewable step-by-step 'text instructions' bar for each task, so as not to force the user to repeatedly minimise and maximise the tab below when interacting with the product (this will become clearer in the next section).

Sketches: One of the more successful internally tested wireframes

“It’s pretty tedious that I have to minimise and maximise these tabs in order to see what my next step is.” — In-house user testing participant

Design & Testing (part 2): Mid-fidelity Prototype

Following a few iterations on the user-tested sketches, I moved quickly to a mid-fidelity wireframed prototype. I decided on greyscale at this point in order to focus the discussion and feedback solely on the concepts rather than the aesthetics.

External user testing

Again, due to the looming deadline, we didn't have adequate time to test both the mobile and desktop versions. That said, a large majority of our users tended to use higher screen resolutions (i.e. tablets and desktops) when actually using our tree product, although mobile accounts for more initial 'sign-ups'.

Round #1 user testing participants

With that said, we (a user research specialist and myself) did two rounds of testing, each consisting of 5 participants (10 in total). Each session started with a short user interview followed by the test itself, and all participants matched our ideal user profile demographics.

Card sorting helped us evaluate our participants' thinking and how they valued each 'task' in order of importance

Card sorting

Towards the latter part of the interview, we had participants do a card sort in two parts. First, we had them write out on post-its the tasks they would like to see and rank them by importance. Then we had them rank the tasks we had put together ourselves in order of importance, questioning them on their reasoning for each.

Mid-fidelity prototype #1 (key screens)

User testing key learnings

  1. The result of the card sort was extremely enlightening. It hit home how different our thinking was from our participants' with regard to task selection and descriptive language. While we were thinking in terms of actions to take on the product itself, our participants were thinking more about the outcomes they wanted to achieve.
  2. The descriptive word ‘task’ really didn’t resonate with users. They felt it sounded ‘chore-like’.
  3. It wasn't clear to users that the '4th step' was actually a reward; at a glance, they assumed it was just another task, having not read its content at that point. (This is where greyscale mid-fidelity wireframes proved to have their downfalls. In hindsight, with an interface of this complexity, a lack of visual colour distinction maybe wasn't the best testing choice to deliver the best learnings.)
  4. Participants were more interested in a free trial rather than a ‘free hint’.
  5. On the first task, which was to 'play the introductory video', participants were extremely unclear on how to take this action. This was because the button icon for it was a question mark taken from the live build (what the live website was displaying at the time).

“Tasks? I don’t like the sound of that. Sounds like a chore.” — User testing participant

Key iterations/actions undergone from the above key learnings

  1. We needed to drastically rethink how we presented these 'tasks', making it abundantly clear that completing them would not only give users a reward and help them learn how to use the product but, more crucially, potentially help produce the outcomes they wanted (e.g. discovering unknown family members).
  2. Renamed ‘Tasks’ to ‘Achievements’.
  3. Added a ‘star icon’ to the reward button, giving visual emphasis and distinction from the tasks.
  4. Changed the reward from ‘free hint’ to ‘free trial’.
  5. Changed the button icon from a question mark to a play button. In addition, and to further address this kind of problem, we explored the idea of having temporary ‘lighter/darker’ visuals on the area where the current action was to be executed.
  6. Changed the shape of the task panel from tall/thin to wide/short so it obstructed the user's interaction with the tree less.

Mid-fidelity prototype #2 (key screens)

Prototype link: https://goo.gl/qA9zHT

The Outcome

Initially, our confidence was high, but our assumptions about how our users would react to our ideas were off the mark. We grossly underestimated the sheer complexity of getting this type of experiment right in such a short time frame.

Overall, although enticed by the idea of a reward, users felt that this feature added an unnecessary extra layer of complexity to the experience; they wanted to be able to explore the product freely without being distracted by a strict task regimen.

This, we felt, didn't necessarily mean that a task/reward feature was a complete write-off, only that, for now at least, we had seen enough to persuade us to pivot in a different direction where our confidence was higher in both scope and value to the user/business.

In the end, within just two weeks, we had identified and defined a problem, then designed, prototyped and tested what would have been a complex and thus expensive feature to implement. Furthermore, we had generated a foundation of ideas and learnings for the backlog to be picked up for potential future experiments, all without writing a single line of code.
