User Onboarding Experiment: UX Case Study for Findmypast

Ryan Connaughton
13 min read · Aug 15, 2018


Introductory Context

The product

Findmypast is a web-based family history product where you can primarily do three things:

  1. Search a collection of 8 billion+ records for your British & Irish ancestors
  2. Build your family tree which in turn generates ‘Hints’ of possible ancestor record matches
  3. Combine 1 & 2 to make discoveries and build a picture of who your ancestors were

Now, in the case of this experiment, we’ll be focusing on user journey number 2: building your family tree. To give you better context before going through our approach, here’s what the old journey looked like:

Old journey

The team

First off, I’d like to introduce you to the team I worked with on this experiment. We were 1 of 6 total product teams working cross-functionally to achieve overall continuous product improvement. We numbered a total of 8 — consisting of 1 Product Manager, 6 Software Engineers and myself.

I was charged with leading on the UX, user research and visual design (as well as occasional Product Owner duties).

The company and its teams were working within an OKR framework, popularised by Google, which stands for ‘Objectives & Key Results’. Its purpose is to define goals and track their outcomes in the context of the wider team and company mission.

Day to day and within the team itself, we tended to follow many Lean, Agile and User-centered Design principles.

The dream team (mid-retrospective exercise carried out weekly)

Defining the Problem(s), Objectives & Key Results

It was approaching a new quarter and, as per routine, it was time to define new OKRs. To do that, we had to decide on the right problem to solve for our users (user objectives vs business objectives), coupled with a viable strategy for tackling it and the best way to measure our success (key results).

To help us do that, and as with all our pre-decision making, we first gathered all the existing data we had in our arsenal and carried out any additional research we thought necessary but didn’t yet have. I’ll touch on a few examples in the following paragraphs.

Previous user research

Taken from vast amounts of previous user testing, we knew that:

  1. The tree was a complex product that was suited more towards intermediate users
  2. Beginner users felt overwhelmed due to the high educational and knowledge barrier to entry
  3. Beginner users wanted to be ‘hand-held’ throughout the experience

Competitor analysis

To further guide our decision making, and since such research was non-existent at the time, I thought it useful to get a better understanding of our competitors’ strengths and weaknesses and what worked for them, or didn’t. I conducted and documented an in-depth analysis, which I then relayed to the rest of the team.

Competitor analysis: matrix (features & business model)

To begin the process, I spent some time on each competitor product across different platforms and mapped out the experiences in the form of page flows — weighing up the differences in terms of features and business models along the way.

Competitor analysis: spider/radar chart (themes and features)

I then conducted a heuristic evaluation of each competitor and grouped everything together in the form of several key themes. These were put into a spider/radar chart diagram where I ranked each from 0–5 (0 being non-existent and 5 being excellent).

It’s not an exact science in all cases and thus some themes can be subject to opinion, but I felt it would be helpful to have this extra layer of qualitative data to hand when we congregated at the decision table.

Ultimately though, this technique is a quick way to visually distinguish the strong and weak areas of our competitors in comparison to our own, as well as to identify patterns and gaps. It could also be built upon and reused for all future experiments.
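To make the idea concrete, here is a minimal sketch of the kind of scoring that sits behind such a radar chart. The themes, competitor names and 0–5 scores below are illustrative placeholders, not the real data from the analysis:

```python
# Heuristic-evaluation scores per theme (0 = non-existent, 5 = excellent).
# All names and numbers here are made up for illustration.
themes = ["Onboarding", "Search", "Tree building", "Hints", "Mobile"]

scores = {
    "Our product":  [2, 4, 3, 4, 2],
    "Competitor A": [4, 3, 4, 3, 3],
    "Competitor B": [3, 5, 2, 2, 4],
}

def biggest_gaps(scores, us="Our product"):
    """Rank themes by how far the best competitor leads our own score,
    surfacing the same strong/weak areas the radar chart shows visually."""
    gaps = []
    for i, theme in enumerate(themes):
        best_rival = max(v[i] for k, v in scores.items() if k != us)
        gaps.append((theme, best_rival - scores[us][i]))
    return sorted(gaps, key=lambda g: g[1], reverse=True)

for theme, gap in biggest_gaps(scores):
    print(f"{theme}: behind best competitor by {gap}")
```

A negative gap marks a theme where we already led the field; the largest positive gaps are the candidates for the next experiment.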

Data science discovery

Furthermore, we had been working closely with a data scientist that had made a profound discovery — when users reached the point of adding ‘x’ or more people to their family tree, the likelihood of them signing up for a paid subscription increased dramatically.

Technical blockers

It’s also worth mentioning at this point that the tree product was built in legacy code: improvements to the tree itself were at best extremely difficult, expensive and time-consuming, and at worst out of the question.

In addition, work had begun to build a newer version of the tree from the ground up by another in-house team.


With all the above taken into consideration, we concluded that we had a strong indication of the direction we should take. The outcome can be read below:

Objective: Improve the tree onboarding experience for first time session users

Key Result #1: The average number of family members added to each tree by new users within their first session increases from ‘X’ to ‘Y’ or more
Key Result #2: Achieve a conversion rate increase of ‘X’, from ‘Y’ to ‘Z’
Key Result #3: Achieve an NPS (Net Promoter Score) increase of ‘X’%, from ‘Y’% to ‘Z’%

Defining the Users

Before we jumped into ways we could achieve the above objective, it was important to reiterate to the team the types of users we would be designing for. Below is one of the pre-defined primary personas/user profiles we were working with:

Pre-defined User profile Canvas based on extensive research (user pains/gains etc.)

Experience level capture

To help us validate our assumptions and inform which type of user we should focus on when designing solutions, we wanted to know exactly what percentage of newly signed-up traffic were beginners.

It had been unclear up until this point what the answer to that question was. It had been believed that the majority of users were intermediates.

To clear that question up once and for all, I advised the team that we first conduct a quick experiment, which was agreed upon: upon signing up, users would be asked whether they were a beginner, intermediate or expert. Since this question alone could produce a subjective answer, we added a second question: “How long have you been researching your family history?” This would help us further understand how users defined themselves in comparison to how we would define these 3 labels.

We later randomised the order of these answers and re-tested to see if selection bias came into play. It did not.
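A minimal sketch of that re-ordering check, assuming a per-session shuffle of the answer options (the option labels come from the article; the seeding mechanism is my assumption):

```python
# Sketch of randomising answer order for the experience-level question,
# so that position bias (always picking the first option) can be detected.
import random

OPTIONS = ["Beginner", "Intermediate", "Expert"]

def options_for_session(session_seed):
    """Return the answer options in a per-session random order.
    Seeding per session keeps the order stable on page reloads."""
    rng = random.Random(session_seed)  # assumed per-session seed
    shuffled = OPTIONS[:]
    rng.shuffle(shuffled)
    return shuffled
```

If the share of each answer stays roughly the same regardless of where it appears on screen, as it did here, the order of options isn’t driving the result.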

Experience level capture (left: wireframes, right: live-build)

The results showed that, in fact, the vast majority of new users classified themselves as beginners with 0–1 month of experience. The experiment was then rolled out to the rest of the product journeys, where near-identical results persisted.

Strategy & Planning

As with all our experiments, we created a strategy and planning board split into 3 simultaneous workflows represented by different post-it colours (discovery, delivery and cross-functional teams), assigning each item to one of the timeline columns and rows.

Planning board (pink: discovery, yellow: delivery, green: cross-functional team)

Solution Ideation Workshop

To begin the solution process, I gathered the team and other key participants into a meeting room for an ideation workshop, with the user journey flows and OKRs pinned up on the wall. Doing this together meant building a shared understanding, gaining buy-in and developing potential solutions from diverse perspectives.

User flows with generated ideas on post-its

Everyone had a good understanding of what we were trying to achieve by this point — but I reminded the group of the types of users we were designing for and the technical blockers/restrictions we had to ensure the discussions were on point and generated ideas were practical.

The workshop produced many great ideas and discussions which I grouped into themes and later used for inspiration.

Design & Testing (part 1): Sketches

Following the workshop, I began exploring some of the more popular ideas further by sketching out what they would look like and mapping out how they would fit into the overall user journey.

One of the better solutions, I thought, was a visually clear step-by-step process that added family members one at a time before users reached the tree itself. This solved several problems we knew of from our research:

  1. Overcame our technical blockers (the tree product itself was built in legacy code with a separate team working to build a newly-coded version from the ground-up)
  2. Overcame the lack of clear direction for the user with bite-sized next steps
  3. Jump started the high learning curve needed to build on and interact with the tree
  4. When completed, it meant the tree would be readily populated with additional family members, which in turn meant our system would generate more ‘hints’ from the get-go, making them see immediate value when they arrived on the tree

Early wireframed sketches

Internal user testing

I then tested these sketches with a few colleagues, some of whom expressed their scepticism (and rightly so) about this particular direction, as I was informed that a similar experiment had failed previously, which looked like the below:

Previous experiment

However, I felt strongly that this may not necessarily have been down to the concept itself but rather the execution. After some healthy debate, we were all on board to explore the idea further.

Design & Testing (part 2): Mid-Fidelity Prototype

I then quickly created a simple greyscale clickable prototype which we (a user research specialist and myself) user tested with 5 external participants that matched our user profiles covered in the beginning.

Mid-fidelity wireframed prototype (key screen)

We tasked each participant to sign up and to think out loud, prompting questions where necessary to better understand their thought process.

Overall, the prototype appeared to resonate extremely well with our participants.

External user testing key learnings

  1. Participants wanted the ability to go back and edit
  2. It was not clear to participants what the ‘correct’ format of the ‘year of birth’ input was and how that might affect their result. It made them feel uneasy (despite the example text ‘1965’ that displayed the correct format)
  3. Users were unsure which parts were compulsory and which were not, and how that would affect their results going forward (only a single name was needed to move forward)

“Ah! What if I make a mistake? Can’t I go back and edit it? Would I do that from the browser menu?”— User Testing Participant

Key iterations/actions undergone from the above key learnings

  1. Introduced a back button (later new learnings meant we had to remove this due to technical constraints)
  2. Changed the ‘text entry box/field’ to a selective ‘drop down list/menu’ to remove any ambiguity
  3. Added a subtitle that read “Don’t worry if you don’t know everything, just fill in what you can…”

Design & Testing (part 3): High-fidelity prototyping

With our confidence growing from the positive feedback, I began work on a more detailed and refined version. I explored several different styles, some of which you can see below:

Layout & styling exploration

“The amount of effort you put into your MVP (Minimum Viable Product) should be proportional to the amount of evidence you have that your idea is a good one” — Lean UX

High-fidelity prototype #1 key screens

As with all my designs, I made sure that I pinned them up at regular intervals for everyone to see. That way I could encourage frequent useful feedback throughout the process as and when colleagues felt ready to give it.

User flow pin-up (situated just outside of our team seating area)


One piece of useful feedback on the above designs was that, according to previous user research, users tended to dislike ‘cutesy’ avatars, as family history carries a more serious connotation. This prompted me to design the improved faceless avatars below:

These, I figured, could also be used for marketing assets and throughout the rest of the product going forward, as and when needed.

User testing key learnings

We did a further 2 rounds of testing, each involving 5 participants (totalling 15 participants so far).

  1. It wasn’t clear enough which family member’s details were being asked for on each step
  2. Mobile: The skip link and call to actions were hidden away on smaller resolutions underneath the fold (the immediate viewable area without the need for scrolling)
  3. Participants tended not to read the subtitle we added at the top of the page which read “Don’t worry if you don’t know everything, just fill in what you can…” This again meant users weren’t sure about which fields were mandatory and which were not
  4. The earlier decision to introduce the ‘dropdown list/menu’ for the ‘year of birth’ presented its own problems: participants had to scroll for some time to select the right date, which they found tedious, particularly considering they were potentially going to do this up to 7 times

“I don’t know my Grandfather’s details. Is it not possible to skip this step?” — Mobile user testing participant

Prototype #2 key screens

Key iterations/actions undergone from the above key learnings

  1. We experimented with having the avatar boxes ‘slide’ horizontally so that the relevant family member was always centred (ultimately this presented some technical challenges, so we settled on a clear bold title within the form accompanied by a mini icon replicating the family member’s avatar).
  2. Mobile: Added a permanently visible ‘next/skip’ footer navigation
  3. We added a clear bold title within the form itself, where users eyes tended to be drawn (as opposed to the top of the page)
  4. We devised a way to show only possible/relevant date ranges depending on how old the user was, which made scrolling to the right date much more efficient on all subsequent family member forms
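The date-range narrowing in point 4 can be sketched roughly as follows. The generation offsets below are illustrative guesses, not the values the team actually shipped:

```python
# Hedged sketch of narrowing the 'year of birth' dropdown to plausible
# ranges per relative, based on the user's own birth year.
# The (min, max) "years older than the user" offsets are assumptions.
GENERATION_OFFSETS = {
    "parent":      (15, 60),
    "grandparent": (30, 110),
}

def plausible_birth_years(user_birth_year, relative):
    """Return the shortened list of years to show in the dropdown,
    oldest first, instead of every year since records began."""
    min_older, max_older = GENERATION_OFFSETS[relative]
    return list(range(user_birth_year - max_older,
                      user_birth_year - min_older + 1))
```

So a user born in 1980 filling in a parent’s form would scroll a list of roughly 46 years rather than a century or more, which is where the efficiency gain comes from.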

Build Execution & Release Plan

For storytelling purposes and readability, I thought it best to explain things linearly as structured above. In reality, some time prior to this point we had more than enough evidence that our idea was good enough to warrant testing it for real in the form of A/B testing (a method that splits user traffic 50/50 to compare a variant’s performance against the current live build).
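For readers unfamiliar with how a 50/50 split is typically kept stable per user, here is a minimal sketch of one common approach, deterministic hash-based bucketing. The article doesn’t describe the actual assignment mechanism Findmypast used, so the function and variant names are illustrative:

```python
# Illustrative sketch of a deterministic 50/50 A/B split.
import hashlib

def variant_for(user_id, experiment="tree-onboarding"):
    """Hash the user id together with the experiment name so that a
    given user always sees the same variant, while traffic splits
    roughly 50/50 across the user base."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "new-onboarding" if int(digest, 16) % 2 == 0 else "control"
```

Keying the hash on the experiment name as well as the user id means each experiment gets an independent split, so being in the test group of one experiment doesn’t correlate with being in the test group of another.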

This meant we had two separate simultaneous workstreams with discovery and design work being 1–2 weeks ahead of what was being built into the live product.

This afforded us improved efficiency and two separate sources of continuous learning (qualitative user testing and quantitative metrics from the live website) that we could use to make evidence-based improvements as and when they came in.

Release plan

Since this wasn’t the only experiment we were testing at the time, we first had to come up with a test/release plan in order to see how this would fit in with the other ongoing work.

Team roadmap

First, as a team, we estimated each item’s complexity (using Agile’s ‘t-shirt sizing’ method). We then grouped and prioritised the items into ‘release segments’, each with a time-to-completion estimate, structured so that we could make the most positive impact on the user and the business in the shortest amount of time.

A user story refinement session with the team — using Trello with a kanban structure

Shipping it

We then began breaking down release #1 into Trello tickets in the form of epics and user stories to be built into real code.

We tracked how our users engaged with our experiment on Google Analytics and the results were promising. This gave us the green light for us to plough on and execute on further iterated releases we had waiting in the pipeline.

The Outcome

We managed to hit our overall objective and 2 out of 3 of our key result goals. Results showed we had upped user engagement by 150% and increased conversion (paid subscriptions) by 6%.

Through the remainder of the quarter, we carried out several more related successful experiments which increased engagement a further 150% totalling 300%+, including integrating this experiment into different user journeys throughout the product.

150% increase in user engagement metrics

Learnings summary

All said and done, we learned that beginner users really liked clear, guided, bite-sized onboarding steps to help overcome the tree’s overwhelming nature and usability complexity (which we were powerless to improve due to technical constraints).

This helped kick start their journey and in turn increased the chances of them attaining the outcomes they wanted sooner, without the daunting task of starting from scratch (an empty tree).

In closing, the biggest takeaway, I think, was that there is a fine line between what is minimally viable for the user and what is not. With that in mind, and although it’s a difficult judgment call, we should always be cautious about abandoning an experiment too early, before it has had a chance to be fully fleshed out.

Although the previous ‘add parents’ experiment had failed, the concept itself turned out to be solid — it just needed a different approach.

Live demonstration video

Old journey

Watch my verbally presented demo on another experiment: Click here

New journey live demonstration (all improvements made over 2-3 quarters)
