Using Hypothesis Driven Design to Improve your Digital Products and Services
As part of London Tech Week 2016, I spoke about Hypothesis Driven Design at an event hosted by Forge&Co. Below is a summary of what we shared.
The talk touched on how to form and write design hypotheses, how they can help focus teams, and how you can use different types of user research to deliver better digital products.
We believe user research and evidence-based design are fundamental to creating compelling digital products, which is why everything we do is guided by these three principles:
- Informed thinking
Everything we do is backed up by evidence
- Holistic vision
We always look at a product or service as a whole, from the moment someone becomes aware of the offering through to the point where they become advocates
- Collaborative design
We believe that design is better done together. We encourage co-creation, joint ownership and collaboration throughout all stages of development
We endeavour to start every project with a ‘Research sprint’ to kick off our holistic thinking. In each research sprint, we start to form early assumptions and insights about the challenges and opportunities the product or service is likely to have.
During this first sprint we use lots of different research methods to form our assumptions. All the findings and research help to inform the objectives of our design sprints. During design sprints we create, sketch, prototype and carry out research with users.
As a result of this user research we validate and then iterate on our designs and constantly refine our understanding.
What is a Design Hypothesis?
A Design Hypothesis is, at its core, an assumption: something that someone believes to be true. Framing an assumption as a hypothesis lets you prove or disprove it using research and experiments.
The results of these experiments tell you how well you really understand your users’ behaviour, and how accurately you have judged the potential, or the pitfalls, of your concept.
Every hypothesis that is tested has the potential to generate new insight for future rounds of your product’s development, which is why we believe forming them based on research and evidence is fundamental to customer-centric design.
Hypothesis Driven Design and your team
A great aspect of hypothesis driven design is how it can improve a team’s dynamic and collaborative working. Teams by their very nature are full of people who think in different ways.
Each team member, from developers through to stakeholders, wants different things from a project, and throughout a project’s lifecycle you are bound to have people who agreed with an approach or direction at the start but later change their mind. This can cause conflict at all levels, as consensus on difficult decisions is hard to reach. At times, design decisions are made either on a hunch or to keep the peace.
These difficult to reach decisions can benefit the most when turned into design hypotheses.
Here are 5 ways Hypothesis Driven Design can benefit you:
- Get team buy in
Everyone can be involved in writing hypotheses, so everyone feels ownership and involvement. This means ideas do not need to be sold in or justified later on as everyone is aligned and on the same page.
- Less paperwork
Because you are bringing everyone along on the journey with you, you’ll need to create less documentation for people to review.
- Informed Prioritised Roadmap
Using hypothesis driven design means your roadmap will be formed based on evidence.
- Constant learning
Even after a new feature has been released you can use your assumptions to run further research and discover new insights. Issues with a design that would be difficult to spot using your ‘design eye’ alone are easy to find.
- Features users want
Hypothesis driven design means users get features that are fit for purpose and solve real needs.
Writing a design hypothesis
To write a design hypothesis you start with a simple statement — you put your assumptions into a structure. There are lots of different structures, but we like this one.
The first part ‘We believe that…’ is where you put your informed guess of a user’s behaviour.
The next part ‘So if we…’ is an action, something we want the user to do or think the user is going to do.
‘Then we will see…’ is where you enter your expected result or measure of success.
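As an illustrative sketch (the class, field names and example content below are ours, not a prescribed format), the three-part structure can be captured in a small template that forces every assumption to carry an action and a measurable result:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """A design hypothesis in the three-part 'believe / so if / then' structure."""
    we_believe_that: str   # informed guess about a user's behaviour
    so_if_we: str          # the action or change we plan to make
    then_we_will_see: str  # expected, measurable result

    def __str__(self) -> str:
        return (f"We believe that {self.we_believe_that}. "
                f"So if we {self.so_if_we}, "
                f"then we will see {self.then_we_will_see}.")


# Hypothetical example content:
h = Hypothesis(
    we_believe_that="new visitors abandon sign-up because the form feels long",
    so_if_we="split the form into two short steps",
    then_we_will_see="sign-up completion rise by at least 10%",
)
print(h)
```

Writing the measure of success as a concrete number, as in the example above, is what makes the hypothesis testable later on.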
Creating your first-pass hypotheses is simple, and there are a number of methods to help you form these.
Proving or disproving your hypothesis and choosing the right test criteria
Next, is choosing the most appropriate test for your hypothesis.
But first, an analogy… the Millennium Bridge in London opened in 2000, and on its first day 90,000 people crossed it, with around 2,000 people on the bridge at any one time.
As more people started to cross the bridge, it began to sway dramatically. The motion grew worse as everyone started, in unison, to counter-balance themselves against the direction of the sway. It’s the same science behind why soldiers break step when crossing a bridge (see the Broughton Suspension Bridge).
It later transpired that the engineers behind the design of the Millennium Bridge had imagined and tested for only 160 people walking across it at any one time.
I think this is a great example of why setting the right test criteria and running appropriate experiments for your hypothesis is so important: getting it wrong can ultimately cost you a lot of money (it cost another £5 million to fix the bridge).
Qualitative & quantitative
The first important step is deciding what type of feedback you need to prove or disprove your hypothesis.
If you’re testing a new concept, and your measure of success is related to people’s reaction, you might choose to run some qualitative research. Quantitative methods are perfect if you need feedback tied to measurable outcomes, for example if your hypothesis is related to a sign-up process or an e-commerce flow.
Here are some methods we use to get qualitative feedback on the assumptions made throughout our design sprints:
- Face to face interviews
These are our favourite way to test prototypes at various levels of fidelity. We usually use InVision, Marvel or Proto.io to create our prototypes.
- Guerrilla testing
This is the cheapest way to get feedback as we go; we usually pop to a nearby coffee shop and ask people for their opinions in exchange for a free coffee.
- Online testing
Using tools like validately.com or usertesting.com, this is a great way to get feedback fast.
Here are some methods we use to get quantitative feedback on the assumptions made throughout our design sprints:
- Card sorting
This helps us form app or site structures and discuss labelling with users.
- A/B tests
To test wording, colours and design tweaks.
- Click Flow tests
Examine whether people understand processes such as a sign up flow or purchase funnel.
- 5 second tests
Using usabilityhub.com, we capture a user’s initial reaction to a page or piece of functionality.
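To make the A/B testing step concrete, here is a minimal sketch (the function and the conversion numbers are our own illustration, not from a real project) of checking whether a variant’s conversion rate genuinely beats the control’s, using a standard two-proportion z-test:

```python
from math import sqrt, erfc


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test.
    Returns (z statistic, p-value) for H0: both variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p-value
    return z, p_value


# Hypothetical A/B result: 120/2000 control sign-ups vs 160/2000 variant sign-ups.
z, p = two_proportion_z(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p below 0.05 suggests the lift is not just chance
```

Deciding the sample size and significance threshold before the test starts is part of choosing the right test criteria, echoing the Millennium Bridge lesson above.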
To find out more about when to use which methodology, there is a great article, When to Use Which User-Experience Research Methods, by Christian Rohrer.
Refining your hypotheses
As your findings grow, your hypotheses statements can be refined, and more detail can be added. A great place to do this is on a wall in your workplace. Get your hypotheses onto a wall and let everyone see what you’re up to, especially as the findings from your experiments start to roll in.
After one, two or more experiments you should have enough confidence to turn your idea into a user story for the development team.
1. Keep it Lean
Don’t get bogged down with too much admin and paperwork. Try to keep your experiments and documentation as light and fast as possible, so you can quickly react and pivot if something doesn’t turn out as you hoped. It also means you don’t have to update all your documentation when a disproven hypothesis changes something: just replace the card on the wall.
2. Don’t Stifle Creativity
Most data, whether analytics, survey data, or customer service data, is backward-looking. It can reveal trends, but it is much harder to make predictions based on those discoveries.
Looking at data is great for design tweaks, but it’s not so great for creating an amazing experience. So ensure your process remains design-led; if it becomes too data-focused you could lose the magic or sparkle of the product.
3. There’s no excuse
We know that users don’t always know what they want, but a product built with user research is easy to spot next to one that hasn’t had any. We’ve all used bad products.
Experimenting and running tests costs hardly anything, so even if you only do it for your own learning and career development, you need to start understanding how people think; otherwise you are just guessing.
Guerrilla testing for example can be free or almost free, so get yourself down to your nearest coffee shop tomorrow and start running your own experiments — you might even have fun!
Steve Johnson is managing partner and UX Director at Furthermore. Feel free to get in touch with us at email@example.com if you need help defining and envisioning your ideas or your product or service design strategy and implementation.
Furthermore are a multi-platform digital product and service design studio based in London. User experience specialists and design-led, we have one mission: to create innovative digital products that stand out in the landscape, are beautiful, purposeful and a delight for the user. User research is at our core and we believe good ideas can come at any point in a project, so we utilise agile methodologies. Hypotheses are always tested using prototypes and real users, with improvements being constantly fed back into our user experience and visual designs.