How I put Lean UX to the test

Lean is the new black. But what is it really and how does it affect UXers? If you’re one of the lucky ones, you’re on a product team with solid hand-off from product management to design to development, and it seems like a smoothly running agile machine. You may even shine on a design team that applies Design Thinking principles to get to a great experience.

After such growth as a designer, you may still be wondering a lot about the “why.” Where do these requirements come from anyway? And why do they seemingly emerge from a black box?

When I came across the Lean UX book, a short 124-page read by Jeff Gothelf, I gobbled it up. It’s not the first time I’d heard of Lean. Yet reading how important UXers like myself are in the process, I had to learn more about the lean strategy by trying it myself. I put it to the test with my new project, Pimlio.

While figuring out how to organize the effort, I realized that I was really only referring to 14 pages (pp. 17–31) of the book. The rest of the book is background, motivation, or outdated education on design (skip these parts, seriously; the style guide section may anger you).

At the end of the day, the moral of this book is simple:

Lean is a process to navigate risky unknowns in a problem space to get to a plan that best utilizes your resources. UX designers are one of those resources.

The lean process he describes in a mere 14 pages is as follows:

  1. Get the multidisciplinary leaders together to find highly unknown, risky assumptions around a project.
  2. Transform those assumptions into testable hypotheses the team can quickly validate (or not) before too many resources are wasted.

The “UX” in Lean UX is that many hypotheses can be tested with “MVPs” that are as simple as a launch page with sign-ups, a user evaluation, customer interviews, or A/B tests — none of which need a released product to test. Hence, you may not need a developer for your MVPs!

After trying this process out with my team, I believe the book should be titled “Lean UX: How User Experience Designers can help test MVPs.” This is because MVPs don’t necessarily get you an “Improved User Experience” as the title implies, but rather they enable you to test risky uncertainties early (that’s the Lean part) before you stick designers on creating a great experience (the UX part).

So, how did I run Lean UX?

When I sat down to try what the author outlines in those 14 pages, I realized that two things needed to occur:

  1. The organization must be open-minded enough to apply this strategy at all (a shift in mindset).
  2. Leadership must include UX in such strategy exercises (we are well equipped — and often underutilized — for strategy).

I created workshops where we could collaborate until our brains hurt (no more than 2 hours), then pick up again the next day. You may or may not be able to have an in-person pow-wow. If not, it turns out doing this remotely with paper & pen, screen shares, virtual sticky notes, and Google Docs can get you pretty far — that’s what we did.

My Lean UX workshops for Pimlio

The first step is to set the scope with a problem statement. The author provides a template and examples. The key here is to set the problem space, the high-level perspective for the team to work under, and nothing more. Don’t go down a rabbit hole with a long-winded statement filled with Key Performance Indicators (KPIs). You’ll get to the details later. I recommend drafting one ahead of time to kick off the workshops. Here’s what we used on Pimlio:

Pimlio should be designed to allow interaction designers to beautifully showcase their work. We have observed that the market isn’t meeting these goals, which is leaving interaction designers underexposed and less discoverable by the job market. How might we build Pimlio so that designers are successful at showcasing their work in an online portfolio?

Once you have the problem statement, it’s time to pull all the assumptions you’re making out of your brains. The worksheet he provides helps to get started. We chose to give everyone a limit of 15 minutes to simultaneously scribble their answers (assumptions) on paper before the chaos of collaboration crowded out any ideas. Question those fundamentals you thought to be fact, assumptions such as “they want it at all,” “they will pay more for x if y is free,” or “more will use it if.” After a timer rings, take turns sharing while someone transcribes the ideas onto sticky notes (one assumption per note). For example, on Pimlio we came up with:

Assumption: Designers want a stunning portfolio without having to code it or use heavy design tools.

My favorite part is the prioritization matrix. Before this point, no idea was a bad idea. Get it all out! Now it’s time to narrow down to the risky assumptions we can’t yet say are true or not.

Pimlio’s first Lean Prioritization Matrix (blurred for privacy)

Sort assumptions based on how certain you are of them and how risky they are. What does risky mean? The book guides us to go by “How bad would it be if you were wrong?” Now… go! Triage scrupulously along his four-quadrant matrix. Assumptions that land in the top right are the risky unknowns you test first with the remainder of the lean process. If they end up elsewhere, it doesn’t mean they aren’t important. It just means you are more confident in them and they aren’t risky to be wrong about (e.g. cheap to undo). So park them for reference later in case you don’t pivot after all.
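As a side illustration (not from the book), the quadrant triage can be sketched in a few lines of Python. The 1–5 scores, the threshold of 3, and the example assumptions here are all hypothetical:

```python
# Minimal sketch of the 2x2 prioritization matrix (hypothetical scoring scheme).
# Each assumption gets a certainty score and a risk score from 1 (low) to 5 (high);
# the cutoff of 3 is an arbitrary choice for illustration.
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    certainty: int  # how sure are we it's true? (1 = unknown, 5 = certain)
    risk: int       # how bad would it be if we were wrong? (1 = cheap to undo)

def quadrant(a: Assumption) -> str:
    # Top right of the matrix: low certainty AND high risk -> test first.
    if a.certainty <= 3 and a.risk > 3:
        return "test first"
    # Everything else gets parked for reference in case we don't pivot.
    return "park for later"

backlog = [
    Assumption("Designers want a portfolio without coding it", certainty=2, risk=5),
    Assumption("Designers already have work samples to upload", certainty=5, risk=2),
]
for a in backlog:
    print(f"{quadrant(a)}: {a.text}")
```

In a real workshop the “scores” are just where the sticky notes land on the wall; the point of the sketch is only that the top-right quadrant is the one that feeds the rest of the lean process.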

The best part about this collaboration is that all disciplines get to air their perspectives, triage them together, and end on a shared strategy.

Take a breath, because the hardest part is over! Have someone record those assumptions for the next workshop, where you simply reword them into hypothesis statements: we know x statement is right/wrong when y outcome. For example, following our assumption above, our hypothesis became:

We believe designers don’t want to spend a lot of time polishing their designs in Illustrator or Photoshop before adding them to their portfolios. We will know we are wrong when we see users uploading pre-annotated or pre-cropped images, or images that are essentially blocks of text.

Doing this step together is optional, I think. At first it’s good to do it together to get everyone practiced at it. But over time, or in the interest of time, you may choose to let a veteran transcribe assumptions into hypothesis statements on their own, then regroup to tackle the brainstorm in the next collaborative workshop.

You may notice my workshops don’t follow his suggested process exactly. We struggled with the order the author suggested for breaking down the outcomes then brainstorming “features,” especially with fewer examples in the book at this point. He dropped the ball making this clear. We decided we couldn’t really break down our hypotheses without brainstorming how to test the outcomes first. So, I swapped his order. With our hypothesis statements in hand, we brainstormed: how can we best test the outcomes? For example, in our brainstorm we came up with the following ideas:

Pimlio brainstorm on how to test a hypothesis

We gave each other two votes to pin what we thought would be the best way to test the outcome, and settled on no more than three (usually only one or two) ideas. This allowed us to break the hypotheses into measurable sub-hypotheses from there: We believe that [doing this] for [x person] will achieve [y outcome]. We know this is true when [measurable KPI from brainstorm]. Ours became:

We believe that
running a user evaluation of the image upload experience
for beta signups
will allow them to discover they prefer to annotate in Pimlio.
We know this is true when more than half of participants show delight after realizing they can annotate their image in Pimlio.

We believe that
telling people more about Pimlio annotations on the website
for everyone
will allow them to show interest in not having to annotate themselves.
We know this is true when 25% of website visitors go to that page.
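A measurable KPI like the 25% target above reduces to a simple rate check. This sketch isn’t from the book, and the visitor numbers are made up for illustration:

```python
# Sketch of turning a sub-hypothesis KPI into a pass/fail check (hypothetical numbers).
def kpi_met(hits: int, total: int, threshold: float) -> bool:
    """True when the observed rate meets or beats the target rate."""
    return total > 0 and hits / total >= threshold

# "We know this is true when 25% of website visitors go to that page."
visitors, page_views = 400, 120  # made-up analytics numbers
print(kpi_met(page_views, visitors, threshold=0.25))  # 120/400 = 30% -> True
```

The value of writing the KPI this unambiguously is that anyone on the team can pull the analytics numbers and declare the hypothesis validated or not, with no debate.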

His guidance on these last steps is extremely weak, so I continued to modify the process. We grouped the testable sub-hypotheses into themes we could work on together (e.g. launch page, user evaluations, interviews, networking, etc.). We then sorted those themes by the logical order in which we wanted to tackle them. For example, we wanted to run user evaluations before interviews so that we wouldn’t taint the studies. While referencing our themes in order, we roughly placed our activities into a schedule.

Pimlio’s first plan to test Lean hypotheses

We assigned an owner to facilitate the work, gave approximate deadlines, and our plan was born. Yippee! We collectively came to this plan, and felt great about it. On Pimlio, we decided to do this work in parallel while building our Beta. Many of the tests go hand in hand with what we want to build anyway. But now, we’ll either be confident by launch, or we’ll fail early, pivot, and pick ourselves back up.

Let’s say we do this. How often should we?

Any time you want to create something new, run Lean. Pivot if you have to. If you don’t, keep building. This means a new company, a new R&D project, or a new release (yup, that often). Literally, when it’s time to build something new.

When we ask product managers to deliver “requirements,” they can come up with anything: requirements so vague they stifle creativity, or ones far too concrete. Everything and anything can be deemed a requirement. Instead, they should put problems to solve on the table. Work with your product managers to produce well-written problem statements (you need this to get designers started anyway). Then, after prioritizing assumptions, you’ll collectively know where to run lean.

Sometimes there are a lot of risky unknowns, and sometimes there aren’t very many. When the market is in a stable period and you have confidence in what to do next, this process will leave you with fewer assumptions to test. But when assumptions come out that are risky unknowns, you’ll be glad you took the effort to evaluate them.

Don’t just read the book — do it too! Don’t have buy-in from your whole organization? Pretend like it’s your responsibility anyway, and show the greater team what you accomplished. They’ll want in on it too, and you can adjust the scope of your problem statements as they do.

The Good

This book gets you motivated to run Lean. It’s relatable to designers, and makes us feel like we are a part of it. It’s also practical in that it outlines the process itself so that we can follow it.

On Pimlio, we felt like we won a race — exhausted yet energized after executing these workshops. We didn’t just have a plan anymore. We had a smarter plan. Before going lean, we had what we thought was a small MVP, and while our scope was nice and tight, it wasn’t as smart. With concrete goals and KPIs to test, now our whole team can help collect those metrics while we work.

The Bad

The author skims over the most useful part, the lean process. More examples and case studies would make this book more practical indeed. He does not show end-to-end examples. And worst of all, he does not provide an example of his final output, the “hypotheses table.” It was an incredibly useful 14 pages, but alas it could have used at least 2 more.

Also, I just can’t end without saying I wholeheartedly disagree with him on the importance of a style guide that goes beyond visual design. Not only is he contradicting having pizza-sized teams (by advocating designs that can be consumed by thousands of developers), but it’s time wasted for the valuable UX resource (Oh, what I could do with the time I lost as a young designer maintaining guidelines. Bleh!). Amazing experiences break such “rules” all the time. Great designs don’t focus on consistency of the controls. I’ve seen enabled buttons be gray (traditionally a usability no-no) and text fields vary in size, all while rocking a great experience. It’s time we stop paying homage to the false gods of consistency.

Bookclub Discussion

Have you tried running lean with your team?

How did you end up doing it?

Did you modify the process?