Continuous improvement in UX design

Have you ever wondered whether the things you’ve designed have (really) met their goals? Is your design serving its purpose well, is it working as well as it can, and can it be optimized further?

Start with data

Ouch! I get it… I understand that headache you feel when you hear the word “data”. It can be seriously complicated; there’s certainly no doubt about it.

Well, that’s not a real excuse to design solely on gut feel, guesswork or unexplainable artistic flair. It’s simply no different from flying blind!

Without knowing the trajectory, luck is the only factor preventing a crash

Through my years of designing both physical and non-physical products, I’ve found that the (UX) design process is a never-ending story, one where designers swear by continuously improving and optimizing the lives of others.

So ask yourself this: for a given design release, how do you know which part(s) of the UX have improved? Would you know if the results of your design have regressed, or taken a turn for the worse?

If you have not done so already, start by setting a baseline to benchmark subsequent work against. And what better way to do this than with data? Let’s take a look at how…

The improvement cycle

The way I strive to (continuously) improve my designs is by leveraging and adapting DMAIC, a tool frequently used in Six Sigma projects, which I picked up when I started my career as a mechanical engineer.

Modified DMAIC flow

DMAIC stands for Define, Measure, Analyze, Improve and Control.

All the phases are meant to run in the order the abbreviation spells them out. However, in my UX work I have never found a need for the last ‘Control’ phase, and have thus excluded it from my process.


Let’s start with Define, which is probably the most important phase. In the scope of UX design, this is really about understanding and setting goals, whereby we define:

  • A problem (that our design has to solve)
  • The intended users
  • The requirements (or voice of the customer)
  • Design scope
  • Success criteria

Make a list of these definitions. Better yet, make a poster and pin it up somewhere around your work area. It will benefit not only you, but also your clients, stakeholders and even your developers, helping everyone understand the team’s focus.


The second phase, Measure, is about objectively establishing current measures as the baseline for improvement.

You may ask, “what the heck are measures?” Measures have to be tied to the goals set in the Define step. Depending on your definitions, they could be as simple as conversion rates on a single screen, or they could comprise several interactions spanning multiple touch-points across the entire customer journey. My advice? “Start small, start discrete.”
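As a sketch of what a small, discrete measure looks like in practice, the snippet below computes a conversion rate from two hypothetical analytics counts. The helper name and the numbers are my own invention, not from any specific analytics tool:

```python
# A minimal, hypothetical example of a discrete measure:
# conversion on a single screen.
def conversion_rate(visitors: int, conversions: int) -> float:
    """Fraction of visitors who completed the target action."""
    return conversions / visitors if visitors else 0.0

# Made-up counts for the current (baseline) release.
baseline = conversion_rate(visitors=1200, conversions=180)
print(f"Baseline conversion: {baseline:.1%}")  # 15.0%
```

Starting with one number like this keeps the baseline easy to explain and easy to repeat on the next iteration.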

Another word of advice: do spend extra effort assessing the suitability of these measures. One thing I know from playing with measures and data is: “Garbage in, garbage out”.

Once you are done with the baseline, repeat the measurements for each new iteration, and then compare the performance of the iterations to determine objectively whether an improvement has been made.

e.g. comparing users’ willingness to submit a form
Do not wait for a production release to get your iteration measures!
The earlier you start measuring, the earlier you can react to the results.
Start early by measuring through prototypes, or simply A/B test them live.
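To make the comparison concrete, here is a minimal sketch, with invented numbers, of lining each iteration’s conversion measure up against the baseline:

```python
# Hypothetical submit counts per iteration (e.g. from prototype
# tests or a live A/B test) — the names and numbers are made up.
iterations = {
    "baseline":    {"visitors": 1200, "submits": 180},
    "prototype-A": {"visitors": 400,  "submits": 72},
    "prototype-B": {"visitors": 400,  "submits": 54},
}

base = iterations["baseline"]["submits"] / iterations["baseline"]["visitors"]
for name, d in iterations.items():
    rate = d["submits"] / d["visitors"]
    # Positive delta = improvement over the baseline, negative = regression.
    print(f"{name:12s} {rate:6.1%}  ({rate - base:+.1%} vs baseline)")
```

Laying the iterations side by side like this makes regressions just as visible as improvements, which is the whole point of measuring.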


Now that you have data, you can identify the good and bad elements. These could be interface objects, parts of a journey, etc. I normally start this by asking a few questions:

  • Which element works well for the user, and which doesn’t?
  • Is there a discrete part of the element that is causing its ‘goodness’ or ‘badness’?
  • Are the elements ‘bad’ because of their context? Would they turn ‘good’ somewhere else?
  • What should we keep and ditch?

Make a table, note down the observations and trends, and assign actions for these elements. This table will help inform your further design iterations.
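Such a table can be as simple as a list of records. The elements, observations and actions below are purely hypothetical, just to illustrate the shape:

```python
# A hypothetical analysis table: element, verdict from the data,
# the observation behind it, and the action for the next iteration.
observations = [
    {"element": "postcode field", "verdict": "bad",
     "observation": "most drop-offs happen here", "action": "autofill from city"},
    {"element": "progress bar", "verdict": "good",
     "observation": "completion rose after adding it", "action": "keep"},
    {"element": "inline tooltips", "verdict": "bad",
     "observation": "rarely opened on mobile", "action": "try plain helper text"},
]

for row in observations:
    print(f"{row['element']:16s} {row['verdict']:5s} "
          f"{row['observation']:32s} -> {row['action']}")
```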


We now know more about our design from the Analyze phase. If required, we can start whipping up the next iterations and repeat the measurements.

In the course of doing so, you may uncover new problems and other interesting findings. Keep a journal of these findings; stop repeating your (design) sins, and carry over the good practices from this and previous continuous improvement exercises.

Once you reach satisfactory results, deploy them. Hooray! Give yourself a pat on the back.

But don’t celebrate for too long; with one round of improvement done, the next one is always around the corner.


  1. Good design requires a continuous improvement process. Live with it!
  2. For the gutsy designers: please use data, which is probably more reliable than your guts.
  3. (Unlike process and data), artistic flair is unexplainable and hard to fathom; it is also not sustainable, predictable or repeatable.
  4. Let’s be objective about our design, personal feelings don’t really count.
  5. For those who question the statistical significance of data: always remember that “some data is better than no data”.
    So what if the data is garbage? At least you will know it’s garbage at some point, rather than being totally oblivious to it, right?
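That said, if you do want a quick sanity check on significance, a two-proportion z-test needs nothing beyond the standard library. The numbers below are hypothetical; as a rule of thumb, |z| at or above 1.96 roughly corresponds to p < 0.05 (two-tailed):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up A/B numbers: 72/400 converted vs 54/400.
z = two_proportion_z(conv_a=72, n_a=400, conv_b=54, n_b=400)
print(f"z = {z:.2f}")
```

With these invented counts the difference falls just short of the 1.96 threshold, which is exactly the kind of thing worth knowing before declaring victory.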