What if Your Web Design Statistically Sucks?

Matthew Cook
Jun 29, 2017 · 6 min read

Design is a provable or disprovable hypothesis. Approaching it as such puts the onus on clients to discuss my team’s work in the context of its fitness for a purpose. I’m interested in knee-jerk reactions, but only informed decisions should hold sway over whether a design decision is correct. A design being trendy, or personally appealing to a manager, shouldn’t be the sole measure of its success.

One of the toughest things to handle during a design review is a negative reaction. It’s even harder when the critique comes with testing results — especially if your client or organization is particularly fond of approving design based on data. If you’re familiar with this problem, you’ve probably completed a design, lobbed it over the fence to somebody, and then attended a testing-results meeting a week later. After some talk about relevant sample sizes, standard deviations, test methodologies, and control groups, you find out what you really came for: how did it do? If your designs are successful, they’re implemented or you get to move on to the next phase of the project. But how do you handle it when your designs statistically suck?
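The article never shows the math behind those meetings, but the kind of question a results deck answers — “did the new design really outperform the old one, or is the difference noise?” — usually comes down to a significance test. Here’s a minimal sketch of a two-proportion z-test using only Python’s standard library; all the numbers are hypothetical, not from the deck discussed below.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?

    conv_* = number of conversions, n_* = number of visitors shown
    each variant. Returns the z statistic and two-tailed p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: old design converted 120 of 2,400 visitors,
# the redesign converted 150 of 2,400.
z, p = two_proportion_z(120, 2400, 150, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that with these made-up numbers the redesign looks better (a 6.25% rate versus 5%) yet the p-value hovers just above the conventional 0.05 threshold — exactly the kind of ambiguous result that makes a review meeting tense. That ambiguity is the statisticians’ territory; the designer’s job, as argued below, is making sure the thing being measured matches the thing being designed.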

Word association test results

This is a page from a results deck I received recently. In this case, new work is being benchmarked against past work over a set of criteria: association with key words. These designs actually scored pretty well. My assertion is that we scored well because there was alignment between the metrics for testing and the intent for the designs. Everyone understood what we were shooting for. While there were definitely nervous feelings ahead of time, we weren’t surprised to see mostly successful results.

There are ingenious tests and methodologies to measure the effectiveness of pretty much any aspect of a design. The strain between testing and design rarely comes from the test method or how the test was run. Instead, the strain often comes from the misalignment of design intent and the criteria being tested against.

As a designer, you don’t want to step into the shoes of your counterparts who specialize in testing. Understanding and selecting testing methods, building tests, then effectively running the test with appropriate controls is their expertise. Where you do want to work with your testing counterparts is in understanding exactly what about your designs will be tested and what the desired outcome is.

If test results are a chief decision-making tool, your designs should be created to have the best chance of exceeding that test’s criteria.

You can probably think of well-designed products and websites that would test phenomenally well on criteria we’re familiar with but that were eaten alive by competition (Rdio, anyone?). In those cases, design that tests well still failed. This brings us to the big questions behind how to handle design, opportunity, and testing:

What is the role of creativity in relationship to data and opportunity? Can opportunity be defined by testing? Can creativity be tested against opportunity?

Creativity inside the box

Opportunity is identified and then well defined. Creativity is used to pursue specific opportunities within the bounds of established criteria and past testing results. Finally, any design hypotheses presented will be tested in comparison to the understood opportunity.

I once had a professor who used to say something like this:

No employer wants to hire you to change everything. They want to hire you to sit in your box, get to know your box, understand your box; then after 2–3 years, you can think outside of the box.

He was a finance professor. His point was that no new hire in finance had any business getting creative with the numbers. He’s right. This thinking is also often applied to web design within large businesses with a long legacy. They’ve gotten really good at making boxes as a way to maintain a competitive advantage. Corporatized web design boils down to methods of control. When many high level managers have decision-making power, corporatized design is necessary to create consensus and coordinate efforts to a singular goal. It’s also cost effective since goals are clear and the desired design output can be very specific. The danger here is that corporatized design isn’t great for innovation. If a business stops innovating, it starts falling behind competition.

Juxtaposed with corporatized design is a fundamentally different way of employing creativity. We could call it interpretive web design. Here, creativity is a way to pursue many potential opportunities within the broad realm of what the business is willing to consider. At the center of that broad realm is a core of data and assertions that serve as a foundation to build from. It looks something like this:

Creativity outside the box

Interpretive web design is controlled, but exploratory. Instead of operating inside a box, it’s intentionally outside of all boxes. Where corporatized web design refines a known opportunity, interpretive design is a vehicle for finding new competitive advantages, users, and markets. It needs to be free of the multitude of small constraints necessary to tightly control and refine a specific opportunity. Instead, it needs foundational data and assertions it can build on in pursuit of something different from what’s been done before. Interpretive design is risky. Because the definition of opportunity is broad, you may discover it’s not worth pursuing only after you’ve invested a large amount of time and effort. But it can also be extremely profitable: because you’re pursuing something new, positive results tend to yield new revenue streams, user bases, or products.

So, let’s bring this full circle. Corporatized and interpretive web design aren’t mutually exclusive. In fact, many businesses do both. Whether each is done successfully often boils down to how it’s understood, managed, and tested.

If a company asks for interpretive design, but manages and tests it like corporatized design, the designs will elicit fear, skepticism, and retrenchment.

If a company asks for corporatized design, but expects the type of yield and unbounded thinking that interpretive design offers, the designs will elicit boredom, frustration, and deficiency.

Designs that test poorly aren’t always bad. Sure, it’s a possibility that your design sucks. It’s also possible that the wrong things were tested — that the design is successful at the wrong things. The way to fix this is to get ahead of it. Lead the conversation about desired outcomes, the opportunity your project is attempting to capture, and how your work will be critiqued. Understand whether you need to sit inside a box or if you’re being asked to pursue a wild idea. Ensure that the expectations for your design are reasonable given what’s being asked of you. The best way to ensure your design will test positively is to design with the test in mind. The best way to do the right design is to understand why it’s needed.

Thanks to Matthew Smith

Matthew Cook

Written by

Independent Producer and Director, Partner at @atlaslocal. Product-guy-partner at @reallygoodemail. I write about the business of design. www.mattecook.com
