Design is a provable or disprovable hypothesis. Approaching it this way puts the onus on clients to discuss my team’s work in the context of its fitness for a purpose. I’m interested in knee-jerk reactions, but only informed judgments should determine whether a design decision is correct. Design that is trendy or personally appealing to a manager shouldn’t be the sole measure of success.
One of the toughest things to handle during a design review is negative reactions. It’s even harder when the critique comes with testing results — especially if your client or organization is particularly fond of approving design based on data. If you’re familiar with this problem, you’ve probably completed design, lobbed it over the fence to somebody, and then attended a testing results meeting a week later. After some talk about relevant sample sizes, standard deviations, test methodologies, and control groups, you find out what you really came for: how did it do? If your designs are successful, they’re implemented or you get to move on to the next phase of the project. But how do you handle it when your designs statistically suck?
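To make “statistically suck” concrete: a testing team comparing a new design against a control often runs something like a two-proportion z-test on a conversion metric. As a rough illustration only (the function name and conversion numbers below are hypothetical, not from any real results deck), here is a minimal sketch in Python using just the standard library:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of a control (A) and a new design (B).

    Returns (z, p) where p is the two-sided p-value. A minimal sketch of
    the comparison a testing team might run; real analyses would also
    check sample size, statistical power, and test assumptions.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: old design converted 200/5000, new design 260/5000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(z, p)
```

A p-value below whatever threshold the organization uses (conventionally 0.05) is what turns a design review into a data-backed verdict, which is exactly why the criteria being measured need to match the design’s intent.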
This is a page from a results deck I received recently. In this case, new work is being benchmarked against past work on a set of criteria: association with keywords. These designs actually scored pretty well. My assertion is that we scored well because the metrics for testing were aligned with the intent of the designs. Everyone understood what we were shooting for. While there were definitely nervous feelings ahead of time, we weren’t surprised to see mostly successful results.
There are ingenious tests and methodologies to measure the effectiveness of pretty much any aspect of a design. The strain between testing and design rarely comes from the test method or how the test was run. Instead, the strain often comes from the misalignment of design intent and the criteria being tested against.
As a designer, you don’t want to step into the shoes of your counterparts who specialize in testing. Understanding and selecting testing methods, building tests, and running them effectively with appropriate controls is their expertise. Where you do want to work with your testing counterparts is in understanding exactly what about your designs will be tested and what the desired outcome is.
If test results are a chief decision-making tool, your designs should be created to have the best chance of exceeding that test’s criteria.
I imagine some of you think this is a good insight, and some of you silently noped inside your head. If you disagree with that last statement, it’s probably because you disagree with the relationship between creativity and data that it assumes. Let’s face it: most teams and individuals creating web designs don’t get to decide how data is used in relation to design creativity. That’s usually a decision made by corporate leadership and then hardwired into the organization. A designer’s job is to understand the relationship that exists and thrive within it.
You can probably think of well-designed products and websites that would test phenomenally well on criteria we’re familiar with but that were eaten alive by competition (Rdio, anyone?). In those cases, design that tests well still failed. This brings us to the big questions behind how to handle design, opportunity, and testing:
What is the role of creativity in relation to data and opportunity? Can opportunity be defined by testing? Can creativity be tested against opportunity?
Up until now, we’ve been assuming an environment where the purpose for design and testing is already established: it’s tightly synced with business knowledge about market segments, target users, and other hard-won, expensive insights. Design creativity in this environment is usually employed as a tool to pursue a known advantage specific to that company. We could call this corporatized web design: design that has been harnessed and bent to the specific context of a business. All the creativity a designer can employ must serve the business cases at hand and maintain what makes that business continually successful. Most likely, the business approaches the relationship between data, creativity, and opportunity like this:
Opportunity is identified and then well defined. Creativity is used to pursue specific opportunities within the bounds of established criteria and past testing results. Finally, any design hypotheses presented will be tested in comparison to the understood opportunity.
I once had a professor who used to say something like this:
No employer wants to hire you to change everything. They want to hire you to sit in your box, get to know your box, understand your box; then after 2–3 years, you can think outside of the box.
He was a finance professor. His point was that no new hire in finance had any business getting creative with the numbers. He’s right. This thinking is also often applied to web design within large businesses with a long legacy: they’ve gotten really good at making boxes as a way to maintain a competitive advantage. Corporatized web design boils down to methods of control. When many high-level managers have decision-making power, corporatized design is necessary to create consensus and coordinate efforts toward a single goal. It’s also cost-effective, since goals are clear and the desired design output can be very specific. The danger is that corporatized design isn’t great for innovation, and a business that stops innovating starts falling behind its competition.
In contrast to corporatized design is a fundamentally different way of employing creativity. We could call it interpretive web design. Here, creativity is a way to pursue many potential opportunities within a broad realm of what the business is willing to consider. At the center of that broad realm is a core of data and assertions that serves as a foundation to build from. It looks something like this:
Interpretive web design is controlled, but exploratory. Instead of working inside a box, it’s intentionally outside of all boxes. Compared to corporatized web design, it’s a vehicle for finding new competitive advantages, users, and markets. It needs to be free of the multitudes of small constraints necessary to tightly control and refine a specific opportunity. Instead, it needs foundational data and assertions it can build on in pursuit of something different from what’s been done before. Interpretive design is risky. Since the definition of opportunity is broad, you may discover it’s not worth pursuing after you’ve already invested a large amount of time and effort. However, it can also be extremely profitable: when you pursue something genuinely new, positive results tend to yield new revenue streams, user bases, or products.
So, let’s bring this full circle. Corporatized and interpretive web design aren’t mutually exclusive. In fact, many businesses do both. Whether each is done successfully often boils down to how it’s understood, managed, and tested.
If a company asks for interpretive design, but manages and tests it like corporatized design, the designs will elicit fear, skepticism, and retrenchment.
The reverse is also true:
If a company asks for corporatized design, but expects the kind of yield and unbounded thinking that interpretive design offers, the designs will elicit boredom, frustration, and a sense of deficiency.
As a designer, you need to understand the intended yield or value of your design. We often lose sight of this while designing for the user, current trends, or specific business cases.
Designs that test poorly aren’t always bad. Sure, it’s a possibility that your design sucks. It’s also possible that the wrong things were tested — that the design is successful at the wrong things. The way to fix this is to get ahead of it. Lead the conversation about desired outcomes, the opportunity your project is attempting to capture, and how your work will be critiqued. Understand whether you need to sit inside a box or if you’re being asked to pursue a wild idea. Ensure that the expectations for your design are reasonable given what’s being asked of you. The best way to ensure your design will test positively is to design with the test in mind. The best way to do the right design is to understand why it’s needed.