Navigating the Complexity of Design through Alignment and Feasibility

Two of the biggest troublemakers that can impact your product design project are a lack of team alignment and a lack of awareness of feasibility. In this article, I will reference these two scoundrels as “The What” and “The How”.

  • The What: a lack of stakeholder alignment on the requirements of the initiative.
  • The How: a skewed understanding of what it would require to execute an initiative. This typically involves development execution but could include design and content requirements.

Having a keen understanding of your certainty on both “The What” and “The How” will help you to better prepare activities that will reduce your chances of being knocked off course by inevitable setbacks.

Introduction to The Stacey Matrix

As it turns out, I am not the only one who has been affected by a lack of understanding of “The What” and “The How”. Ralph D. Stacey developed the Stacey Matrix, a model for approaching complex situations in management settings. The model plots certainty about the course of action that needs to be taken on the horizontal axis (“The How”) and agreement on the vertical axis (“The What”). Where your problem lands on the matrix helps to inform the type of intervention that might benefit you the most.

With this approach, Stacey suggests a variety of decision-making processes:

  1. Obvious: Technical rational decision-making, which gathers data from the past and uses it to predict the future.
  2. Complicated What: Political decision-making, focused on the alignment, negotiation, and compromise used to create the organization’s agenda and direction.
  3. Complicated How: Judgemental decision-making, which heads towards an agreed-upon future state even though the specific paths cannot be predetermined.
  4. Chaos: This is an area of avoidance, according to the model. However, it is possible to increase certainty on the How or the What and then reassess, but it is risky.
  5. Complexity zone: This is the zone of high creativity, innovation, and breaking with the past to create new modes of operating.

Learn more about the Stacey Matrix.

Below I will describe some scenarios that I have run into, and then offer a couple of tactics that have helped me along the way, reducing frustration and helping me to be a much more effective and efficient designer.

The What

Like many other User Experience designers, early in my career, I almost exclusively thought about “The What” purely from a user-centric perspective. I mean geez… “user” is part of my title, isn’t that what is expected of a user experience designer? Countless hours spent interviewing the user, understanding the user, empathizing with the user, and creating artifacts that represent the user’s needs and process. These artifacts would help to guide me and my team away from the feeling of uncertainty and towards a polished high-fidelity prototype that was waiting to be tested and validated by the user. Unfortunately, right around this time, I would start to get pushback from the stakeholders. Tensions would start to flare, and those ideas that showed so much promise during user testing would become unrecognizable iterations of themselves.

Like many other designers who have lived this truth, I found it immensely frustrating, especially when user feedback was going really well. I’d sit there puzzled until one day it hit me: I didn’t understand my stakeholders, because I rarely took the time to understand them, at least not with the same rigor I put into understanding the customer I was designing for.

It turns out stakeholders have different needs and motivations that also require your attention. A lack of understanding between the designer and the stakeholders, as well as across stakeholders themselves, is a key indicator that you will likely have a very difficult time getting your solutions across the finish line. Departments will feel “unheard”, and you will likely be barraged by questions and opinions that dismantle your hard work, piece by piece. This sometimes leads to delays and, in extreme cases, to rash, untested decisions made by management to meet deadlines, which introduce a lot of risk to the success of the product.

What we could do

Sales, content teams, customer service reps, and others across the organization may be very eager to see your solutions, as they may drastically impact their world. Not understanding their perspective, and the potential impact our solutions might have on them, beforehand opens us up to being blindsided later in the process. Here are a few tactics that might help:

Rally around a common problem
It is impossible to get anywhere until you know where you are going. The problem is that each of your stakeholders may have a different destination in their head. One of the very first things you need to do in order to set yourself up for success is to have everyone agree on a common problem.

There are a variety of ways of doing this. Personally, I prefer running 1-on-1 semi-structured interviews with my key stakeholders, pulling all of that information into an artifact that describes the mental models that have been shared with me, and then doing a prioritization exercise with the group as a whole. In one recent project, I decided to break each mental model down into a task-based Service Blueprint, but it is best to pick the right tool for the right problem and context. There are no silver bullets.

In a recent research project exploring the notion of student content discovery, I ended up with 8 very different takes on the topic.

As-is Service Blueprints Artifacts Based on Stakeholder Interviews

With this artifact in hand, I was able to walk the entire group through each journey and then narrow down the project scope immensely. If I hadn’t done this beforehand, and I presented my stakeholders with any idea, no matter how well it tested, I would have had a hard time getting buy-in from the people that were looking at the initiative from a completely different mental model.

Affinity Mapping and Prioritization
If time is not on my side and I need to get rolling quickly, then a popular approach that I’ve used is a post-it activity where participants individually write down as many problems or opportunities as they can on sticky notes for a few minutes, and the group then converges the stickies into themes. The themes are then voted on using dots (about 3 per person), and you are left with a prioritized list of ideas.
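The tallying step of this exercise is simple enough to sketch in a few lines of Python. The theme names and vote counts below are hypothetical placeholders, not from a real session:

```python
# Minimal sketch: tallying dot votes after an affinity mapping session.
# Each participant places ~3 dots; we record one list entry per dot.
from collections import Counter

dot_votes = [
    "Content discovery", "Content discovery", "Onboarding",
    "Search relevance", "Content discovery", "Onboarding",
    "Search relevance", "Search relevance", "Mobile access",
]

tally = Counter(dot_votes)
prioritized = tally.most_common()  # highest-voted themes first

for theme, votes in prioritized:
    print(f"{theme}: {votes}")
```

In practice the “tally” happens visually on the wall, of course; the point is just that the output of the activity is a ranked list of themes to carry into the next exercise.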

Impact vs Effort
If it is not possible to test all the ideas that float to the top from the affinity mapping exercise, then I ask stakeholders to further refine the problem spaces by introducing an impact vs. effort quadrant. This activity helps the team decide which solution to pursue from the list of possible solutions produced by the dot voting exercise above. A team would typically pursue the problems that are “Quick Wins”, and possibly venture into “Major Projects” that are both high effort and high impact.
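As a minimal sketch, the quadrant bucketing can be expressed in code. The threshold, the example problems and 1–10 scores, and the labels for the two low-impact quadrants (following the common “Fill-in” / “Thankless Task” convention) are my assumptions, not part of the exercise as described above:

```python
# Minimal sketch: bucketing problems into an impact vs. effort quadrant.
# Scores and problem names are hypothetical placeholders.

def quadrant(impact, effort, threshold=5):
    """Map an (impact, effort) score pair to a 2x2 quadrant label."""
    if impact > threshold and effort <= threshold:
        return "Quick Win"        # high impact, low effort
    if impact > threshold:
        return "Major Project"    # high impact, high effort
    if effort <= threshold:
        return "Fill-in"          # low impact, low effort
    return "Thankless Task"       # low impact, high effort

problems = {
    "Improve search filters": (8, 3),
    "Rebuild recommendation engine": (9, 9),
    "Tweak footer links": (2, 1),
}

for name, (impact, effort) in problems.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The value of the workshop version is the conversation, not the math, but writing the rule out makes it obvious that everything hinges on where the thresholds sit and who sets the scores.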

A warning on this activity: if you listen closely to the conversations participants have when discussing the position of each problem on the map, you might start hearing them talk about problems in terms of solutions. Ideating solutions at this point is a bit premature unless you’ve associated a Cynefin value with them and those problems fall in the “Obvious”, and possibly the “Complicated”, realm depending on the people in the room, since this is the realm of experts. Attempting to map Complex problems would be riddled with so many assumptions that I’d find it hard to place any trust in their placement.

This mapping is also a little tricky because words like “Impact” and “Effort” are both a bit abstract. Impact on whom or what? The user, sales, or some random metric that each individual participant has in their mind? “Effort” is also very subjective, and the relative cost of different types of effort can be radically different. For example, high development effort and high content effort (depending on the content) could sit at opposing ends of the cost scale due to salaries, the ability to quickly scale, access to talent pools, and the experience needed to complete the task.

If you decide to go this route, make sure people are very vocal about why they are mapping things the way they are, so that everyone, especially the decision maker, has a clear understanding of the assumptions behind where each problem landed. If done well, this can be a very quick and effective way of breaking down problems.

AJ & Smart uses this process as part of its Sprint Process. They describe it in the video below.

Multiple Criteria Decision Analysis?
If you can make the time, though, I have found much more value in using Multiple Criteria Decision Analysis than in Impact vs Effort. However, it is not something you could do well in a short and concise collaborative activity.

Through stakeholder interviews, you are able to understand what factors various stakeholders value. Then, you can add them to a rubric and agree on the weighting of each criterion.

Some of the criteria I’ve used in the past include the impact on user experience, product differentiation, market size, technical effort, research effort, and various others. What I like about this approach is that I am confident that stakeholders and project sponsors have a shared understanding of what the team has agreed on to compare opportunities and/or solutions. I am also much more confident that I will be able to pull the “right people” into the conversation, resulting in sizings that carry more weight. Developers can rate development effort, content people can rate content effort, and so on. The decider can then set a weight distribution across all criteria to help them make a better decision.
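The mechanics of such a rubric are easy to sketch. Everything below is a hypothetical example: the criteria and weights, the two opportunities, and the 1–5 ratings would all come from your stakeholders and the relevant experts, not from me:

```python
# Minimal sketch of a weighted-criteria rubric (MCDA-style scoring).
# Weights, opportunities, and ratings are hypothetical placeholders.

weights = {                      # agreed weighting, summing to 1.0
    "user_experience": 0.30,
    "differentiation": 0.20,
    "market_size": 0.20,
    "technical_effort": 0.20,    # rated so that higher = less effort
    "research_effort": 0.10,     # rated so that higher = less effort
}

opportunities = {
    "Learning-objective search": {
        "user_experience": 5, "differentiation": 4, "market_size": 3,
        "technical_effort": 2, "research_effort": 4,
    },
    "Curated playlists": {
        "user_experience": 3, "differentiation": 2, "market_size": 4,
        "technical_effort": 5, "research_effort": 5,
    },
}

def weighted_score(ratings):
    """Sum each rating multiplied by its agreed criterion weight."""
    return sum(weights[c] * ratings[c] for c in weights)

ranked = sorted(opportunities,
                key=lambda o: weighted_score(opportunities[o]),
                reverse=True)

for name in ranked:
    print(f"{name}: {weighted_score(opportunities[name]):.2f}")
```

Note the design choice of rating “effort” criteria so that higher means less effort; it keeps every criterion pointing in the same direction, so a single weighted sum ranks the opportunities.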

Feel free to use the sample spreadsheet that I created, complete with weighting formulas.

The SWOT Analysis
All of your key stakeholders will have opinions around the initiative prior to starting any work. Some of the feedback might be positive, others might be negative. A simple way of capturing these opinions in a structured manner is by using a SWOT analysis.

A SWOT is broken down into a 2x2 grid that allows you to uncover your stakeholders’ thoughts around…

  • Strengths: What strengths does the organization have that place it at a competitive advantage?
  • Weaknesses: Where is the organization falling short, and what should it avoid doing?
  • Opportunities: What trends are stakeholders seeing in the space? Are they aware of any gaps?
  • Threats: What are competitors doing? How is technology shifting?

This technique could be done either synchronously or asynchronously. I always prefer talking with stakeholders 1-on-1, but I’ve had success doing this activity using traditional affinity mapping, as well as remote collaboration software like Mural.

If you’d like to learn more about using SWOT Analysis, Codecademy has a great post on the topic.

How will your stakeholders measure the success of the solution that you present?
It is helpful to get your stakeholders to not only talk about, but agree on, what they perceive as success for your project. Is it lowering the call volume for a particular process? Is it increasing engagement with a feature? Whatever it might be, it is important to understand these measures beforehand. This not only helps you better understand what your stakeholders value, but it also helps when articulating design decisions if they are connected to your stakeholders’ own metrics of success. Let’s compare the following two statements.

  • We added the ability to find educational content using learning objectives.
  • You mentioned that content discoverability was the main concern of yours so we decided to add the ability to find educational content using learning objectives. Early research shows that this approach increases our content click-through rate by X%.

When it comes time to present your design to your stakeholders later in the process, having an agreed-upon anchor to hang your design direction on may help reduce pushback.

I highly suggest reading Articulating Design Decisions: Communicate with Stakeholders, Keep Your Sanity, and Deliver the Best User Experience by Tom Greever.

The How

It is far too common that developers are either not invited or not allowed to be a part of early-stage product design. Many times, the first glimpse of a solution that a developer gets in an Agile organization is during an epic grooming ceremony. This approach can cause a lot of issues once developers uncover the complexities of implementation that only they would be able to identify. Similar to the challenges I’ve described with stakeholders, last-minute compromises are made by the team in order to keep the machine moving, and with that, risks and uncertainty about the new direction start to weigh down on the product designer and others.

While a significant amount of a designer’s time can be spent validating and invalidating assumptions that are being made around the problem they are trying to solve, very little is typically allotted to validating the technical assumptions that early design ideas and comps might be making. Getting a good sense of the “How” is complicated because it requires a keen understanding of the technology investment needed to implement the solution. What might seem obvious to an untrained eye, such as a designer or product owner, may require an immense amount of work. This leads to ridiculously large effort sizings and results in delays.

What we could do

The most ideal solution to this issue is to have developers be a part of the discovery process early on. This is easier said than done and requires you to sell the value of their involvement to Engineering leadership.

While (in my experience) it is highly unlikely that organizations will allow a developer to sit in a design sprint for 5 days and not be coding, you could do a few things to get that feedback early. Here are a couple of ideas that have worked for me.

The Bat Phone
Most recently, thanks to our stellar Engineering leadership, each of our teams has been assigned a designated developer contact who has been empowered to help UX and Product identify potential feasibility risks early. These are ad-hoc interactions that typically take less than 15 minutes, but the value is priceless. As we start to narrow down on solutions, they also offer what we call Pebble, Rock, or Boulder sizing, which is meant to be a quantified SWAG (Scientific Wild Ass Guess). In order to create a safe space for sizing, we have intentionally avoided the T-shirt sizes that developers would give during Story Sizing. This is meant to increase their confidence that they will not be on the hook for under- or overestimating.

Expert Lens Dot Voting
It is not always possible to get developers or other specialists who sit outside of the design team to offer feedback early and often. Additionally, sometimes developers won’t feel comfortable giving any type of size. In these cases, I’ve used a less formal sizing approach that helps identify risks quickly, using a variation on dot voting inspired by “Six Thinking Hats” voting.

The event is called an “expert lens dot voting” gallery walk, and the premise is quite simple. Each expert will vote through their lens of expertise. Along with a couple of developers, I might invite learning scientists, marketers, and salespeople. Each person is given a handful of red and green dots with their initials on them.

They are then asked to walk around a conference room that I reserved for the day and plastered with sketches of all the ideas that came out of a Design Sprint or other ideation session. One by one, they are asked to add green dots to ideas or features that they have high certainty they could code or sell, or that they feel align nicely with learning research. They are also asked to mark anything that is a risk. In the end, you will have a good sense of which ideas would require the most technical effort, which have high confidence of being sellable, and which offer the most learning opportunity for students. If you’d like to gather qualitative feedback as well, you could encourage the experts to use sticky notes to add context to their +/- feedback.

Story Mapping
In most cases where time and development effort are at odds, it is important that you give yourself ample time to think through some incremental steps towards your ideal solution. The last thing that you want to do is make compromises on the spot, so coming in with a plan, even if it is in your back pocket, is super helpful. It is even more helpful if UX, Product, and Development draft this plan together and agree on an MVP.

Author/Copyright holder: Henrik Kniberg. Copyright terms and license: All rights reserved

An MVP offers the user the most feasible solution, given constraints of time and resources, that delivers some value and can be incrementally built upon over time. Story Maps take this concept and lay it out with stickies, serving as a release plan that everyone can agree on.

To learn more about Story Mapping I recommend you read User Story Mapping by Jeff Patton.

Conclusion

There is no silver bullet for reducing the complexity of a problem. Some problems you face will be harder than others no matter how much you prepare for them. I discussed this in a previous post on Communicating the Complexity of Design using Cynefin. That being said, by always having a pulse on your certainty of both “The What” and “The How”, you are sure to avoid some frustration in the future, and ultimately you and your team will be able to work more efficiently, building trust with your stakeholders along the way.

I would love to hear about your tips and tricks!

Additional Helpful Resources

New Haircut wrote a great article that gets into additional tasks that help you and your team be ready to run a Design Sprint, although I’d say these tactics are not exclusive to any particular methodology. I’d recommend you read the entire series.

Designer, Developer, Dad & maker of things that teach stuff. Director of UX at Macmillan Learning & Adjunct Instructor at NYU’s Digital Media for Learning