The art of messing up a project

And how to possibly avoid it.

Kilian So
Published in mutoco
9 min read · Jul 5, 2021

«Agonies of creation» by Leonid Pasternak (source: Wikimedia Commons, Elmira Karamniayfar)

For about 10 years I’ve been creating digital products as a designer & developer. As part of great teams, I have helped dozens of projects get off the ground successfully. A few times, however, things (almost) went wrong, and the reasons were nearly always the same.

Based on my own experience and opinion, this blog post is an attempt to point out these reasons so that you can identify and avoid them in your projects early on.

You make a plan

No matter whether you want to start a small weekend project or have landed a big client job, there are fundamental questions to answer at the very start. What is the point of all this, and for whom? What can be created, how, and by when? Early in the process, most of these questions can only be answered by making assumptions.

There are many methods, principles and processes for turning these assumptions into validated statements, suited to every project phase. Far too many to name here, and too diverse for me to make a blanket determination of which approach is best.

The purpose of all of these is to help you decide at any point in time, based on solid project planning, what to do next and whether you are still doing the right thing.

So, how is it possible for a project to stumble nonetheless?

You don’t stick to the plan

Only a few things in life follow a strict plan. Now and then, it is necessary to deviate and make adjustments. And that’s a good thing. You wouldn’t be able to react to changing circumstances without this flexibility.

Of course, this also applies to your projects. Gone are the days of rigid specifications that were adhered to even when it was clear to everyone involved that the wrong thing was being built.

Agile and iterative models in combination with a designer mindset have become established in many industries. But agility can also be misunderstood. In my experience, it is often interpreted as a free pass for constant changes or even shifts in direction. As a result, a project can lose its clear vision and become increasingly confusing, making it difficult to understand why decisions were made.

In addition, such changes may make it impossible to meet the original schedule. Nevertheless, teams usually try to stick to it, which leads to faster, more impulsive decisions and thus, usually, to lower quality.

«Ecce Homo» restoration by Cecilia Gimenez (source: bytes daily)

Certainly, I am in favor of constantly questioning things or even quickly throwing them overboard if they are obviously a dead end. But please, do it systematically.

The very first thing for me is to adhere to mutually defined intervals. You can also call them sprints. Whether a sprint should last one, two or three weeks depends on the complexity of your project and the makeup of your project team. In sprints of less than a week (the Design Sprint being an exception), there is too little time for reflection and impulsiveness grows. With sprints of more than three weeks, the process becomes too sluggish and blocks rapid change.

Should doubts, opinions or objections arise during a sprint, take them into the next one instead of intervening in the current one. This creates time to validate them first.

You set the wrong focus

As soon as the most important WHY and HOW questions have been answered, the next step is to find the right focus and define the WHAT. This may only be a vague vision at the beginning. But at some point, it must be clear which properties and parts your product must have at a minimum, and which it should not.

In this context, the Minimum Viable Product (MVP) is often brought to the table as a remedy. In my opinion, this is very suitable as long as it really is an MVP. Quite often I see the focus of an MVP determined by subjective discussions in the project team rather than by reliable research. That can quickly lead to excessive scope and a lack of resources for the really urgent and important things.

I have observed this effect most strongly during relaunches of existing products. Past experience plays a big role there. Things are often regarded as given and immutable when perhaps they shouldn’t be. Such tunnel vision can prevent actual improvements or even innovations.

«Mr. Babinet almost missed the comet» by Honoré Daumier (source: DigitalCommonwealth)

Every decision to be made is based on know-how and experience. If these are missing, you either have to obtain them first or decide speculatively. Of course, there are phases in a project where speculation is definitely desired, for instance to broaden the focus in an ideation session or to try out something “lean” quickly. But as your project progresses, speculation becomes increasingly risky.

Two methods have helped me in the past to deal with that situation and find the right focus: “Assumption mapping” and “Hypothesis-driven design”.

Assumption mapping is about determining how much uncertainty there is about a statement and what impact it might have. These statements can range from “I think we should increase the size of the logo.” to “We should fundamentally change the pricing model of our product.” If all of them are collected, written down, and placed on an assumption map, the project team can work together to determine the focus. Anything with a high degree of both uncertainty and risk is set aside temporarily.

Simple matrix for assumption mapping we use at mutoco

Of course, a lot of potential innovation could be hidden in the uncertain, high-risk area. However, before you tackle such things, you should do everything you can to move them out of this quadrant of the matrix. If you can’t, ask yourself whether the statement might be irrelevant and should not be pursued at all.
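If you think in code, the mechanics of such a map can be summed up in a small sketch. The type names, scoring scale and example entries below are purely illustrative and not part of any particular tool:

```typescript
// Minimal sketch of an assumption map: each statement gets a rough
// uncertainty and risk score (1 = low, 5 = high). The quadrant where
// both scores are high is parked until it has been de-risked.
type Assumption = {
  statement: string;
  uncertainty: 1 | 2 | 3 | 4 | 5; // how unsure are we that this is true?
  risk: 1 | 2 | 3 | 4 | 5;        // how badly does it hurt if we are wrong?
};

const assumptions: Assumption[] = [
  { statement: "I think we should increase the size of the logo", uncertainty: 2, risk: 1 },
  { statement: "We should fundamentally change the pricing model of our product", uncertainty: 5, risk: 5 },
];

// Highly uncertain and high-risk statements are set aside until they
// have been validated (or discarded as irrelevant).
const validateFirst = assumptions.filter(a => a.uncertainty >= 4 && a.risk >= 4);
const workOnNow = assumptions.filter(a => a.uncertainty < 4 || a.risk < 4);

console.log("Validate (or drop) first:", validateFirst.map(a => a.statement));
console.log("Work on now:", workOnNow.map(a => a.statement));
```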

To validate a statement, it makes sense to formulate hypotheses. In this basic scientific approach, you pin down an assumption as precisely as possible and then confirm or refute it through testing, interviews and other research methods.

In this way, for example, the following statement…

“I think we should fundamentally change the pricing model of our product.”

…is transformed into the hypothesis:

“We believe that our main target group perceives our product as too expensive.”

After this hypothesis has then been tested, a validated statement may be formulated from it in turn.

“In a quantitative survey with 100 participants and qualitative interviews with 7 of our most important customers, we learned that our pricing is perceived as justified and fair. Therefore, we see no need for action to change it at the moment.”

These validated statements can then be placed on the matrix shown above and you can adjust your focus if necessary.

Altogether, it’s about identifying uncertainties and validating hypotheses through testing. And that leads me to the next point.

You do not test

Good products are tested extensively and continuously. Whether it is to identify needs, ensure usability, compare variants, or monitor success: all of that and more can be tested thoroughly.

Although it should seem obvious why such tests are fundamentally important, in reality they are often omitted. This may be for reasons of time or budget, doubts about their usefulness, overestimating one’s own experience, or a combination of all of the above.

«Test and observe» — John Wesley learns from an experiment (source: alamy.com)

Without constant testing with the people you want to reach, hypotheses remain hypothetical, and you never find out, or find out too late, whether your work resonates positively or negatively with your audience. That could cause your entire project to fail.

In my experience, testing is often associated with effort that some would like to postpone or even avoid. The assumption is that constant testing could slow down the process. I can understand this attitude to some extent, but it is a fallacy that will catch up with you in a later phase of the project.

Qualitative interviews often provide the most valuable information and insights. Preparing, conducting and evaluating them, however, is time-consuming, so this form of testing is not always suitable. If you need to move quickly or want additional assurance, various tools offer different testing solutions. For instance, Maze and Useberry can be used to test a Figma prototype in a guided manner; you’ll get flowcharts and heatmaps to evaluate the results. If you want to validate conceptual hypotheses, you can run surveys via Optimalworkshop, Typeform or Google Forms. And if you lack contacts and followers who could serve as test subjects, a company like Testingtime (among many others) can help you with recruitment.

There are dozens of other tools and methods that can help you with research and analysis. A look at the UX Research Map from User Interviews is definitely worthwhile.

You and your project team need to determine the scope and interval of testing. Just do everything you can to reduce doubts and achieve an improved result with each iteration.

You do not iterate

Post-launch is pre-launch. Digital products in particular are never finished; there is always room for improvement. As previously mentioned, an iterative project model has become the standard. But after the big launch, these processes often dry up. Time and time again, I witness a launch being perceived as a completion, or as a transition into waiting for the return on investment before moving on with further improvements.

On the one hand, this is understandable: infinite resources are rarely available. On the other, it is problematic because the momentum to push the product is lost. In the worst case, your product stays merely “okay” for too long, and your audience turns away to something “more awesome” in the meantime.

To avoid this, at least one iteration round should be planned beyond the launch. If this is part of the schedule and the budget right from the start, the necessity will hardly be questioned later on.

In addition, don’t rely solely on common key performance indicators to measure the success of a published product. Metrics such as those described in the Google H.E.A.R.T framework can help track the quality in terms of user experience and uncover previously unseen issues. Such insights will help you determine the focus and urgency of the next iteration.

A typical H.E.A.R.T template we use at mutoco
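To give a rough idea of what such a template captures: the HEART framework pairs the categories Happiness, Engagement, Adoption, Retention and Task success with goals, signals and metrics. The sketch below only illustrates that structure; the example goal, signals and metrics are hypothetical and not taken from our actual template:

```typescript
// Sketch of a HEART-style template: five UX categories, each described
// by a goal, the signals that indicate progress, and the metrics to track.
type HeartCategory = "Happiness" | "Engagement" | "Adoption" | "Retention" | "Task success";

type HeartRow = {
  category: HeartCategory;
  goal: string;      // what we want users to experience or achieve
  signals: string[]; // observable behaviors or attitudes that indicate it
  metrics: string[]; // concrete numbers derived from those signals
};

// Hypothetical example row; a real template would fill in all five categories.
const heartTemplate: HeartRow[] = [
  {
    category: "Task success",
    goal: "Users complete the checkout without getting stuck",
    signals: ["Completed checkouts", "Support requests related to checkout"],
    metrics: ["Checkout completion rate", "Average time to complete checkout"],
  },
];

console.log(heartTemplate);
```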

You won’t let go

Finding yourself on the wrong track is uncomfortable, and it’s hard to admit. Yet being aware of it can be a great opportunity to set things right.

If you suddenly have doubts about your idea or product, many things can be verified with the methods mentioned above. But you can also be on the wrong track personally: your output may be fine, but your commitment is zero. At this point, you should consider quitting the project and dedicating yourself to something new.

I have observed this several times in myself, especially with side projects. The initial enthusiasm flattened out and then the project just floundered. Once I realize that, I try to question my role: am I blocking the project, and would it perhaps be better if someone with more drive took over?

When it comes to programming, open-sourcing the code could be a solution. Even if no one is actively developing your project anymore, the public code can help others build something new. For product ideas and designs, publishing on social media is helpful. This way you share interesting content and increase the chances of reaching the right people for your project.

For contract work, this is of course more challenging. But even here, it does nothing for the team or the product to hold on to a role that you don’t really want to fill, or don’t want to fill anymore. There is no ideal method for this and certainly no right or wrong way to do it. It’s a matter of listening to your gut feeling and being honest about whether your waning commitment could negatively influence or even ruin the product.

«The youngest son’s farewell» by Adolphe Tidemand (source: alamy.com)

What about you?

Now that I’ve shared my perspective, I’m curious to hear your opinion on this topic. What do you consider the biggest problems that can cause a product to fail, and how do you prevent them? Leave a comment or send a message to kso@mutoco.ch.
