Less is more. Eliminate overhead, focus on value.
Agile product development is all about deltas. Each small increment is a delta in functionality. Each colleague added to the team is a delta in capacity. Each lesson learned is a delta in our understanding of the market. And each of these is a delta in our overall planning and roadmap.
Today, I would like to highlight one particularly interesting special case of this. I will outline what it takes to increase the value we deliver (a delta greater than zero) while reducing our costs (a delta smaller than zero) at the same time. This is a counterintuitive but very desirable outcome. And it is best understood with a visual model. Here we go…
The model
This model is a very simplistic view of product development: each feature has an associated effort — first of all for development — and a value. Both are hard to measure and even harder to predict, which is what makes it so difficult to implement the right things in the right order. Note that even with good product management, we cannot assume to know the real figures before doing the work — so this remains a qualitative model. Still, looking at the levers we can pull and their impact can be very helpful.
So here we are, with a bunch of potential features, some of them implemented, some of them dropped. Based on these decisions, we can visualize the total effort as well as the total customer value of all implemented ones. In all upcoming states of the diagram, the black lines will remain unchanged to allow comparison with this initial “benchmark state”.
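To make the model concrete, here is a minimal sketch in Python. All feature names and numbers are made up for illustration; in reality, effort and value are exactly the figures we cannot know upfront.

```python
# A toy version of the model: every candidate feature has an (assumed) effort
# and value, and only the implemented ones contribute to the totals.
features = [
    # (name, effort, value, implemented)
    ("full-text search", 5, 8, True),
    ("dark mode",        3, 2, False),  # dropped
    ("one-click order",  8, 13, True),
    ("CSV export",       2, 1, False),  # dropped
]

total_effort = sum(effort for _, effort, _, done in features if done)
total_value  = sum(value for _, _, value, done in features if done)

print(f"benchmark state: effort={total_effort}, value={total_value}")
# benchmark state: effort=13, value=21
```

Everything that follows is about moving these two totals in opposite directions.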
Scaling up
Almost everyone wants to be faster, to offer more functionality, to outperform their competitors — to deliver more customer value. In that case, a common first choice is to scale up what we currently do: more ideas, more implemented features, more customer value — but of course this comes with more effort. It might be a totally sane decision. But there might also be alternatives.
Descaling
I am convinced that less can be more and that we should consider other ways of leveraging customer value before we simply scale up what we currently do. So let us revert the scaling and explore some alternatives.
LESS Operations
Not all the work we do affects customer value. Yes, we ideate, design, implement, and test features, which is essential. But we also deploy them to different environments, we maintain these environments, and we do technical groundwork for every new service; in many organizations, all of this makes up a significant proportion of the total effort.
Well-managed cloud infrastructure, with my personal first choice being AWS, takes away large parts of this burden. Fully automated deployments just work, rollbacks are easy, new resources are at your fingertips at any time, and physical hardware will never be an issue. Yes, none of this alone increases the value we deliver — but it reduces effort.
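As an illustration of how little ceremony this can be, here is a minimal sketch of a deployable service defined with the AWS CDK in Python (v2 style). The stack name, handler, and asset path are assumptions made up for this example; `cdk deploy` then provisions everything, and `cdk destroy` cleans it up again.

```python
# Minimal infrastructure-as-code sketch: one Lambda-backed service, fully
# deployable and disposable without touching any physical hardware.
from aws_cdk import App, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class DemoServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "Handler",
            runtime=_lambda.Runtime.PYTHON_3_9,  # managed runtime, no servers
            handler="app.handler",               # assumed module and function
            code=_lambda.Code.from_asset("src"), # assumed source directory
        )

app = App()
DemoServiceStack(app, "demo-service")
app.synth()
```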
LESS Effort without Impact
Besides operations, there are other things in software development that somehow seem to steal our time over and over again. We need to do basic and sometimes fancy UI designs, we have to consider internationalization, we must authenticate our users. We also need to integrate our service logic and connect external data sources. And that is just to name a few.
By knowing and using suitable tools and frameworks for each of these challenges, we can reduce the effort they require. We can use standard solutions for standard problems — again, AWS has a lot to offer here — and focus on the parts that are really unique to our domain. We can eliminate overhead and reduce effort without impact while delivering actual customer value instead. Last but not least: we stop when a solution is good enough for the problem at hand and do not pursue perfection.
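Internationalization is a good example of a standard problem with a standard solution. The following sketch uses the gettext module from Python's standard library instead of a home-grown translation layer; the domain name and locale directory are assumptions for illustration.

```python
# Standard solution for a standard problem: translations via gettext,
# no custom framework required.
import gettext

translation = gettext.translation(
    "messages",           # assumed domain, i.e. the name of the .mo files
    localedir="locale",   # expects e.g. locale/de/LC_MESSAGES/messages.mo
    languages=["de"],
    fallback=True,        # fall back to the untranslated string if missing
)
_ = translation.gettext

print(_("Welcome back!"))  # prints the German translation if one exists
```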
Scaling up again
With these two measures combined, we can push the scaling button once more. Again, we get more implemented features and more value. But suddenly, our total effort is comparable to the original one. We might even be able to pull this off within the same period of time, without hiring a single new engineer.
Nevertheless, there is another lever we have not pulled yet, so let us descale again and take another look starting from the previous state of the model.
LESS Waste
We have already seen how LESS operations and LESS effort without impact enable us to achieve the same outcomes with less effort. Nowhere are these benefits greater than when we run experiments. An experiment starts with a hypothesis, which is validated or rejected by building a minimalistic version of what we have in mind and collecting feedback, preferably by measuring actual user behavior.
These experiments provide invaluable insights that enable us to prioritize and work on the right things. But at least in a more traditional way of working, experiments are incredibly expensive: they often involve completely new services that take a lot of effort to get up and running in the first place, additional environments for A/B testing, and so on. In a nutshell, we have to deal with all the overhead of a full-fledged service, but with a minimal set of functionality. This is why eliminating overhead is a real game-changer here.
So with the optimizations already in place, we can afford to run many more experiments and collect fast feedback on a lot of things. We can try more, learn more, and hence reduce the uncertainty about the real value and effort of each feature. With that, we can make better prioritization choices and reduce waste. It is easy to see how our delivered value increases while our effort decreases in comparison to the original picture.
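To make the feedback loop tangible, here is a minimal sketch of evaluating such an experiment: a two-proportion z-test on entirely made-up conversion counts for a control group A and a variant B that includes the minimalistic new feature.

```python
import math

def two_proportion_z(conv_a: int, users_a: int,
                     conv_b: int, users_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    return (conv_b / users_b - conv_a / users_a) / se

# Hypothetical numbers: group B saw the new feature.
z = two_proportion_z(conv_a=120, users_a=2400, conv_b=156, users_b=2350)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

A few lines like these, applied to real measurements, are often all it takes to decide whether a hypothesis deserves further investment.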
By the way, the effect we just described is why Jeff Sutherland, co-creator of Scrum, titled his flagship book “Scrum: The Art of Doing Twice the Work in Half the Time”. This, and not a specific process blueprint, is the essence of any agile product development effort.
The final scaling
If we want to deliver more at this point, we can of course still scale up. But when comparing the final outcome with only scaling up and changing nothing else, there is a massive difference.
Here is where we end up in our model after applying LESS operations, LESS effort without impact, as well as LESS waste and then scaling it up.
For easy comparison, here is the very same model again after only the initial scaling, without the other optimizations.
Final thoughts
Of course, this is a simplistic, somewhat exaggerated, and purely qualitative model. But it still helps a lot in understanding where our initial intuitions tend to be wrong. Many things are not strictly correlated, and often simply doing more of the same is not the smartest way to achieve the greatest possible value.
Interested in seeing how we apply these principles in practice? Feel free to reach out or join our webinar together with AWS on 24.02.2021! Excited to experiment and optimize with us and our clients? Your application is just a fingertip away! In any case, find our website and get in touch!
This blog post is published by Comsysto Reply GmbH.