Jumping down complexity’s rabbit hole

Dolev Pomeranz
Dec 24, 2022


Why should I read this?

To improve your understanding of complexity. We will portray some fundamental characteristics of this abstract concept, illustrating them with examples and thought experiments.

This is the second part in a series on complexity. The first part is The hidden cost of complexity, which focuses on one characteristic: the deadly cost of complexity. Now we will address some additional ones.

Characterizing complexity

Complexity is such an elusive and abstract concept. Let’s continue to uncover it, shedding light on its properties. Time to jump into this rabbit hole.

Jumping down complexity’s rabbit hole! (I created this image using the Stable Diffusion Artificial Intelligence model. Using the prompt: “A rabbit jumping into a rabbit hole in space”.)

Necessary complexity and unnecessary complexity

Complexity cannot be avoided entirely. Solving any problem requires some amount of complexity. This pushes us to define two concepts:

  • Necessary complexity: the bare minimum of complexity required from you to solve a problem.
  • Unnecessary complexity: any complexity beyond the necessary complexity that is not actually required from you to solve the problem.

Let’s examine these concepts using a concrete example: autonomous driving. Say you want to build a self-driving car. That car needs sensors to collect data on the world around it. Human drivers mainly use their eyes for this, but self-driving cars typically use radars and lidars (light detection and ranging) in addition to cameras. In the video linked below, you can see the approach of Andrej Karpathy (former Sr. Director of AI at Tesla) toward those additional sensors. He admits there is value in using them, but argues that the problem can be solved without them and that the cost of adding them is too high compared to the gain they provide. In our terminology, Tesla’s approach classifies cameras as part of the necessary complexity required to solve the problem, while the other sensors add unnecessary complexity. Are they right? It’s hard to tell, as correctly classifying necessary and unnecessary complexity is not trivial, and other car vendors may well disagree with this approach. But Karpathy makes an even stronger claim: he believes it’s only a matter of time until the other vendors realize this and abandon the additional sensors.

Andrej Karpathy’s (former Sr. Director of AI at Tesla) approach toward additional sensors

In general, we should try to avoid adding unnecessary complexity. But that might require us to challenge our assumptions and even experiment with what can be removed without compromising our ability to provide a solution.
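To make the distinction concrete in code, here is a toy Python sketch. The word-counting problem and all function names are illustrative, not from the article: both functions solve the same problem, but one of them carries complexity that is unnecessary for solving it.

```python
from collections import Counter

# Problem: count how often each word appears in a text.

# An over-engineered version: a hand-rolled registry class adds
# unnecessary complexity that is not required to solve the problem.
class WordFrequencyRegistry:
    def __init__(self):
        self._counts = {}

    def register(self, word):
        self._counts[word] = self._counts.get(word, 0) + 1

    def report(self):
        return dict(self._counts)

def count_words_complex(text):
    registry = WordFrequencyRegistry()
    for word in text.split():
        registry.register(word)
    return registry.report()

# The necessary complexity is much smaller: the standard library
# already hides the bookkeeping inside Counter.
def count_words_simple(text):
    return dict(Counter(text.split()))

text = "to be or not to be"
assert count_words_complex(text) == count_words_simple(text)
print(count_words_simple(text))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Both produce identical results; removing the registry class loses nothing, which is exactly the test suggested above: experiment with what can be removed without compromising the solution.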

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Antoine de Saint-Exupéry

Complexity is not static

Let’s do a thought experiment. Consider the following two problems:

  1. Digitally sharing an image with billions of people (via their mobile devices).
  2. Physically putting an image on the moon.

Which problem is more complex to solve? Instead of actually designing a solution and evaluating its complexity, you can estimate complexity by the minimal budget required to solve each problem. For example, the SpaceIL Beresheet lunar lander project had a budget of about $90M in 2019, so paying social media influencers to publish an image should be much cheaper. But what if I ask the same question in 1975? Back then, it was possible to send someone to the moon. Of course, that would have been harder than it is today, yet digitally sharing images to devices in the palms of billions of users seemed like science fiction. So, in contrast to today, putting an image on the moon in 1975 was probably less complex than digitally sharing it with billions of personal devices. What about 2075? Maybe by then there will be a lunar base. Assuming that base has a printer, then, as in 1975, putting the image on the moon might be simpler than digitally sharing it with billions of users.

A photo of a rabbit on the surface of the moon. (I created this image using the Stable Diffusion Artificial Intelligence model. Using the prompt: “A page with a photograph of a rabbit is placed on the moon surface”.)

Some observations from this thought experiment:

  • Things become simpler over time: as a rule of thumb, not as a law of physics. For any given problem, you should expect the necessary complexity required to solve it to decrease over time. The danger is being blinded by static assumptions: deciding at some point that something is too complicated for us, and keeping that assumption even after it stops being true, simply because we rarely revise assumptions. On the other hand, you don’t have to challenge your assumptions continuously; doing so at specific checkpoints is good enough. For example, the design phase of a new development is a great time to evaluate whether something inside the design scope should be simplified.
  • The relative complexity between problems can shift: the rate of simplification is not constant across problems and solutions. A problem that was once more complex to solve than another can become the simpler one in the future. Thus, the decision of which problem to tackle, or even which solution to employ, may differ depending on when you make it.

Let’s explore this with a real example. Many organizations build internal tools. These are usually second-class citizens and won’t get the same resources as customer-facing products. We had the same problem, and internal users often felt the starvation that such resource allocation creates. We all assumed that an internal tool had to be either a standalone tool (e.g., a dedicated web application) or a feature in the main product visible only to internal users. Then one talented group manager decided to challenge this assumption and started a pet project: building an internal tool as a Slack bot, a relatively new concept at the time. By doing so, he exploited the dynamic nature of complexity and found a much simpler solution to the problem. With the Slack bot approach, both user management and the development of a dedicated user interface became unnecessary complexity. The product complexity dropped as well, since onboarding and adoption became much simpler. He quickly developed a bot that was easily accessible to other employees, leveraging Slack’s capabilities. He stood on the shoulders of the Slack giant.

“If I have seen further it is by standing on the shoulders of Giants.” Sir Isaac Newton

A giant rabbit. Not sure it is wise to stand on its shoulders. (I created this image using the Stable Diffusion Artificial Intelligence model. Using the prompt: “A giant white rabbit”.)

The result was impressive adoption by internal users, which led to the addition of many other features. For more details on the Slack bot, check out this blog post.

Usage of the Slack bot. In one year it was used over 38,000 times. The different coloring represents different features of this bot.

A note on terminology

Fred Brooks coined the terms accidental complexity and essential complexity in his famous paper “No Silver Bullet — Essence and Accident in Software Engineering”, following Aristotle’s definitions of the accidental and essential properties of a thing: e.g., whether a chair is built from wood or metal is accidental to its essence as a chair. So why define two new terms? This is not a renaming; the terms are simply different. The subtle differences are:

  1. Perspective: the new terms describe the complexity required from you or your team to solve a problem, not the complexity of the entire solution, mainly because that is what really matters to you. Using garbage collection to manage memory, cloud vendors for hardware, or even Slack for building internal tools offloads responsibility, and thus complexity, from you. It makes some of the complexity unnecessary for providing a solution.
  2. Dynamicness: as seen above, problems become simpler to solve over time. This means that today’s necessary complexity can become unnecessary tomorrow. You won’t need to buy or build a rocket to the moon just to place a photo on its surface if people are already living there.

What’s next?

This and the previous parts are not that actionable. However, in the third part of the complexity series, we will discuss ways to actively combat complexity, for example:

  • Considering it in our decision-making process
  • Continuously and proactively engaging with it
  • Educating for simplicity

Conclusion

Summary

Being able to identify necessary complexity and unnecessary complexity will help you understand what should be avoided or even removed. It starts with awareness of these types of complexity and making them part of your professional language.

When we provide a solution to a problem, we build on top of things created by others. As new ideas and products constantly emerge, they hide more of the complexity from us, allowing us to simplify our solutions. Some people see these emerging simplification paths first and gain an advantage over others. Being open to questioning your assumptions can help you remove the blind spots that prevent you from seeing them as well.

Acknowledgments

Many thanks to Pol Miro Omella, Yanai Ron, Nir Hemed, Nimrod Shai, Mark Cook, Yael Lubratzki, Assaf Pinhasi, and Ori Feldstein for their insightful comments.
