# The 5 Pillars of Complex Problem Solving with Code

## A detailed walkthrough of the essential steps

Oct 1, 2019 · 11 min read

After spending hundreds of hours helping people take their first steps with code, I developed a five-part model for problem solving. In fact, it’s even simpler than it sounds, as it really breaks down to three core skills and two processes.

In this piece I’ll explain each of them and how they interact.

# Problem Decomposition

“How do you eat an elephant?”… “Easy — One bite at a time”

— Anon

Elephants are big. Trying to swallow one whole would lead to some pretty aggressive indigestion. Some might even say it’s impossible. Eating an elephant is a good metaphor for solving a complex problem — you can’t solve it all at once and it will hurt to try!

The only way to eat elephants and solve complex problems is to break them down into smaller chunks, until you’re faced with a bunch of chunks that you can swallow. Yes, that might equate to a mountain of chunks — but each individual chunk is small enough to swallow. As long as you take it one chunk at a time and keep moving, you’ll get to the end eventually.

I like to think of problem decomposition as a hammer, or sometimes a scalpel. Struggling to work out how to solve a problem? Smash it into two or three smaller problems. Are any of these new, smaller problems still too difficult to get your head round? Take a scalpel to them and slice them up further.

This all seems pretty obvious, but decomposing problems is an art form. Sometimes it can feel like you’ve broken a problem up, but in fact you’ve just reworded it and avoided the problem altogether.

An example might help.

Two of your colleagues, Mr Lazybones and Ms Smartypants, are asked to come up with a process for deciding who locks up the office each evening. All you’re told is that it should be random.

Mr Lazybones is quick to offer his solution: “Easy”, he says, “every night, at six pm, we’ll randomly decide whose turn it is to close the office.” The rest of your colleagues applaud lightly and look around, somewhat confused.

Ms Smartypants, however, is a little slower to respond. “We have 20 of us in the office. My process will allocate everyone in the company a number, according to the alphabetical order of their last name. Each day at midday, we’ll roll a 20 sided die, and whoever’s number comes up is closing that night”.

Do you see the difference between the two solutions? We’ll go on to discuss better solutions to this problem later, but for now, try to see why Mr Lazybones hasn’t really decomposed the problem and why Ms Smartypants did.

By saying ‘we’ll randomly decide’, Mr Lazybones has in fact pushed the problem on to someone else in the future. The solution they’re looking for is how the random allocation will be done. Deciding to do it every night at six pm doesn’t address the problem. On the first evening that this new process is put into place, one of the team will inevitably have to restate the original problem: “So, how do we want to randomly allocate who’s closing?”

In contrast, Ms Smartypants recognised that the real problem was how ‘random allocation’ would be done. They deconstructed the problem into a few steps. Admittedly, they could probably have come up with a better solution (we’ll come to that…), but for now, they actually offered a real solution by decomposing the problem:

• Assign everyone in the office a number, according to the alphabetical order of their last name.
• Each day at midday, roll a 20-sided die.
• Whoever’s number comes up closes that night.

Simple, right?

# Algorithms

There are many definitions, but for simplicity’s sake, let’s define an algorithm as:

“A process or system of rules to be followed in a problem solving operation”

You can think of algorithms like cooking recipes — a sequence of actions and steps that take you from the problem to the solution. Just as with those little maze games we all played as kids, it’s usually best to start at the goal and work backwards.

Algorithms consist of all the stuff you’ve tried to learn on Codecademy or FreeCodeCamp — control flow, comparison operators, variables, methods/functions, collections/arrays, iteration and more. The key thing to note is — these core concepts and processes are universal. Sure, there’s one way to do a for loop in Ruby and another way to do it in Java, and of course if statements are written differently in JavaScript than they are in Assembly, but the fundamental concepts are the same.
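Those fundamentals are small enough to see in a few lines. Here’s a quick sketch in Ruby (the language I start learners with) that puts a variable, a collection, iteration and control flow to work together — the numbers are just made up for illustration:

```ruby
numbers = [3, 1, 4, 1, 5]   # a collection/array
total = 0                   # a variable

numbers.each do |n|         # iteration
  if n.odd?                 # control flow + a comparison
    total += n
  end
end

puts total                  # sums the odd numbers: 3 + 1 + 1 + 5 = 10
```

Swap the syntax and this same shape exists in Java, JavaScript, Python — the concepts carry over even when the punctuation doesn’t.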

That’s why I tell my tutees to focus on these universal fundamentals. Because they’re a basis for all structured thinking, of which digital skills are a particularly useful and popular manifestation right now.

This is also (part of) the reason why most programmers say that the language you use is incidental. All the real, hard work happens in this abstract, problem-solving space, using universal concepts like control flow and iteration, as mentioned above. Turning your solution into the specific syntax of a specific language is the last mile of the marathon, and it happens when all the hard work is basically done.

To continue with the previous example, Mr Lazybones did tease out some of his algorithm — he wanted to run the code daily, and decided it would be at 6pm (who knows why). But, he failed to explain his algorithm for random allocation. Either some bright spark in the office would have mentally-modeled his suggestion and understood that it doesn’t solve the issue at hand (we’ll discuss this more later), or when the process was implemented it would have been clearly shown to be insufficient as a solution to the problem.

Ms Smartypants also used some similar elements: a daily sequence, run at a specific time each day. But critically, they offered an algorithm — assign each person in the office a number (hopefully by now you see the word assign and start thinking about variables!) according to the alphabetical ordering of their last name (hey — we know about sort methods!), then roll a die with as many sides as there are members of the team (.length coming to mind, anyone?!). The number that comes up decides who’s closing.
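In fact, that algorithm is only a few lines of Ruby. Here’s a minimal sketch — the names are invented, and in a real office you’d schedule it to run daily rather than on demand:

```ruby
team = ["Priya Anand", "Sam Brooks", "Lee Carter", "Dana Evans"]

# Assign each person a number: their position after sorting by last name.
ordered = team.sort_by { |name| name.split.last }

# Roll a die with as many sides as there are team members (.length!).
winner = ordered[rand(ordered.length)]

puts "#{winner} is closing the office tonight."
```

Notice how each step of the spoken algorithm maps directly onto a line of code — that’s what good decomposition buys you.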

I strongly encourage you to try to do your algorithmic thinking using universal concepts and not code syntax. In reality, it’s very hard to separate the two — and this is why I like to start learners with Ruby. It’s one of the most forgiving, English-looking languages out there, so you don’t get caught up with language-specific idiosyncrasies while developing your solutions in this abstract problem space.

# Diverge and Converge Ideation

Sounds fancy, huh? In reality, it’s pretty simple. It’s just a stuffy way of saying ‘coming up with lots of ideas then cutting out the ones that don’t make sense’.

You can think of the ‘diverge’ phase like the typical ‘brainstorm’ (sorry — “thoughtshower” — it is 2019 after all). At this stage you go cray cray, throwing in every idea you have, no matter how wild it might seem. It’s one of those ‘all ideas are good ideas’ kind of phases. It can be useful, if you have the time and inclination, to do this with Post-It notes, in a notebook, or even, as is very common, with simple comments in the code.

Converge is the opposite — removing options according to whether they’re viable, achievable, elegant, easy to implement, or whatever particular constraint you’re solving for.

Diverge and Converge are like the in-and-out breaths that move you between your ideas for decomposing a problem into smaller problems, and your ideas for creating algorithmic pathways to move through these sub-problems. They’re also a powerful reminder that there is no such thing as a single, “correct” solution. Rather, there are a bunch of ways to solve a problem — you just need to find one or more that are coherent, and then work out how to turn them into the syntax that the specific interpreter you’re working with requires.

Here’s a diverge on the previous solution we discussed:

• Everyone in the office will pick a tree in the local park. Every morning, whoever’s tree is closest to the spot where the office dog decides to pee — they’ll close the office.
• Everyone’s names are put into a hat, mixed up, and every morning one name will be pulled out of that hat — that person will close the office.
• Each morning, we’ll scan the Daily Mail Horoscope and debate and decide which zodiac sign is being told it’s their turn to close. If more than one person in the office shares that zodiac sign, they’ll play Paper-Rock-Scissors over who closes.

Now we have three more ideas to add to the one proposed by Ms Smartypants. That’s a nice healthy diverge.

Converging from these four, it seems pretty clear that the original suggestion (allocate numbers by last name and roll a die) and my number two suggestion (picking names from a hat) would be the easiest to implement and probably the fairest. It’s often easy to discount certain ideas instinctively, which is why, in the real world, the crazier ideas often don’t even get verbalised or discussed. That’s fine — whatever works for you.
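The names-in-a-hat idea is even shorter in code than the die-roll version. A minimal Ruby sketch, with invented names — `sample` does the “mix up and pull one out” for us:

```ruby
# Everyone's name goes in the hat.
hat = ["Priya", "Sam", "Lee", "Dana"]

# Array#sample picks one element at random — the morning draw.
closer = hat.sample

puts "#{closer} is closing the office tonight."
```

Two viable algorithms, two very different implementations — which is exactly why the converge step is a judgment call, not a maths problem.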

Even so — the two remaining ideas both seem perfectly viable, right? In fact, there is no single right answer here — we should let go of the desire to find one. This isn’t school — your goal is not to impress anyone with your ability to regurgitate the correct facts. Your goal is to develop confidence, creativity and mastery through practice. In the end, it comes down to you — which solution do you prefer? Which is the easiest to turn into code? If you decomposed your problem well, you should be able to come back and fix it if you later decide that you made the wrong choice.

That’s really the core of our theory on problem solving. We only have two ‘spaces’ or ‘processes’ left to discuss.

# Mental Modelling

Most mental modelling happens in your head. If you’ve been trying to learn any programming before today, it’s more than likely that it’s been happening entirely inside your head, without you even realising it was happening at all.

There’s nothing inherently wrong with modelling in your head, but I for one can’t hold more than a few things in my head at once without it exploding. As a result, especially as the problems we’re facing become more complex, it’s good to get the models out of your head and onto paper. This will make it a lot easier to do the problem decomposition, algorithmic solution design and diverge-converge ideation that we’ve already discussed.

Think of mental modelling as symbolised by the whiteboard, or maybe even a sandpit. It’s the space in which those three core problem-solving processes happen. It’s where you break complex problems down, try out lots of algorithmic solutions and move towards a solution that works for you. It’s the ‘space in the middle’ where these three concepts interrelate.

Some people like to jot down their ideas for models in a notebook, some people like to draw system diagrams or bubble diagrams or CRC Cards, some people just like to talk out loud or throw comments into their code. However you do it, you should do it consciously, actively, and scientifically (more on this at the end!)

The last point I’ll add is this: you’ll sometimes hear it claimed that around half of people can’t form mental models. Whatever the truth of that figure, the pattern behind it is real. Everyone finds complex problem solving hard at the start, but with practice some of you will get the hang of it, while some of you will stay constantly frustrated by it and soon start to hate it. If that’s you, that’s cool — at least you can close the door on this coding malarkey and start looking for other hobbies that interest you.

## Build-Measure-Learn loop

The final pillar in our complex problem solving Parthenon is the core process you follow throughout your problem solving adventures. It comes in most handy when you start trying to turn your abstract solution to a problem (which you will have developed in the previous four phases) into working code. This build-measure-learn loop goes round and round, forever. Figuratively it’s much like the in-out breathing we discussed earlier with diverge-converge ideation.

Put simply, Build-Measure-Learn is a sequence of steps you take as you move from the unknown to the known in complex problem solving. It’s a basic set of steps to help you find your way through the fog of uncertainty. It can almost be seen as a compass that helps you to ensure you’re on the right track.

You simply try something — ideally the smallest thing you possibly can — then measure its effect, learn as much as you can from it, and then either redo what you just built (if the result disagrees with your expectation) or move on to the next step (if it did what you expected). It’s the scientific method in action, translated into terms that programmers like to use.

Another way to look at it would be to use the scientific method itself as your guide:

Trying to solve problems without using the scientific method is like playing darts wearing a blindfold. At the heart of the scientific method is the loop of Hypothesis-Prediction-Testing-New Hypothesis. For programmers it’s even easier than this, because the ‘testing’ part is almost always just ‘run the code and see what happens’. All you really need to do is make sure that whenever you write a line of code, you have a hypothesis (or prediction), before you execute the code, of what that code will do after you run it.

Hypothesis…Prediction…Testing == Build…Measure…Learn

This is critical — and you’ve probably already picked up on it from previous essays, as I discuss it a lot — you should always have a hypothesis when you write any code. Always. If you don’t, you’re not being a scientist. You’re doing what some call spray and pray coding, which, as the name suggests, is not a compliment.

It’s a simple but powerful process. Every time you go to write a piece of code, ask yourself this — what’s my expectation? What do I actually think this code will do? Don’t allow yourself to run it until you’ve made a clear prediction. Then, when you run the code — were you right? If you were, you understand this coding concept and can move on. It didn’t do what you expected? Now’s the time to pause and set about finding where the gap in your knowledge was, by forming a new hypothesis on which part of the code is wrong (tip: as discussed previously: error messages are a great place to get ideas).
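Here’s what that habit looks like in practice, in Ruby. Write the hypothesis down as a comment before you run anything (this particular prediction happens to be correct: `sort` returns a new array and leaves the original alone):

```ruby
names = ["smartypants", "lazybones"]

# Hypothesis: `sort` returns a NEW, alphabetised array
# and does NOT mutate `names` itself.
sorted = names.sort

puts sorted.inspect   # prediction: ["lazybones", "smartypants"]
puts names.inspect    # prediction: ["smartypants", "lazybones"] (unchanged)
```

If the output had surprised you — say, `names` had been reordered too — that gap between prediction and result is exactly where the learning happens, and exactly when to form a new hypothesis (perhaps about `sort` versus `sort!`).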

I’ll end this with a final exhortation: if you want to accelerate your attempts to learn to code, stop memorising syntax. Focus on your ability to deconstruct a problem, your algorithmic solution-finding skills, and your ability to apply the scientific method when programming. Then keep practicing until all this comes as naturally as breathing.

Good luck!
