Solve the problem before coding it up
When I was preparing for technical interviews, I was lucky to get some help from friends who had experience with programming contests. One of the valuable lessons I learned was about solving the problem before coding it up: I thought I was already doing that, but it turned out I was doing it wrong.

There are a lot of good resources about technical interview preparation, and some of them suggest performing the interview in five steps:
- Ask questions to understand the problem
- Create a simple but illustrative example
- Reason out loud and develop an optimal solution
- Implement the solution: write code
- Test code
The sequence helps to structure the solution process and make gradual progress covering all of the important parts: the optimal solution, its complexity, the code itself, and testing. I was using this sequence and it was working well, but during a mock interview with one of those friends I got the following feedback:
When you are outlining the solution, outline the idea, not the code you are going to write. For instance, don't say "I would have two for loops with indexes i and j, and I'll compare arr[i] with arr[j]"; say "Let's look at every pair of numbers in the array".
I tried to follow the advice when solving the next problem and found it quite difficult: it turned out that my skill of looking at the algorithm separately from the code was not good enough.
My friend explained that while a technical interview may seem to be about coding, it is actually more about problem-solving, which is a completely separate skill. First the algorithm should be developed, and then the code follows naturally as its derivative. When doing both things at the same time, you could run out of time explaining implementation details that turn out to be irrelevant because the approach is not optimal. Or get lost in details. Or come up with an algorithm that "should work" without being sure that it does.
Here is an example.
The problem
Let's consider a relatively simple problem: given an array, output another array where the number at index `i` is the product of all numbers in the original array except the number at index `i`. So if our input array is `[1, 2, 3, 4, 5]`, then the answer should be `[2*3*4*5, 1*3*4*5, 1*2*4*5, 1*2*3*5, 1*2*3*4]`, which is `[120, 60, 40, 30, 24]`.

The solution which is too close to code
There is a trivial solution: create a variable `product = 1` and then multiply it by each number in the original array. Then the numbers in the result array would be `result[i] = product / original[i]`.

Simple enough, except there is a catch: what if one of the numbers is `0`? Then our program would crash because of the division by `0`. Not good, but it's easy to fix: while computing the product, skip the `0`, and then while computing results write `0` everywhere except at the index `i` where `original[i] == 0`. There the result would be the product that we computed.

The crash is fixed, but there is a bug now: if there are multiple zeroes, then all of the elements of the resulting array should be zeroes, but our solution would produce a different result. To fix it, we should skip only one `0` while computing the total product (and still skip the division by `0`).
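The division-based approach with the zero handling described above can be sketched in Python roughly as follows (the function name and the use of integer division are my assumptions, not from the original text):

```python
def product_except_self_division(original):
    # With two or more zeroes, every result element is zero.
    zeros = original.count(0)
    if zeros > 1:
        return [0] * len(original)

    # Compute the product of all elements, skipping the single zero if any.
    product = 1
    for x in original:
        if x != 0:
            product *= x

    if zeros == 1:
        # Only the position of the zero gets the product; the rest are zero.
        return [product if x == 0 else 0 for x in original]

    # No zeroes: dividing is safe.
    return [product // x for x in original]
```

For `[1, 2, 3, 4, 5]` this returns `[120, 60, 40, 30, 24]`, and for `[0, 2, 3, 4, 0]` it returns all zeroes.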
Let's make the problem harder: what if we can't use division? Then, when doing the multiplication, we have to avoid multiplying by `original[i]`. It's trivial to do so when computing each number independently, but that would be `O(n^2)` complexity, which is not good enough.
The key idea here is that we could store the multiplication results as we go in a new array. For example, if we start with `[1, 2, 3, 4, 5]`, we can keep the current product in `product` and store its value in a new array on each step: we start with `1`, then multiply it by `1`, then multiply it by `2`, which gives `2` (store it), then multiply by `3`, which gives `6` (store it), and so on. Then we do the exact same thing but in the opposite direction: we start with `5`, then multiply it by `4`, which gives `20`, then multiply it by `3`, which gives `60`, and so on. Then, if we multiply `result[i]` by the product at each step, we get our answer.
So the algorithm is:
- Create a new array `result` of the same size as `original`
- Create a variable `product` and set it equal to `1`
- Run a `for` loop with `i` going from `0` to the size of `original`, and on each step: multiply `product` by `original[i]`, then set `result[i]` to `product`
- Reset `product` back to `1`
- Run another `for` loop with `i` going from the size of `original` minus `1` down to `0`, and on each step: multiply `product` by `original[i]`, then set `result[i]` to `result[i] * product`
- Return `result`
We do two scans through the original array, so the run-time complexity is `O(n)`; we also use an additional `O(n)` of space to store the `result` array.
Let's see how things would change if we focused on the problem and tried to avoid implementation specifics as much as possible.
The abstract solution
The trivial solution is simple: we multiply all elements of the array together, and then the element at index `i` in the resulting array is `product / original[i]`.

If there is one `0` in `original`, we compute the product of all numbers except the `0`, and the resulting array is all `0`s except for the number at the position `i` where `original[i] == 0`. That element gets the product we computed.

If there are multiple `0`s in `original`, we return an array of all `0`s.
What if we can't use division?

The idea here is that we should store multiplication results. We want to find the product of all elements except the one at `i`, so the element at `i` in the resulting array is the product of all elements in `original` before `i` multiplied by the product of all elements in `original` after `i`.
To compute the result we can create two arrays, `l` and `r`, where:
- `l[i]` is equal to the product of all elements in `original` before `i`
- `r[i]` is equal to the product of all elements in `original` after `i`

Then each element in the resulting array would be the product of `l[i]` and `r[i]`.
Algorithm:
- Create a new array `l` of the same size as `original`
- Fill `l` so `l[i]` is equal to the product of all elements of `original` before `i`
- Create a new array `r` of the same size as `original`
- Fill `r` so `r[i]` is equal to the product of all elements of `original` after `i`
- Create a new array `result` of the same size as `original`
- Fill `result` so `result[i] = l[i] * r[i]`
To make the solution more optimal, we can observe that we don't actually need two arrays: we can fill `result` as we would have filled `l`, and then, while doing the iteration for `r`, update `result[i]` in place instead.
Is there a difference between these two descriptions? Aren't they about the same thing, just from a slightly different angle? The second solution looks more formal, but the first one provides an almost complete implementation. Let's try to implement the solutions based on the described algorithms and see how it goes.
Implementation of solution 1
The implementation should be straightforward and follow the steps of the algorithm:
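Translating the algorithm's steps literally, a first attempt might look like the following Python sketch (the function name is mine; as discussed next, this version contains a bug):

```python
def product_except_self_v1(original):
    n = len(original)
    result = [0] * n

    # First pass: left to right, storing the running product at each step.
    product = 1
    for i in range(n):
        product *= original[i]
        result[i] = product

    # Second pass: right to left, multiplying in the running product.
    product = 1
    for i in range(n - 1, -1, -1):
        product *= original[i]
        result[i] = result[i] * product

    return result
```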
Looks good, except for the fact that it is incorrect. The returned `result` for our example array would be `[120, 240, 360, 480, 600]`, while it is supposed to be `[120, 60, 40, 30, 24]`.
When looking at the code with the knowledge that it is incorrect, it is easy to see where the error is: `result[i]` stores the product of all elements of `original` before `i` and including `i`. So with a slight adjustment of our `for` loops, we should get it right:
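The adjustment shifts each loop by one position so that `original[i]` itself is excluded; a sketch of this second attempt (still not quite correct, as discussed next):

```python
def product_except_self_v2(original):
    n = len(original)
    result = [0] * n

    # First pass: result[i] gets the product of elements strictly before i.
    product = 1
    for i in range(1, n):
        product *= original[i - 1]
        result[i] = product

    # Second pass: multiply in the product of elements strictly after i.
    product = 1
    for i in range(n - 2, -1, -1):
        product *= original[i + 1]
        result[i] = result[i] * product

    return result
```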
Ok, we're closer, but the answer is still wrong: `[0, 60, 40, 30, 24]`. Everything but the first element is correct, and the first element is wrong because it was never set in the first `for` loop, so `120` multiplied by the initial `0` gave `0`. One more fix and we should be good:
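The fix is to initialize the first element of `result` to `1`, since the first loop never touches it; a sketch of the final version (function name is mine):

```python
def product_except_self_fixed(original):
    n = len(original)
    if n == 0:
        return []
    result = [0] * n
    result[0] = 1  # never written by the first loop, so it must start at 1

    # First pass: result[i] gets the product of elements strictly before i.
    product = 1
    for i in range(1, n):
        product *= original[i - 1]
        result[i] = product

    # Second pass: multiply in the product of elements strictly after i.
    product = 1
    for i in range(n - 2, -1, -1):
        product *= original[i + 1]
        result[i] = result[i] * product

    return result
```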
And finally, we have `[120, 60, 40, 30, 24]`. It works correctly for our test example.

We have a solution that seems to be correct, but we had to patch two holes before arriving at it. In a technical interview setting where we code on a whiteboard, each iteration would take time to go through a test case, so for a more complicated problem, it could be possible to run out of time before getting the right solution.
The resulting code is also not perfect: there is a special initialization of the `result` array which deserves a proper comment, and the loop code is quite particular: the first loop starts with the second element (index `1`) and works with `i-1`, while the second loop starts with the second element from the end and deals with `i+1`. It is the kind of code that is better left untouched as long as it works.
Let's see what we would get from an implementation of the second algorithm.
Implementation of solution 2
The implementation is also quite easy:
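Following the second algorithm step by step, a Python sketch might look like this (the function name is mine):

```python
def product_except_self(original):
    n = len(original)

    # l[i] is the product of all elements of original before i.
    l = [1] * n
    for i in range(1, n):
        l[i] = l[i - 1] * original[i - 1]

    # r[i] is the product of all elements of original after i.
    r = [1] * n
    for i in range(n - 2, -1, -1):
        r[i] = r[i + 1] * original[i + 1]

    # Each result element combines everything before and after i.
    return [l[i] * r[i] for i in range(n)]
```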
And the answer was correct on the very first attempt. Granted, we had learned lessons from the previous attempts, but still, this implementation is clearer: it is easier to see what's going on with the indexes and the iteration because we know what each iteration should produce.
This, however, is not the optimal solution. Let’s optimize it to use less space:
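Applying the observation that `result` can play the role of `l`, with the `r` pass folded into a running product, the optimized version might be sketched as (function name is mine):

```python
def product_except_self_optimized(original):
    n = len(original)

    # Fill result as we would have filled l: product of elements before i.
    result = [1] * n
    for i in range(1, n):
        result[i] = result[i - 1] * original[i - 1]

    # Instead of a separate r array, keep a running product of the elements
    # after i and multiply it into result[i] on the fly.
    product = 1
    for i in range(n - 2, -1, -1):
        product *= original[i + 1]
        result[i] = result[i] * product

    return result
```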
It was slightly challenging, but the answer to the test case is still correct on the first try.
How about a test case with a `0`, such as `[0, 2, 3, 4, 5]`? The program outputs `[120, 0, 0, 0, 0]`, which is correct.

How about a test case with two `0`s, such as `[0, 2, 3, 4, 0]`? The result is `[0, 0, 0, 0, 0]`.
Looks like the solution is working.

The difference
Besides the difference in the level of detail, there is another one: the first algorithm describes the step-by-step mechanics of the solution, while the second makes it easy to prove that the algorithm is correct.
Is the first algorithm correct? Well, it kind of is, but it is really hard to say.
Is the second algorithm correct? We obtained the resulting array by multiplying a number which is the product of all elements in `original` before `i` by a number which is the product of all elements in `original` after `i`. That is exactly what was asked for in the problem: the product of all elements in the array except the one at `i`.
We could also note that while the first implementation is one big chunk of code involving multiple steps, the second implementation is also a sequence of steps, but steps 1-2 and 3-4 are sub-problems that could easily be moved into their own functions (not that we should do that, but we could).
If you need to squash corner cases one by one, like in a game of whack-a-mole, and the code is a bit of a mess, it could be a sign that you are looking at the problem from the wrong angle. A slight change in perspective can help turn messy if-for-if logic into an elegant computation.
This is what Eric Evans calls "a breakthrough" in his book "Domain-Driven Design". It can take a while to arrive at such a point of view, but the effort is rewarded with a clear solution that is easy to prove and implement. And "problem thinking" is a tool that can help a lot in finding it: by abstracting away from implementation details, we can evaluate and try out different algorithms with less effort.