Algorithmic Gems — Using less space

Hamza Ibrahim
4 min read · Oct 7, 2015


In this series I talk about advanced algorithmic ideas that aren’t usually captured in textbooks. I try to keep it simple and light, but feel free to leave a comment if you have a question and I will make sure to reply.

Today I’d like to talk about a couple of common techniques that help improve your algorithm’s space complexity. Sometimes your solution doesn’t work because it requires too much memory, and sometimes you simply want to save on resources; in those cases, the two ideas I discuss here might help.

1. Recycle memory between iterations

The first idea for optimizing memory usage applies when you have an iterative algorithm, one that loops to build up some calculation.

In some cases you can allocate just enough memory for one iteration, then recycle and reuse it in subsequent iterations instead of allocating more space.

For example, consider an iterative solution to calculate the Nth Fibonacci number. It starts by calculating the 1st Fibonacci number, then the 2nd, and so on until it reaches the Nth. An inefficient memory allocation strategy is to allocate an array of size N to store the Fibonacci number computed at each iteration of the algorithm. Why is this inefficient? Because at each iteration you really only need the values of the last two Fibonacci numbers; anything prior to those won’t be used in this iteration or any subsequent one.

So a much more efficient strategy is to allocate only enough memory for one iteration, in our case just two units for the last two Fibonacci values, and then recycle and reuse that space in every iteration. This brings the space complexity down to a constant.
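Here’s a quick sketch of what that looks like in Python: two variables are the entire allocation, recycled on every pass of the loop.

```python
def fib(n):
    """Nth Fibonacci number (fib(0) = 0, fib(1) = 1) in constant space."""
    prev, curr = 0, 1  # the only two units of memory we ever allocate
    for _ in range(n):
        prev, curr = curr, prev + curr  # recycle the same two slots
    return prev

# fib(10) -> 55
```

The tuple assignment updates both slots in place each iteration, so no array of size N is ever needed.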

As a second example, imagine we have a two-dimensional grid of N rows and N columns, and our algorithm iterates over the grid top to bottom and left to right. At each cell it calculates a value that depends only on the cell’s top and left neighbors, and at the end we only care about the value calculated for the bottom-right cell.

How much space would such an algorithm need? Again, a naive answer is to use a 2-D array to keep the value calculated at every cell of the grid, which gives quadratic space complexity. Can we do better? Yes: at each cell we only look at two previously computed cells, and both were evaluated within the last N iterations of the algorithm, so we only need to keep the values of the last N iterations around. That nicely cuts our space requirement from quadratic to linear.
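As a concrete sketch, here the recurrence is assumed to be “cell value plus the minimum of the top and left neighbors” (the classic minimum-path-sum problem, standing in for any top/left dependency). A single row of N values is recycled for every row of the grid:

```python
def bottom_right_value(grid):
    """Compute the bottom-right cell of a top/left-dependent recurrence
    (here: min path sum) using one reusable row instead of an N x N table."""
    n = len(grid)
    row = [0] * n  # recycled buffer: holds the last N computed values
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                row[j] = grid[i][j]
            elif i == 0:
                row[j] = grid[i][j] + row[j - 1]   # only a left neighbor
            elif j == 0:
                row[j] = grid[i][j] + row[j]       # only a top neighbor
            else:
                row[j] = grid[i][j] + min(row[j], row[j - 1])
    return row[-1]
```

Before `row[j]` is overwritten it still holds the value from the row above, and `row[j - 1]` already holds the current row’s left neighbor, which is exactly the “last N iterations” window the argument above describes.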

2. Normalize your state

The second idea for optimizing space complexity applies when your algorithm has a notion of a state and needs to process many such states; this is very common in recursive and dynamic programming problems.

Typically you define a state by a number of identifiers. For instance, if our state is a point in 3D space, we can identify it by three numbers: x, y and z. Sometimes you have additional knowledge that lets you reduce the number of identifiers needed to uniquely identify a state. For example, if every valid state must satisfy x + y + z = 0, then we can uniquely identify any valid state by only two of the coordinates, say x and y, since the third can always be inferred from the other two.

By having a smaller set of identifiers we reduce the number of states and accordingly the amount of space required to store values for all states.
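One place this exact invariant genuinely shows up is hexagonal grids, which are often addressed with “cube coordinates” (x, y, z) that satisfy x + y + z = 0 by construction. A sketch of normalizing those states down to two identifiers:

```python
def hex_distance(a, b):
    """Distance between two hex cells in cube coordinates, where each
    state is stored as just (x, y); z is implied by x + y + z == 0."""
    ax, ay = a
    bx, by = b
    az, bz = -ax - ay, -bx - by  # reconstruct the redundant identifier
    return max(abs(ax - bx), abs(ay - by), abs(az - bz))
```

Any table keyed by these states (a memo in a dynamic program, a visited set in a search) now needs only two dimensions instead of three, which is exactly the space saving described above.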

Alright, that’s all for today. I hope today’s topic was useful. As always, I’d love to hear your feedback or any questions you might have. If you found this useful you can follow Algorithmic Gems on Facebook and YouTube, and please share it with anyone who might find it useful.

Thanks for reading!
