# Optimizing the dynamic programming solution for the Knapsack Problem

Previously, I wrote about solving a couple of variants of the Knapsack Problem using dynamic programming (“DP”). If you haven’t read them, or if you need to refresh your memory, you can check them out here and here.

In those solutions, we build a 2-dimensional array of size *N × M* (representing a table of *N × M* cells), where *N* is the number of items we can pick from, and *M* is the number of units of capacity of our knapsack.

Today, I’ll describe how those solutions can be optimized to use only a 1-dimensional array. You can think of this optimization as reducing space complexity from *O(NM)* to *O(M)*, where *N* is the number of items, and *M* the number of units of capacity of our knapsack.

# Dynamic Programming

In those problems, we use DP to optimize our solution for time (over a recursive approach) at the expense of space. We store the solutions to sub-problems so we can use those solutions subsequently without having to recompute them.

# Optimizing for Space: Step 1

If you carefully consider the algorithm, however, you will notice that we only ever use a very specific portion of the table at each step.

For instance, consider the loop that fills in the table (lines 4–13 of the original snippet):
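The original snippet isn’t reproduced here, so here is a minimal sketch of the 2-dimensional solution it describes. The function and parameter names (`knapsack_2d`, `values`, `weights`, `capacity`) are my own, not necessarily the author’s:

```python
def knapsack_2d(values, weights, capacity):
    """0/1 knapsack over an (N+1) x (M+1) table.

    dp[i][c] = best value achievable using the first i items
    with c units of capacity.
    """
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            # Option 1: skip item i-1 (reads the preceding row).
            dp[i][c] = dp[i - 1][c]
            # Option 2: take item i-1 if it fits (also reads the preceding row).
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```

Note that both reads inside the inner loop go to `dp[i - 1][...]` only, which is the observation the rest of this post builds on.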

Notice how we only check for values in the preceding row. In other words, if we are currently at row 15, we only need the values in row 14. Rows 1–13 are completely irrelevant (they *were* relevant back when the row immediately following them was being populated, but they lost their relevance immediately after).

Therefore, we only really need to maintain 2 rows at any time: one to represent the previous row, and another to represent the current row.

This immediately reduces our space complexity from *O(NM)* to *O(M)*.
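As a sketch of this first step (again with hypothetical names, not the author’s original code), we can keep just two rows and swap them after each item:

```python
def knapsack_two_rows(values, weights, capacity):
    """Same recurrence as the 2D solution, but only the previous
    and current rows are kept in memory: O(M) space."""
    prev = [0] * (capacity + 1)
    curr = [0] * (capacity + 1)
    for i in range(len(values)):
        for c in range(capacity + 1):
            curr[c] = prev[c]  # skip item i
            if weights[i] <= c:  # take item i, if it fits
                curr[c] = max(curr[c], prev[c - weights[i]] + values[i])
        # The current row becomes the previous row for the next item;
        # swapping avoids allocating a fresh list each iteration.
        prev, curr = curr, prev
    return prev[capacity]
```

After the final swap, `prev` holds the last row we computed, so that is the one we return from.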

# Optimizing for Space: Step 2

At this point, the eagle-eyed reader would be like:

“Do we really need 2 rows?” (If that’s you, great job! It took me forever to understand this optimization, even while reading working code, so if you spotted this instantly, that’s really awesome.)

Actually, you don’t! If you read the recurrence carefully (lines 7–13 of the snippet above), you would realize that we are using only a specific portion of the preceding row. Specifically, we only ever read cells that are *directly above or to the left* (i.e. column number ≤ current column number).

Therefore, it is possible to maintain *only 1 row* (i.e. a 1-dimensional array) throughout the algorithm.

## Left to Right, or Right to Left?

To make that work, we cannot build our row from left to right. Doing so would overwrite information that we still require: when computing the value of a cell, we read values from the preceding row at column indices *less than or equal to* the current one, and a left-to-right pass would have replaced those cells with current-row values already.

Therefore, we must build our row from right to left. When we update a cell this way, every cell to its left still holds the value from the preceding row, which is exactly what the recurrence needs; and by the time a cell is overwritten, no cell to its right will ever read it again.
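To see concretely what goes wrong in the other direction, here is a hypothetical left-to-right version. Updating in ascending order means `dp[c - w]` may already hold a *current-row* value, so the same item can be counted multiple times; this loop actually solves the *unbounded* knapsack variant rather than the 0/1 version:

```python
def knapsack_1d_wrong(values, weights, capacity):
    """BUGGY for 0/1 knapsack: the ascending inner loop lets dp[c - w]
    reflect a row that already includes the current item, so items
    can be taken repeatedly (i.e. this computes unbounded knapsack)."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(w, capacity + 1):  # ascending: overwrites too early
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

With items of value/weight (60, 10), (100, 20), (120, 30) and capacity 50, this returns 300 (five copies of the first item) instead of the correct 0/1 answer of 220.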

# Less Talk, More Code

It might be tricky to visualize the explanation without reading code. Here is the above solution, rewritten after both optimization steps.
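The rewritten snippet isn’t reproduced here either, so here is a minimal sketch of what the fully optimized solution looks like, assuming the same hypothetical names as above:

```python
def knapsack_1d(values, weights, capacity):
    """0/1 knapsack using a single row of M+1 cells: O(M) space."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Descending order: dp[c - w] still holds the preceding row's
        # value when we read it, preserving the 0/1 constraint.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The inner loop stops at `w` because cells with capacity below the item’s weight cannot include it, and so simply keep their previous-row values untouched.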

It really helps to step through the algorithm in an IDE or debugger — that’s how I really internalized this nifty trick. Honestly, this optimization isn’t hugely significant in practice, since we are generally more concerned with time complexity, but it’s really cool nonetheless.