How to do a Reduction
(No, we’re not making sauce)
Suppose we have a problem A that we want to solve. If we can find a way to turn A into some other problem B that we know how to solve, then our solution for A involves:
- Transforming problem A into problem B
- Solving B
- Transforming B’s solution back into a solution for A
This whole process is itself an algorithm, and it’s called a reduction.
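As a concrete (and deliberately tiny) sketch, here’s what the three steps might look like in Python: solving “find the maximum” (our problem A) by reducing it to sorting (a problem B we already know how to solve). The function names here are made up for illustration.

```python
def solve_b(xs):
    # Problem B: sorting a list -- assume we already know how to do this.
    return sorted(xs)

def solve_a(xs):
    # Problem A: find the maximum element, solved by reduction to B.
    b_instance = list(xs)             # 1. transform A's instance into a B instance
    b_solution = solve_b(b_instance)  # 2. solve B
    return b_solution[-1]             # 3. transform B's solution into A's solution
```

The three numbered comments line up with the three steps above; everything outside `solve_b` is the reduction itself.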
Reduction is useful for solving similar problems because you can reuse algorithms, but it also tells us something about the relative difficulty of problems: if we can easily transform an A problem into a B problem, then any solution to B easily yields a solution to A, so B is at least as hard as A.
“Hard” in this sense refers to computational resources: time and space complexity, etc.
In computer science, we represent the A to B reduction with A ≤ B.
Sometimes we also subscript ≤ with the type of the reduction (≤p for polynomial reduction, for example, meaning that transforming into and out of B — the first and third steps above — can be done in polynomial time).
Proving a reduction correct
We should be able to verify that this algorithm for solving A is correct. We must assume that our solution for B is correct (there’s no point reducing a problem you can’t solve into another problem you can’t solve), so we just have to show that the transformations into and out of B are correct.
We want to prove that a correct solution to B gives a correct solution to A. (Why not the other direction? Showing that a correct solution to A gives a correct solution to B doesn’t help at all — A’s solution is exactly the thing we’re trying to establish in the first place.)
If that’s too hard (it’s often hard to prove things correct), we can try to prove the contrapositive instead, which is logically equivalent: ~A => ~B. That is, if the answer to A is “no,” then the answer to B must also be “no.”
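A proof is the gold standard, but a quick sanity check is to compare the reduction against an obviously-correct brute-force solution on lots of random inputs. This is a test, not a proof — the maximum-via-sorting reduction below is a hypothetical example to show the idea.

```python
import random

def solve_a_by_reduction(xs):
    # Reduce "find the maximum" (A) to sorting (B), then read off the answer.
    return sorted(xs)[-1]

def solve_a_directly(xs):
    # Slow but obviously-correct reference solution for A.
    best = xs[0]
    for x in xs:
        if x > best:
            best = x
    return best

random.seed(0)  # make the test cases reproducible
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    assert solve_a_by_reduction(xs) == solve_a_directly(xs)
```

If the assertion ever fires, either the transformation into B, the solver for B, or the transformation back out of B is wrong.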
And that’s it! Isn’t it awesome that we’ve formalized being lazy??!