The Defining Methods of Combinatorial Optimization

Ali Rahman · Published in Parallel AI · Feb 9, 2020 · 9 min read

Combinatorial Optimization is the process of finding an optimal solution within a finite set of possible solutions (1). During the search process, each solution is evaluated and at the end of the search, the solution with the best value is returned.

Many real-world problems can be formulated as combinatorial optimization problems. For example: finding the best route from home to school, packing all the items into the car so as to maximize space utilization and avoid multiple trips to the supermarket, or finding the optimal schedule that lets you attend all classes in college and still have time for football.

Combinatorial problems are generally very difficult to solve due to the huge number of possible combinations that can form a valid solution (1).

To better understand what I mean, let’s consider Bit the robot:

Bit is a robot that likes to wander. His internal memory doesn’t allow him to store much information, and it often happens that he forgets his way back home. What Bit did store are the names of all the locations he visited during his walks and the number of steps he made between each pair of locations, since a full charge of his battery only allows him a limited number of steps. Now Bit wants to get back home for dinner without draining his battery, but as soon as he starts walking back he realizes it is not that simple: there are many possible routes he can choose from to get back home, each with a possibly different number of steps.

Indeed, Bit can start from location L1, L2, or L3. Once he has made his choice, say location L2, Bit can again choose among three other locations, namely L4, L5, and L6, for a total of 3 x 3 = 9 different combinations just for the first pair of choices (i.e., L1 -> L4, L1 -> L5, L1 -> L6, L2 -> L4, …). So, how many choices would Bit have considering all the locations? If Bit can choose among three different locations at each step and there are n steps, this leads to 3^n combinations (3 to the power of n). For example, with 10 steps, Bit would have to choose among 3^10 = 59049 different combinations. Even if Bit is a clever robot and can compute the cost of one route from starting location to home in half a second, it would still take 59049 x 0.5 = 29524.5 seconds, or 29524.5/3600 ≈ 8.2 hours, for him to decide where to go!
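
As a quick sanity check of these numbers, here is a minimal Python sketch of the simplified model above (three choices at each of n steps, half a second to evaluate one route); the function name is just for illustration:

def brute_force_effort(n_steps, choices_per_step=3, seconds_per_route=0.5):
    # Count every candidate route and estimate how long evaluating all of them takes.
    n_routes = choices_per_step ** n_steps
    return n_routes, n_routes * seconds_per_route / 3600  # (routes, hours)

print(brute_force_effort(10))  # (59049, 8.20125) -> roughly 8.2 hours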

Bit is dealing with a combinatorial optimization problem. Luckily for us — and for Bit — a lot of research has been put into clever methods that avoid exploring all the possible combinations, for example by trading guaranteed optimality for quicker, possibly suboptimal answers.

How do combinatorial optimization solvers work?

Combinatorial optimization solvers are capable of finding solutions to problems that have hundreds of thousands of possible combinations. In some cases, they can do this in a matter of seconds. The idea is to avoid brute-force exploration of all possible solutions and instead guide the search towards “promising” combinations through a careful analysis of the search space.

There are two main components that allow solvers to make “intelligent” decisions:

  1. The objective function, a method used to evaluate the quality of a given solution. For example, Bit the robot can evaluate how good a route is by simply summing up all the distances between pairs of locations on that route;
  2. The algorithm used to combine choices together and build solutions. In our example, this is the algorithm that Bit is using to choose the next location to go to at each step on his way back home.

These two components are the semantics and the logic behind solving the problem. To get a better understanding of what this looks like, let’s first see how a combinatorial problem is translated into an input that a solver can understand: variables, constraints, and an objective function (2).

Variables

Variables are the modeling “entities” of the problem, i.e., objects that, combined together and given the right values, represent a solution to the problem. A variable can take only the values specified by its domain (i.e., a set of values). A solver therefore systematically assigns values to the variables until all variables are assigned and a solution is found. For example, Bit could use 10 x 10 = 100 variables to model his 10-location problem. Each variable Vij would represent the path from a location i to another location j and would have a domain with two values: zero and one. If the variable has value one, the path from location i to location j is part of Bit’s route; otherwise it is not.
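
As a minimal sketch of this encoding in plain Python (independent of any particular solver), Bit’s 100 variables could be stored as a dictionary of binary values:

N = 10  # number of locations

# x[(i, j)] == 1 means the path from location i to location j is on Bit's route;
# a solver would be the one assigning these values during the search.
x = {(i, j): 0 for i in range(N) for j in range(N)}

print(len(x))  # 100 variables, each with domain {0, 1}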

Constraints

Constraints limit the set of value combinations that the variables can take in order to form a solution. Propagating constraints during the search process means pruning assignments of values to variables that would violate the constraints. For example, Bit can use a constraint to avoid going from location i to location j if he has already gone from location j to location i or vice-versa:

Vij + Vji <= 1

Bit can also constrain the route to visit exactly 10 locations:

Σ Vij = 10 for i, j = 1, 2, …, 10

This simply means that the sum of the values of all variables must be 10, exactly the number of locations (i.e., exactly 10 variables are assigned the value 1).

And so on with other constraints.
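
As a rough illustration, here is how these two constraints could be checked against the dictionary of variables sketched earlier; these are plain Python feasibility checks, not a real solver’s constraint propagation:

def no_double_visit(x):
    # Vij + Vji <= 1: never use both directions between the same two locations.
    return all(x[(i, j)] + x[(j, i)] <= 1 for (i, j) in x if i < j)

def visits_ten_locations(x):
    # The sum of all variables must be exactly 10.
    return sum(x.values()) == 10

def is_feasible(x):
    return no_double_visit(x) and visits_ten_locations(x)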

Objective function

We already said that the objective function is used to determine the quality of a solution. But what does the “quality” of a solution mean exactly? In the context of combinatorial optimization it means finding the minimum (or maximum) value among a set of possible values computed from assignments of values to the variables. For example, suppose Bit has a function Steps that, given a location i and a location j, returns the number of steps Bit has to make to go from i to j:

Steps(i, j) -> number of steps from i to j

Bit’s objective function would then look something like the following:

minimize Σ Vij * Steps(i, j) for i, j = 1, 2, …, 10

since Bit wants to minimize the number of steps to get back home and avoid consuming all his battery. Notice that the variables can only take the values 0 or 1, so the objective function effectively minimizes the total number of steps on the chosen route.
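
Continuing the same plain-Python sketch, and assuming a hypothetical steps(i, j) function that returns the number of steps between locations i and j, the objective could be evaluated like this:

def total_steps(x, steps):
    # Sum the steps of every pair (i, j) whose variable is set to 1.
    return sum(steps(i, j) for (i, j), value in x.items() if value == 1)

# A solver searches for the assignment of x that minimizes total_steps(x, steps)
# while keeping is_feasible(x) true.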

How Combinatorial Optimization problems are solved

Exact Methods

Exact methods employ algorithms that solve an optimization problem to optimality. In other words, if Bit uses an exact method to find his way back home, he is sure that the route he computes has the least number of steps among all possible routes. Exact methods solve combinatorial optimization problems by enumerating candidate solutions and systematically discarding assignments of variables that cannot improve the best solution found so far. The set of candidate solutions is built at run time as a rooted search tree in which each node represents a choice that the algorithm can make.

This technique is frequently used in specialized solvers such as IBM CPLEX or Gurobi to perform branch and bound: they explore branches of the search tree and check each branch against the lower and upper bounds of the best solutions found so far. If a branch cannot improve the current best solution, the entire subtree is discarded and the search continues from another branch. The difference between the upper bound (i.e., the best solution found so far) and the lower bound (i.e., the minimum of the best possible values found in a subtree) is called the optimality gap. When this gap is zero, the optimal solution has been found and the process can terminate. Note that the initial upper bound doesn’t need to be calculated by the solver itself; it can be given as input. For example, it can be computed using other optimization techniques (see “Combined Solutions” later on).
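
To make the idea concrete, here is a minimal branch-and-bound sketch for Bit’s routing problem. It is only an illustration: it assumes Bit starts from a fixed location 0, uses the cost of the partial route as a (valid but weak) lower bound, and keeps the best complete route found so far as the upper bound; real solvers such as CPLEX or Gurobi compute far stronger bounds.

import math

def branch_and_bound(dist, n):
    # dist[i][j]: number of steps between locations i and j (nonnegative).
    best_cost = math.inf   # upper bound: cost of the best complete route so far
    best_route = None

    def explore(route, cost, remaining):
        nonlocal best_cost, best_route
        if cost >= best_cost:        # bound: this subtree cannot improve the best solution
            return
        if not remaining:            # leaf: a complete route, update the upper bound
            best_cost, best_route = cost, route
            return
        for nxt in sorted(remaining):  # branch: try each unvisited location next
            explore(route + [nxt], cost + dist[route[-1]][nxt], remaining - {nxt})

    explore([0], 0, frozenset(range(1, n)))
    return best_route, best_cost

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(branch_and_bound(dist, 4))  # ([0, 1, 3, 2], 14)

The cost >= best_cost test is the pruning step: whenever the lower bound of a subtree cannot beat the current upper bound, the entire subtree is discarded without being explored.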

Despite the success of exact methods, combinatorial problems are often so large and complex that other strategies must be used.

Local Search Strategies

Local search strategies are based on the idea of iteratively improving a candidate solution through incremental changes, in order to reach another solution that is hopefully better than the previous one. The process continues until a certain timeout is reached or no further improvement is possible. These methods do not guarantee to find the optimal solution, but they are generally very fast and can produce relatively good results. In fact, a lot of research has been put into local search strategies, especially because they are often easy to parallelize.

The hill climbing algorithm, for example, is an iterative procedure that starts from an arbitrary solution, performs some small changes, and accepts those changes only if they improve the current objective. If Bit wants to use hill climbing to find his way back home, he would start from a random assignment of the variables and, at each step, select a random variable and switch its value from 0 to 1 or vice-versa. If this switch improves the current objective value, Bit keeps the change; otherwise he undoes it and tries another random switch.
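
A minimal hill-climbing sketch for Bit, reusing the is_feasible check from earlier and assuming a one-argument objective(x) wrapper around the step-count objective (both hypothetical helpers):

import random

def hill_climb(x, objective, is_feasible, max_iters=10_000):
    # x: dict of 0/1 route variables, modified in place from some initial assignment.
    best = objective(x)
    for _ in range(max_iters):
        var = random.choice(list(x))
        x[var] ^= 1                          # flip the variable: 0 -> 1 or 1 -> 0
        if is_feasible(x) and objective(x) < best:
            best = objective(x)              # improvement: keep the change
        else:
            x[var] ^= 1                      # no improvement: undo the flip
    return x, best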

The simplicity of this algorithm makes it a very popular first-choice strategy for solving optimization problems. It is very fast, and several copies of the same algorithm can run in parallel while a master process keeps collecting the best solution among them. The drawback is that it can get stuck for a long time in a place where no single change yields an improvement (i.e., a local minimum). In these scenarios, more advanced algorithms that use some sort of memory or more sophisticated heuristics can provide better results in a shorter amount of time.

Methods like Simulated Annealing, Tabu Search, and Large Neighborhood Search have been successfully applied to solve a variety of optimization problems, from large-scale aircraft trajectory planning problems to vehicle routing.

Combined Solutions

After reading about local search strategies and exact methods, you may wonder: “what happens if I use the former to find an initial solution and improve it with the latter?”

The answer is that it usually works, as shown by hybrid methods (3). Hybrid methods combine different techniques, effectively creating a sort of optimization pipeline: the output of one solver serves as the input to another.

These combinations of solvers don’t need to be strictly “sequential”. For example, metaheuristics (4) are generally implemented as a parallel portfolio of solvers running different algorithms. The idea is to run strategies in parallel and choose the one that is most suitable for a given problem. Basically, it is like having many copies of Bit the robot running around the map and collecting results from each of them.
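
A minimal sketch of such a portfolio in Python, where each solver is just a function taking the problem and a time limit and returning a (cost, solution) pair; every name here is hypothetical:

from concurrent.futures import ProcessPoolExecutor

def run_portfolio(problem, solvers, time_limit=60):
    # Run every solver on the same problem in parallel and keep the cheapest result.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(solve, problem, time_limit) for solve in solvers]
        results = [f.result() for f in futures]
    return min(results, key=lambda r: r[0])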

These methods can perform very well (especially on particular classes of problems), but they require a good amount of work to set up the infrastructure that connects the algorithms together and integrates the solvers, since these are often black-box solutions.

Conclusion

In conclusion, combinatorial optimization deals with very complex problems found in many real-life scenarios. Optimization solvers are able to solve problem instances with thousands of variables in a few hours (and sometimes minutes) but, despite these promising results, a lot of research is still ongoing to find ways of getting to solutions faster. With optimization problems and use cases growing larger and more pervasive (e.g., in the retail industry), machine learning techniques relying heavily on decision-making support (e.g., autonomous vehicles and path planning, drone delivery), and real-time optimization (e.g., routing, real-time fleet rescheduling), there is a growing need for optimization systems that can handle huge amounts of data in real time, combine multiple strategies more easily and, hopefully, help Bit get back in time for dinner!

References

  1. Computing in Combinatorial Optimization, by William Cook
  2. Handbook of Constraint Programming, by Francesca Rossi
  3. Investigating Constraint Programming and Hybrid Methods for Real World Industrial Test Laboratory Scheduling, by T. Geibinger et al.
  4. Metaheuristic optimization: http://www.scholarpedia.org/article/Metaheuristic_Optimization
