# Column Generation Techniques

Column Generation decomposition techniques have been in the spotlight in recent years as a way to solve huge mixed integer programs (MIPs). While the decomposition theory behind them might appear complex, applying column generation techniques can be very easy in practice.

In this post, I introduce the technique with the classic cutstock example, showing how to implement it with `docplex` and solve it with `CPLEX` in Watson Studio. Then I show how it can also be applied to solve to optimality some well-known Vehicle Routing Problem (VRP) Solomon instances.

# Column Generation Introduction

In a few words, Column Generation is described on Wikipedia as *“the idea to generate only the variables which have the potential to improve the objective function”*.

The problem being solved is split into two problems: the **master problem** and the **sub-problem**. The master problem is the original column-wise formulation of the problem with only a subset of variables being considered. The sub-problem is a new problem created to identify a new promising variable. The objective function of the sub-problem is the reduced cost of the new variable with respect to the current dual variables, and the constraints require that the variable obeys the naturally occurring constraints.
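The reduced cost mentioned above has a very simple form: the column's direct cost minus the dual-weighted sum of its coefficients in the master constraints. A minimal plain-Python sketch (names are illustrative, not from any library):

```python
def reduced_cost(c, a, duals):
    """Reduced cost of a candidate column in a minimization problem:
    its direct cost c minus the dual prices earned by its coefficients a
    in the master constraints."""
    return c - sum(y * a_i for y, a_i in zip(duals, a))

# e.g. a cutstock-style pattern costs 1 roll; with duals (0.25, 0.5, 0.0)
# and coefficients (1, 2, 0) its reduced cost is 1 - 1.25 = -0.25,
# so the pattern is worth adding to the master problem
print(reduced_cost(1, [1, 2, 0], [0.25, 0.5, 0.0]))  # -0.25
```

A column with negative reduced cost (for minimization) can improve the current master solution; the sub-problem's job is to find such a column, or prove none exists.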

In addition to reducing the number of columns in the MIP, in many cases the decomposition also has the great advantage that the constraints naturally decompose into **master constraints** (applying between columns) and **local constraints** (applying only within the sub-problem). We will see examples later.

This decomposition method relies on the theory of the Dantzig-Wolfe decomposition.

Column generation (and the more complex case of **Branch-and-Price**, where columns are generated not only at the root node) was the topic of my PhD thesis, with a special emphasis on the cases where the sub-problem is a shortest path problem. You can read my PhD thesis (in French). You can also refer to the two following articles summarizing my work at the time:

- *Vehicle Routing Problem with Elementary Shortest Path based Column Generation*, Alain Chabrier. Computers & Operations Research, 33(10):2972–2990, 2006 (PDF).
- *Solving a Network Design Problem*

# Cutstock introductory example

The cutstock example has been used for years as an introduction to the Column Generation techniques.

## Problem description

The problem is to find the optimal way to cut rolls of a given width into sets of items. Rolls can be cut using any possible pattern of the different types of items.

Here is the sample data with 5 types of items.

```python
item_sizes = [20, 45, 50, 55, 75]
item_demands = [48, 35, 24, 10, 8]
roll_width = 110
```

The optimal solution requires 47 rolls distributed among 5 different patterns.

```
pattern [1, 2, 0, 0, 0] is used 18 times.
pattern [0, 0, 2, 0, 0] is used 8 times.
pattern [0, 0, 0, 2, 0] is used 5 times.
pattern [1, 0, 0, 0, 1] is used 8 times.
pattern [3, 0, 1, 0, 0] is used 8 times.
```
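This solution can be sanity-checked with a few lines of plain Python (the pattern counts above, hard-coded):

```python
item_sizes = [20, 45, 50, 55, 75]
item_demands = [48, 35, 24, 10, 8]
roll_width = 110

# (pattern, number of rolls cut with it), as in the solution above
solution = [([1, 2, 0, 0, 0], 18),
            ([0, 0, 2, 0, 0], 8),
            ([0, 0, 0, 2, 0], 5),
            ([1, 0, 0, 0, 1], 8),
            ([3, 0, 1, 0, 0], 8)]

# every pattern must fit on a roll
assert all(sum(s * n for s, n in zip(item_sizes, pat)) <= roll_width
           for pat, _ in solution)

# the demand of every item type must be covered
produced = [sum(pat[i] * count for pat, count in solution)
            for i in range(len(item_sizes))]
assert all(p >= d for p, d in zip(produced, item_demands))

print(sum(count for _, count in solution))  # total rolls: 47
```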

## Standard formulation

A very straightforward formulation would be to consider all the rolls that could possibly be cut (using a guessed upper bound, such as `max_cut = 100`), and use an integer variable for each roll and each type of item stating how many items of this type should be cut from this roll.

A constraint per type of item would ensure the required items are cut.

A constraint per roll would ensure the maximal roll width is respected.

An objective would minimize the number of used rolls. This objective is a bit more complex to express and requires introducing, for each roll, a binary variable stating whether or not the roll is used.

A complete formulation would be:

```python
from docplex.mp.model import Model

max_cut = 100                    # guessed upper bound on the number of rolls
items = range(len(item_sizes))   # item type indices
cuts = range(max_cut)

mdl = Model('Custstock')
cut_vars = mdl.integer_var_matrix(cuts, items, lb=0, ub=10, name="cuts")
used_vars = mdl.binary_var_dict(cuts, name="used")

# Objective
cost = mdl.sum(used_vars[c] for c in cuts)
obj = mdl.minimize(cost)

# link used and cuts
mdl.add_constraints(mdl.sum(cut_vars[c, i] for i in items) <= 100 * used_vars[c] for c in cuts)

# Cover demand
mdl.add_constraints(mdl.sum(cut_vars[c, i] for c in cuts) >= item_demands[i] for i in items)

# Roll width
mdl.add_constraints(mdl.sum(item_sizes[i] * cut_vars[c, i] for i in items) <= roll_width for c in cuts)

mdl.print_information()
mdl.solve(log_output=True)
mdl.report()
```

Resulting in the following log:

```
Model: Custstock
 - number of variables: 600
   - binary=100, integer=500, continuous=0
 - number of constraints: 205
   - linear=205
 - parameters: defaults
 - problem type is: MILP
WARNING: Number of workers has been reduced to 1 to comply with platform limitations.
CPXPARAM_Read_DataCheck                          1
CPXPARAM_Threads                                 1
Tried aggregator 1 time.
MIP Presolve modified 400 coefficients.
Reduced MIP has 205 rows, 600 columns, and 1600 nonzeros.
Reduced MIP has 200 binaries, 400 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.00 sec. (1.85 ticks)
Found incumbent of value 100.000000 after 0.02 sec. (3.51 ticks)
Probing time = 0.00 sec. (0.13 ticks)
Tried aggregator 1 time.
Reduced MIP has 205 rows, 600 columns, and 1600 nonzeros.
Reduced MIP has 200 binaries, 400 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.00 sec. (1.94 ticks)
Probing time = 0.00 sec. (0.13 ticks)
Clique table members: 100.
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: none, using 1 thread.
Root relaxation solution time = 0.00 sec. (2.36 ticks)

        Nodes                                     Cuts/
   Node  Left   Objective  IInf  Best Integer    Best Bound    ItCnt     Gap

*     0+    0                      100.0000        0.0000           100.00%
      0     0     17.0833    80    100.0000       17.0833      292   82.92%
*     0+    0                       54.0000       17.0833            68.36%
*     0+    0                       53.0000       17.0833            67.77%
      0     0     18.0000    85     53.0000      Cuts: 43      387   66.04%
      0     0     18.0000    87     53.0000     Cuts: 151      653   66.04%
      0     0     24.4573    99     53.0000     Cuts: 102      986   53.85%
      0     0     44.2436    68     53.0000     Cuts: 151     1135   16.52%
      0     0     45.2628    57     53.0000     Cuts: 102     1298   14.60%
      0     0     45.5000    39     53.0000      Cuts: 48     1466   14.15%
*     0+    0                       52.0000       45.5000            12.50%
      0     0     45.5000    26     52.0000      Cuts: 43     1541   12.50%
      0     0     45.5000    32     52.0000      Cuts: 29     1602   12.50%
*     0+    0                       49.0000       45.5000             7.14%
*     0+    0                       47.0000       45.5000             3.19%
      0     2     45.5000    26     47.0000       45.5000     1602    3.19%
Elapsed time = 0.25 sec. (103.04 ticks, tree = 0.01 MB, solutions = 6)
    646   393      cutoff           47.0000       45.5000    10268    3.19%
   1011   595     45.5000    38     47.0000       45.5000    14940    3.19%
   1240   163     45.9545    20     47.0000       45.5000    17727    3.19%
   1311   173     45.5000    35     47.0000       45.5000    19211    3.19%
   1411   223     45.5000    50     47.0000       45.5000    22203    3.19%
   1508   303     45.9519    34     47.0000       45.5000    26702    3.19%

Implied bound cuts applied:  131
Flow cuts applied:  27
Mixed integer rounding cuts applied:  302
Zero-half cuts applied:  15
Gomory fractional cuts applied:  5

Root node processing (before b&c):
  Real time             =    0.25 sec. (103.05 ticks)
Sequential b&c:
  Real time             =    1.97 sec. (1764.86 ticks)
                          ------------
Total (root+branch&cut) =    2.22 sec. (1867.91 ticks)
* model Custstock solved with objective = 47
```

The solution can then be displayed using something like:

```python
for c in cuts:
    if used_vars[c].solution_value > EPS:
        p = [cut_vars[c, i].solution_value for i in items]
        print(f"pattern {p} is used.")
```

The number of variables is roughly the number of rolls to cut multiplied by the number of item types (plus one binary variable per roll representing its usage). While the problem is not that big, it can take a long time to solve to optimality because of symmetries.

## Pattern Column Formulation

Another, maybe more natural, formulation for this problem is to associate an integer variable with each possible roll cutting **pattern**. Then a constraint per item sums the variables corresponding to the patterns including this item and ensures the demand is covered.

This formulation is very natural and the definition of the variables, constraints and objective is very simple.

The potential problem is the number of possible patterns/variables. With the sample data this number is still quite small, as only a few dozen patterns are feasible, but with real-life instances the number of patterns can quickly grow into the millions, which becomes hard to solve if all variables are given to the engine from the start.
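The exact pattern count for the sample data can be checked by brute-force enumeration (a throw-away sketch using `itertools`; for real instances this enumeration is exactly what becomes intractable):

```python
from itertools import product

item_sizes = [20, 45, 50, 55, 75]
roll_width = 110

# upper bound on the count of each item type in one pattern
bounds = [roll_width // s for s in item_sizes]

# keep every non-empty count vector that fits on one roll
patterns = [p for p in product(*(range(b + 1) for b in bounds))
            if 0 < sum(s * n for s, n in zip(item_sizes, p)) <= roll_width]

print(len(patterns))
```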

## Column Generation Formulation

Column Generation techniques are possible because the Simplex method only requires some of the variables initially, so the addition of columns that can potentially improve the solution can be delayed. In addition, the **reduced cost** theory of the Simplex method provides a criterion for whether a new pattern may or may not improve the current solution.

The initial problem with a limited subset of columns is known as the **master problem**, while the problem of generating new columns is known as the **sub-problem**. In this cutstock case, the sub-problem is a simple knapsack problem.
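To illustrate what this pricing step computes, here is a solver-free sketch that solves the same knapsack by dynamic programming. It is only an illustration of the idea (the rest of the post builds a docplex model for the sub-problem instead), and it assumes the duals are nonnegative, as they are for the covering constraints here:

```python
def price_pattern(item_sizes, duals, roll_width):
    """Pricing knapsack: maximize the dual value packed into one roll.
    Returns (reduced_cost, pattern) with reduced_cost = 1 - best value,
    since cutting one more roll costs 1."""
    best = [0.0] * (roll_width + 1)     # best dual value achievable with width w
    choice = [None] * (roll_width + 1)  # last item added at width w (to rebuild)
    for w in range(1, roll_width + 1):
        for i, size in enumerate(item_sizes):
            if size <= w and best[w - size] + duals[i] > best[w]:
                best[w] = best[w - size] + duals[i]
                choice[w] = i
    # rebuild the pattern by walking the choices back
    pattern = [0] * len(item_sizes)
    w = roll_width
    while choice[w] is not None:
        pattern[choice[w]] += 1
        w -= item_sizes[choice[w]]
    return 1.0 - best[roll_width], pattern

# with all duals at 1000 (as after the dummy initialization below), the best
# pattern packs five items of size 20: reduced cost 1 - 5*1000 = -4999
print(price_pattern([20, 45, 50, 55, 75], [1000.0] * 5, 110))
```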

The master problem can even be initialized with dummy variables whose costs are higher than those of the standard patterns that will be generated later.

```python
from docplex.mp.model import Model

master_mdl = Model('Master-Cutstock')
dummy_vars = master_mdl.continuous_var_dict(items, lb=0, ub=max_cut, name="dummy_cut")
cost = master_mdl.sum(1000 * dummy_vars[i] for i in items)
obj = master_mdl.minimize(cost)
cts = master_mdl.add_constraints(dummy_vars[i] >= item_demands[i] for i in items)
master_mdl.solve()
master_mdl.report()
```

After an initial solve of the master problem, the **dual values** of the demand constraints can be accessed:

`duals = master_mdl.dual_values(cts)`

These dual values can be used in the sub-problem, which is in charge of generating new patterns with negative reduced cost (for a minimization problem):

```python
sub_mdl = Model('Sub-Cutstock')
item_vars = sub_mdl.integer_var_dict(items, ub=999999, name='items')
sub_mdl.add_constraint(sub_mdl.sum(item_sizes[i] * item_vars[i] for i in items) <= roll_width)
sub_mdl.minimize(1 - sub_mdl.sum(duals[i] * item_vars[i] for i in items))
sub_mdl.solve()
sub_mdl.report()
```

The Column Generation process then loops over the master and sub-problems until no new pattern with negative reduced cost can be generated. The `docplex` Python package includes a column-wise API to create new variables over existing constraints.

```python
# body of the column generation loop:
if sub_mdl.objective_value < -EPS:
    new_pattern = [item_vars[i].solution_value for i in items]
    print(f"New pattern {new_pattern}")
    patterns.append(new_pattern)
    new_var = master_mdl.continuous_var(lb=0, ub=max_cut, name="cut_{0}".format(len(patterns)))
    cut_vars.append(new_var)
    cost += new_var
    for i in items:
        cts[i].lhs += new_pattern[i] * new_var
    master_mdl.solve()
    master_mdl.report()
    duals = master_mdl.dual_values(cts)
else:
    break
```

The complete notebook with both formulations can be run in Watson Studio with the free environment.

# Vehicle Routing Problem (VRP)

Another well-known application of Column Generation is the Vehicle Routing Problem. For Column Generation to work, the decomposition must lead to a sub-problem belonging to a class of problems for which efficient algorithms exist to find optimal solutions. In the case of VRP, the sub-problem is a Shortest Path problem.

Column Generation methods have lots of applications where the sub-problem is a constrained shortest path problem. In addition to VRP, let's just mention network design, crew pairing, and multi-commodity flow problems in general. You can see some of these applications illustrated in my PhD thesis.

## VRP introduction

VRP problems can be very hard to solve with MIP solvers because the relaxation of the basic formulation is very weak. A fleet of vehicles has to visit a given set of customers. At each customer, some goods have to be picked up and/or delivered, and vehicles have a limited capacity. Customers can only be visited within some given time window. The objective is to minimize the number of vehicles used, and then the total distance traveled.

The most studied VRP instances are the Solomon instances. Some work has been done on **exact methods** (finding an optimal solution and proving it is optimal), and some other work has been done on **heuristics** (quickly finding good solutions, without any gap or proof of optimality).

When resource constraints and time windows apply to individual vehicle routes, the shortest path problem cannot be solved with a basic Dijkstra algorithm, and requires more complex Shortest Path with Resource Constraints and Time Windows (SPRCTW) algorithms. In particular, because this problem is so commonly used as a sub-problem for Column Generation, quite a lot of material has been published around it. See for example this very well known publication. This is also a nice illustration of how some constraints (vehicle capacity, time windows, etc.) fit completely into the sub-problem local constraints while some others (customer coverage) fit into the master problem global constraints.

## Implementation in Watson Studio

Following a similar schema to the cutting stock example, it is possible to implement a Column Generation algorithm to solve Solomon VRP instances.

For the path sub-problem, I used `DiGraph` from the `networkx` package. I did not quickly find a good SPRCTW algorithm, so I wrote a simple version of my own. It is not perfect and there are lots of possible improvements, but it does the job. You can see the code in this notebook.
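To make the labeling idea concrete, here is a minimal, self-contained sketch of a label-setting algorithm for the elementary shortest path with capacity and time windows. It is an illustrative toy (data structures and the dominance rule are simplified), not the code from the notebook:

```python
def sprc_shortest_path(arcs, demand, windows, capacity, source, target):
    """Cheapest elementary source->target path respecting vehicle capacity
    and time windows. A label is (cost, time, load, path);
    arcs[u][v] = (arc_cost, travel_time); windows[v] = (earliest, latest)."""
    labels = {source: [(0.0, windows[source][0], demand.get(source, 0), (source,))]}
    queue = list(labels[source])
    best = None
    while queue:
        cost, time, load, path = queue.pop()
        for nxt, (arc_cost, travel) in arcs.get(path[-1], {}).items():
            if nxt in path:                          # keep the path elementary
                continue
            t = max(time + travel, windows[nxt][0])  # wait if arriving early
            q = load + demand.get(nxt, 0)
            if t > windows[nxt][1] or q > capacity:  # resource-infeasible
                continue
            lab = (cost + arc_cost, t, q, path + (nxt,))
            # dominance: drop the new label if a stored one is at least as good
            # on cost, time and load AND has visited a subset of its customers
            if any(c2 <= lab[0] and t2 <= lab[1] and q2 <= lab[2]
                   and set(p2) <= set(lab[3])
                   for c2, t2, q2, p2 in labels.get(nxt, [])):
                continue
            labels.setdefault(nxt, []).append(lab)
            if nxt == target:
                if best is None or lab[0] < best[0]:
                    best = lab
            else:
                queue.append(lab)
    return best  # (cost, time, load, path), or None if target is unreachable
```

In the column generation loop, the arc (or node) costs would already include the dual prices of the visited customers, so a sufficiently negative returned cost signals a column with negative reduced cost.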

The column generation iteration works exactly as for the cutstock problem, except that the sub-problem is a shortest path problem instead of a knapsack problem.

```python
paths = []
paths_vars = []
EPS = 0.0001
n = 0
while True:
    n += 1
    n_paths = len(paths)
    print(f'Run {n} with {n_paths} paths.')
    # Update graph with duals
    for c in customers:
        G.nodes[c]['weight'] = -duals[customers.index(c)]
    sp.solve(source, target)
    for l in sp.labels[target]:
        path = l.get_path()
        path.remove(source)
        path.remove(target)
        rc = l.weight
        distance = l.res[0]
        if rc < -EPS:
            print(f"Adding reduced cost {rc} for new path {path}")
            paths.append(path)
            new_var = master_mdl.continuous_var(lb=0, ub=1, name="path_{0}".format(len(paths)))
            paths_vars.append(new_var)
            cost += distance * new_var
            for c in path:
                cts[customers.index(c)].lhs += new_var
    if len(paths) > n_paths:
        master_mdl.solve()
        master_mdl.report()
        duals = master_mdl.dual_values(cts)
    else:
        break
```

At the end, the solution can be displayed.

```
Model: Master-Solomon
 - number of variables: 681
   - binary=0, integer=681, continuous=0
 - number of constraints: 50
   - linear=50
 - parameters: defaults
 - problem type is: MILP
* model Master-Solomon solved with objective = 362.400
path ['customer_5', 'customer_3', 'customer_7', 'customer_8', 'customer_10', 'customer_11', 'customer_9', 'customer_6', 'customer_4', 'customer_2', 'customer_1'] is used 1.0 times.
path ['customer_32', 'customer_33', 'customer_31', 'customer_35', 'customer_37', 'customer_38', 'customer_39', 'customer_36', 'customer_34'] is used 1.0 times.
path ['customer_13', 'customer_17', 'customer_18', 'customer_19', 'customer_15', 'customer_16', 'customer_14', 'customer_12'] is used 1.0 times.
path ['customer_20', 'customer_24', 'customer_25', 'customer_27', 'customer_29', 'customer_30', 'customer_28', 'customer_26', 'customer_23', 'customer_22', 'customer_21'] is used 1.0 times.
path ['customer_43', 'customer_42', 'customer_41', 'customer_40', 'customer_44', 'customer_46', 'customer_45', 'customer_48', 'customer_50', 'customer_49', 'customer_47'] is used 1.0 times.
```

You can have a look and reuse the complete notebook.

## Branch-and-price

Some of the Solomon instances have an integer optimal solution to the relaxed problem, and therefore require no more complex branch-and-price implementation (where new columns are generated, i.e. *priced*, during the branch-and-cut search). The use of such a more complex algorithm is the topic of my PhD thesis on Constrained Shortest Path based Branch-and-Price. Using it, I could find exact solutions to 17 instances of the Solomon benchmark suite which were open at that time.

# Conclusions

For more details, you might look at:

- This PowerPoint presentation from Desrosiers, a pioneer of these techniques at GERAD.
- *A New Optimization Algorithm for the Vehicle Routing Problem with Time Windows*, M. Desrochers, J. Desrosiers, M. Solomon. Operations Research, 40(2):342–354, 1992.
- *Branch-and-Price: Column Generation for Solving Huge Integer Programs*, Cynthia Barnhart, Ellis L. Johnson, George L. Nemhauser, Martin W. P. Savelsbergh, Pamela H. Vance. Operations Research, 46(3):316–329, 1998.