# Optimization Modeling in Python: PuLP, Gurobi, and CPLEX

I have been involved in the design, development, and implementation of operations research (OR) and optimization models such as Linear Programs (LP), Mixed Integer Linear Programs (MILP), and Quadratic Programs (QP) for more than a decade. In the past four years, I have realized the importance of OR solutions (i.e., software solutions that are based on optimization models) for solving these kinds of problems. In the past, we used to model a real-world optimization problem with LP/MILP packages in isolation such as GAMS, AMPL, OPL, or others, then solve it with an optimization solver (such as CPLEX, Gurobi, Mosek, Xpress, etc.) and give the optimal result to managers and decision makers. Thus, optimization models were traditionally designed for use in strategic/tactical decisions rather than operational ones.

These days, however, many in industry want to plan and make optimal decisions regularly as a part of their hourly, daily, or weekly operations. Recent computational advances have provided the infrastructure for us to incorporate optimization models in analytic software solutions. This means that today’s OR practitioners need to design, model, and implement robust software engines that are based on LP/MILP models. They need to utilize a programming language such as C++, Java, C#, Python, etc. for that purpose.

A good and popular programming language recommended by many in the OR and Data Science communities is Python. It is easy, flexible, and powerful, and has great libraries for Machine Learning, Optimization, and Statistical Modeling. In this blog, I’ll focus on how one can use Python to write OR models (LPs/MILPs).

Many optimization solvers (commercial and open-source) have Python interfaces for modeling LPs, MILPs, and QPs. Here I’ve selected CPLEX and Gurobi, since they are among the leading commercial solvers, and PuLP, which is a powerful open-source modeling package in Python. I’ll provide a side-by-side tutorial for each of these packages, and I hope it will help you to easily translate your model from one to another. Here, we use `gurobipy` (Gurobi’s Python API), `docplex` (the IBM Decision Optimization CPLEX Modeling package for Python), and `pulp` (an LP/MILP modeler written in Python). For the purpose of this post, I’ll assume that you are familiar with Python, i.e., you know how to install and use Python packages and use Python data structures like lists, tuples, and dictionaries. I’ll also assume basic knowledge of linear programming, mixed integer programming, and constrained optimization.

Now let’s dive in to optimization modeling with Gurobi, CPLEX, and PuLP. As a quick review, an optimization model is a problem which has an objective (or a set of objectives in multi-objective programming), a set of constraints, and a set of decision variables. The following is a simple optimization model:
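Written out explicitly (reconstructed here from the parameter definitions and constraint code that follow, assuming a minimization objective with ≤ constraints; the other senses are shown later), the model is:

```latex
\begin{aligned}
\min \quad & \sum_{i=1}^{n} \sum_{j=1}^{m} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{i=1}^{n} a_{ij}\, x_{ij} \le b_j, && j = 1, \dots, m \\
& l_{ij} \le x_{ij} \le u_{ij}, && i = 1, \dots, n;\; j = 1, \dots, m
\end{aligned}
```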

In the above optimization example, **n, m, a, b, c, l,** and **u** are input parameters and assumed to be given. In order to write Python code, we set these parameters as follows:

```python
import random

n = 10
m = 5
set_I = range(1, n + 1)
set_J = range(1, m + 1)
c = {(i, j): random.normalvariate(0, 1) for i in set_I for j in set_J}
a = {(i, j): random.normalvariate(0, 5) for i in set_I for j in set_J}
l = {(i, j): random.randint(0, 10) for i in set_I for j in set_J}
u = {(i, j): random.randint(10, 20) for i in set_I for j in set_J}
b = {j: random.randint(0, 30) for j in set_J}
```
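The instance above is random, so every run produces a different model. If you want reproducible instances while testing (a small addition on my part, not in the original post), you can seed the generator first:

```python
import random

random.seed(42)  # fixed seed: the random instance is now reproducible

n, m = 10, 5
set_I = range(1, n + 1)
set_J = range(1, m + 1)

# same parameter construction as above, now deterministic across runs
c = {(i, j): random.normalvariate(0, 1) for i in set_I for j in set_J}
b = {j: random.randint(0, 30) for j in set_J}
```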

Now it’s time to implement our OR model in Python. When we want to code an optimization model, we put a placeholder for that model (like a blank canvas), then add its elements (decision variables and constraints) to it. Here is how:

**Gurobi**

```python
import gurobipy as grb

opt_model = grb.Model(name="MIP Model")
```

**CPLEX**

```python
import docplex.mp.model as cpx

opt_model = cpx.Model(name="MIP Model")
```

**PuLP**

```python
import pulp as plp

opt_model = plp.LpProblem(name="MIP Model")
```

After this step, we have a **Model Object** named `opt_model`. Next, we need to add decision variables. It is standard to store decision variables in Python dictionaries (or Pandas Series), where the dictionary keys are the variable indices and the values are the decision variable objects. A decision variable is defined by three main properties: its type (continuous, binary, or integer), its lower bound (0 by default), and its upper bound (infinity by default). For the above example, we can define the decision variables as:

**Gurobi**

```python
# if x is Continuous
x_vars = {(i, j): opt_model.addVar(vtype=grb.GRB.CONTINUOUS,
                                   lb=l[i, j],
                                   ub=u[i, j],
                                   name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Binary
x_vars = {(i, j): opt_model.addVar(vtype=grb.GRB.BINARY,
                                   name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Integer
x_vars = {(i, j): opt_model.addVar(vtype=grb.GRB.INTEGER,
                                   lb=l[i, j],
                                   ub=u[i, j],
                                   name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}
```

**CPLEX**

```python
# if x is Continuous
x_vars = {(i, j): opt_model.continuous_var(lb=l[i, j], ub=u[i, j],
                                           name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Binary
x_vars = {(i, j): opt_model.binary_var(name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Integer
x_vars = {(i, j): opt_model.integer_var(lb=l[i, j], ub=u[i, j],
                                        name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}
```

**PuLP**

```python
# if x is Continuous
x_vars = {(i, j): plp.LpVariable(cat=plp.LpContinuous,
                                 lowBound=l[i, j], upBound=u[i, j],
                                 name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Binary
x_vars = {(i, j): plp.LpVariable(cat=plp.LpBinary,
                                 name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}

# if x is Integer
x_vars = {(i, j): plp.LpVariable(cat=plp.LpInteger,
                                 lowBound=l[i, j], upBound=u[i, j],
                                 name="x_{0}_{1}".format(i, j))
          for i in set_I for j in set_J}
```

After setting decision variables and adding them to our model, it is time to set constraints. Any constraint has three parts: a **left-hand side** (normally a linear combination of decision variables), a **right-hand side** (usually a numeric value), and a **sense** (Less than or equal, Equal, or Greater than or equal). To set up any constraints, we need to set each part:

**Gurobi**

```python
# <= constraints
constraints = {j: opt_model.addConstr(
    lhs=grb.quicksum(a[i, j] * x_vars[i, j] for i in set_I),
    sense=grb.GRB.LESS_EQUAL,
    rhs=b[j],
    name="constraint_{0}".format(j))
    for j in set_J}

# >= constraints
constraints = {j: opt_model.addConstr(
    lhs=grb.quicksum(a[i, j] * x_vars[i, j] for i in set_I),
    sense=grb.GRB.GREATER_EQUAL,
    rhs=b[j],
    name="constraint_{0}".format(j))
    for j in set_J}

# == constraints
constraints = {j: opt_model.addConstr(
    lhs=grb.quicksum(a[i, j] * x_vars[i, j] for i in set_I),
    sense=grb.GRB.EQUAL,
    rhs=b[j],
    name="constraint_{0}".format(j))
    for j in set_J}
```

**CPLEX**

```python
# <= constraints
constraints = {j: opt_model.add_constraint(
    ct=opt_model.sum(a[i, j] * x_vars[i, j] for i in set_I) <= b[j],
    ctname="constraint_{0}".format(j))
    for j in set_J}

# >= constraints
constraints = {j: opt_model.add_constraint(
    ct=opt_model.sum(a[i, j] * x_vars[i, j] for i in set_I) >= b[j],
    ctname="constraint_{0}".format(j))
    for j in set_J}

# == constraints
constraints = {j: opt_model.add_constraint(
    ct=opt_model.sum(a[i, j] * x_vars[i, j] for i in set_I) == b[j],
    ctname="constraint_{0}".format(j))
    for j in set_J}
```

**PuLP**

```python
# <= constraints
constraints = {j: opt_model.addConstraint(
    plp.LpConstraint(
        e=plp.lpSum(a[i, j] * x_vars[i, j] for i in set_I),
        sense=plp.LpConstraintLE,
        rhs=b[j],
        name="constraint_{0}".format(j)))
    for j in set_J}

# >= constraints
constraints = {j: opt_model.addConstraint(
    plp.LpConstraint(
        e=plp.lpSum(a[i, j] * x_vars[i, j] for i in set_I),
        sense=plp.LpConstraintGE,
        rhs=b[j],
        name="constraint_{0}".format(j)))
    for j in set_J}

# == constraints
constraints = {j: opt_model.addConstraint(
    plp.LpConstraint(
        e=plp.lpSum(a[i, j] * x_vars[i, j] for i in set_I),
        sense=plp.LpConstraintEQ,
        rhs=b[j],
        name="constraint_{0}".format(j)))
    for j in set_J}
```
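As a side note (this pattern does not appear in the snippets above), PuLP also accepts constraints through the `+=` operator, which many users find more readable than constructing `LpConstraint` objects explicitly. A minimal sketch, assuming `pulp` is installed:

```python
import pulp as plp

opt_model = plp.LpProblem(name="MIP_Model")
x = {i: plp.LpVariable("x_{0}".format(i), lowBound=0) for i in range(1, 4)}

# the += operator takes a (constraint, name) tuple
opt_model += plp.lpSum(2 * x[i] for i in x) <= 10, "capacity"
```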

The next step is defining the objective, which is a linear expression. Here is how we can define it:

**Gurobi**

```python
objective = grb.quicksum(x_vars[i, j] * c[i, j]
                         for i in set_I
                         for j in set_J)
```

**CPLEX**

```python
objective = opt_model.sum(x_vars[i, j] * c[i, j]
                          for i in set_I
                          for j in set_J)
```

**PuLP**

```python
objective = plp.lpSum(x_vars[i, j] * c[i, j]
                      for i in set_I
                      for j in set_J)
```

Now we need to add an objective to our model:

**Gurobi**

```python
# for maximization
opt_model.ModelSense = grb.GRB.MAXIMIZE
# for minimization
opt_model.ModelSense = grb.GRB.MINIMIZE

opt_model.setObjective(objective)
```

**CPLEX**

```python
# for maximization
opt_model.maximize(objective)
# for minimization
opt_model.minimize(objective)
```

**PuLP**

```python
# for maximization
opt_model.sense = plp.LpMaximize
# for minimization
opt_model.sense = plp.LpMinimize

opt_model.setObjective(objective)
```

Finally, we call the solver to solve our optimization model. In PuLP, the default solver is CBC, but it can work with other solvers as well. Here is the final step in solving our model:

**Gurobi**

```python
opt_model.optimize()
```

**CPLEX**

```python
# solving with local cplex
opt_model.solve()

# solving with cplex cloud
opt_model.solve(url="your_cplex_cloud_url", key="your_api_key")
```

**PuLP**

```python
# solving with CBC (the default)
opt_model.solve()

# solving with GLPK
opt_model.solve(solver=plp.GLPK_CMD())
```
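Whichever solver you call, it’s good practice to check the solution status before reading variable values. Here is a tiny end-to-end PuLP example (a toy model of my own, not the one above), using the CBC solver that ships with PuLP:

```python
import pulp as plp

model = plp.LpProblem(name="toy", sense=plp.LpMaximize)
x = plp.LpVariable("x", lowBound=0, upBound=4)
model += 2 * x  # a bare expression sets the objective

model.solve(plp.PULP_CBC_CMD(msg=False))  # bundled CBC, output silenced

print(plp.LpStatus[model.status])  # "Optimal"
print(plp.value(model.objective))  # 8.0, attained at x = 4
```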

Now we are done. We just need to get results and post-process them. I’ve found that the Pandas package is a good data processing library. If the problem is solved to optimality, we can get and process results as follows:

```python
import pandas as pd

opt_df = pd.DataFrame.from_dict(x_vars, orient="index",
                                columns=["variable_object"])
opt_df.index = pd.MultiIndex.from_tuples(opt_df.index,
                                         names=["column_i", "column_j"])
opt_df.reset_index(inplace=True)

# Gurobi
opt_df["solution_value"] = opt_df["variable_object"].apply(lambda item: item.X)

# CPLEX
opt_df["solution_value"] = opt_df["variable_object"].apply(lambda item: item.solution_value)

# PuLP
opt_df["solution_value"] = opt_df["variable_object"].apply(lambda item: item.varValue)

opt_df.drop(columns=["variable_object"], inplace=True)
opt_df.to_csv("./optimization_solution.csv")
```

Here, `opt_df` is a Pandas dataframe that holds the optimal values of each decision variable. We can also save these results in a CSV file as shown above.

We only covered high-level modeling in Python, but all of the above packages contain useful functions and data structures that should be considered when you write production-ready code. For example, in Gurobi, you can add a set of variables at once using `opt_model.addVars()`, whereas in CPLEX it’s `opt_model.continuous_var_dict()`, `opt_model.binary_var_dict()`, or `opt_model.integer_var_dict()`, and in PuLP it can be done with `plp.LpVariable.dicts()`.
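To make the PuLP variant concrete, here is a short sketch of `plp.LpVariable.dicts()`; note that with multiple index sets it returns nested dictionaries accessed as `x_vars[i][j]`, not the tuple-keyed dictionary used earlier:

```python
import pulp as plp

set_I = range(1, 4)
set_J = range(1, 3)

# one call builds the whole family of variables with shared bounds and type
x_vars = plp.LpVariable.dicts("x", (set_I, set_J),
                              lowBound=0, upBound=10,
                              cat=plp.LpContinuous)
```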

If you found this useful, you’ll probably enjoy checking out this post on tips and tricks to improve OR models, a neat experiment where we applied optimization to machine learning, or some notes on applying Gurobi in the real world.
