
Evaluating Operations Research Solvers

Berk Orbay
4 min read · Dec 12, 2022

--

Which OR solver is better? Short answer: “Whatever gets the job done”. Long answer: “It depends”. It mainly depends on your requirements, convenience, and budget.

This post is about how to determine the metrics for choosing your solver. We will focus on MILP (Mixed Integer Linear Programming) solvers, but the discussion applies to other problem types as well.

“What is a solver?”

For the uninitiated: a solver is the software that takes a model, defined by parameters, decision variables, an objective and constraints, and computes a solution. If you are into AI or Deep Learning topics, you may think of the solver as the TensorFlow/Keras or PyTorch of OR. It is neither a precise definition nor a perfect comparison, but it should give you a picture.
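To make that concrete, here is a minimal sketch of a tiny MILP handed to an open source solver. It assumes PuLP with its bundled CBC backend; the model itself is made up purely for illustration.

```python
import pulp

# Decision variables
x = pulp.LpVariable("x", lowBound=0, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, cat="Continuous")

# Model: objective + constraints
model = pulp.LpProblem("tiny_milp", pulp.LpMaximize)
model += 3 * x + 2 * y, "objective"
model += 2 * x + y <= 10, "capacity"
model += x + 3 * y <= 15, "labor"

# The solver (CBC here) is the piece that actually searches for an optimal solution
model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], pulp.value(model.objective))
```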

There are many good commercial solvers out there for various OR tasks. There are also some good open source solvers (“all open source solvers are good solvers”).

Mittelmann Benchmarks

You simply cannot ignore Hans D. Mittelmann if you are writing about solvers. The Mittelmann Benchmarks are a highly regarded reference for comparing the performance of different solvers, so they are an excellent place to start.

Simply put, there is a set of mathematical models (MIPLIB2017) that each solver needs to solve. Let’s review the standard MILP benchmark (link). Each solver is run under nearly identical computational conditions (“The following codes were run with a limit of 2 hours on an Intel i7–11700K, 8 cores and 8 threads, 64GB, 3.6Ghz”).

Each solver is evaluated by the number of problems it can solve and by its solution times. If a solver cannot solve a problem in the set, the maximum time is assigned to it for that problem.

As of the current run (2022–11–13), you can see the results: unscaled and scaled computation times and the number of solved problems are displayed. Gurobi takes the lead in the current table.
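As a side note on how such tables are usually aggregated: solver benchmarks typically combine runtimes with a shifted geometric mean rather than a plain average, so a few very hard instances do not dominate the score. A small illustration; the shift of 10 seconds and the runtimes below are assumptions for the sketch, not figures from the benchmark.

```python
import math

def shifted_geometric_mean(times, shift=10.0):
    """Aggregate solve times in seconds; the shift dampens the effect of very small runtimes."""
    return math.exp(sum(math.log(t + shift) for t in times) / len(times)) - shift

# Hypothetical runtimes for two solvers on the same four problems,
# with the 2-hour limit (7200 s) assigned to an unsolved instance.
solver_a = [12.0, 95.0, 7200.0, 3.5]
solver_b = [30.0, 60.0, 410.0, 9.0]
print(shifted_geometric_mean(solver_a), shifted_geometric_mean(solver_b))
```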

Vanity Metrics

Well, actually… there is no such thing as an inherently vanity metric. Every benchmark has to apply some set of criteria to compare and rank its constituents. But a metric becomes a vanity metric for you if it is not relevant to your needs.

Some of the reasons a benchmark metric may not be relevant to you:

  • MIPLIB2017 problem set might be harder than your problem at hand
  • Your time constraints might be more relaxed
  • Your computers might be more powerful
  • You have some API requirements and development effort is important to you
  • There are other solvers not included in this benchmark (CPLEX, Xpress, even OR-Tools)
  • Some solvers might not have adequate licensing options or a convenient way to deploy to the cloud
  • You might need professional support
  • Some problems are easier to solve after some “hyperparameter” tuning of solver settings (see the sketch after this list)
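On that last point, many of these “hyperparameters” are just solver parameters such as time limits, optimality gaps and thread counts. A rough sketch, assuming a recent PuLP version with its CBC interface; parameter names differ between solvers.

```python
import pulp

# A toy model; the point here is the solver configuration, not the model.
model = pulp.LpProblem("tuned_run", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")
model += x, "objective"
model += x >= 7, "demand"

# "Hyperparameters": stop after 60 seconds, accept a 1% optimality gap, use 4 threads.
solver = pulp.PULP_CBC_CMD(timeLimit=60, gapRel=0.01, threads=4, msg=False)
model.solve(solver)
print(pulp.LpStatus[model.status], pulp.value(x))
```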

Budget is also something to consider. Some solver license fees are not cheap (and, in most cases, rightly so).

How to choose?

Simple. Suppose you are looking for a solver for a project (side project or work project) with some requirements (e.g. “It should have a Python API”).

  1. Start with the most convenient open source solver. Try to solve your problem. If it works, try again after artificially making your problem larger; perform a “stress test” (sketched below the list). If that also works, you are fine. If not, go to step 2.
  2. Get trial licenses from some of the highly performant commercial solvers. Conduct your own benchmark with your own problems, on your own hardware, under your own requirements.
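Here is a rough sketch of the stress test in step 1, assuming PuLP with CBC. The random knapsack-style instance is only a stand-in for your real model.

```python
import random
import time
import pulp

def stress_test(n_items, time_limit=300):
    """Time CBC on a random knapsack-style MILP with n_items binary variables."""
    random.seed(0)
    values = [random.randint(1, 100) for _ in range(n_items)]
    weights = [random.randint(1, 100) for _ in range(n_items)]

    model = pulp.LpProblem("stress", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]
    model += pulp.lpSum(v * xi for v, xi in zip(values, x))
    model += pulp.lpSum(w * xi for w, xi in zip(weights, x)) <= 25 * n_items

    start = time.perf_counter()
    model.solve(pulp.PULP_CBC_CMD(msg=False, timeLimit=time_limit))
    return pulp.LpStatus[model.status], round(time.perf_counter() - start, 1)

# Grow the instance until the open source solver starts to struggle.
for n in (100, 1_000, 10_000):
    print(n, *stress_test(n))
```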

Hybrid Way

This part is more of a thought. Some commercial solvers have cloud interfaces that charge you only for the time the solver is used. Used full time, this is probably significantly more expensive than a license on your own hardware.

If your project allows it, you may start with an open source solver and feed the solution it finds to the commercial solver as a warm start, to get a better solution in a short amount of time.
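As a sketch of that hand-off, assuming gurobipy on the commercial side and that you have already extracted the open source solution into a dict keyed by variable name; both the helper and the dict are illustrative, not from any particular workflow in this post.

```python
import gurobipy as gp

def warm_started_solve(model: gp.Model, incumbent: dict) -> None:
    """Pass a solution found by an open source solver to Gurobi as a MIP start."""
    for v in model.getVars():
        if v.VarName in incumbent:
            v.Start = incumbent[v.VarName]  # a starting hint; Gurobi may improve on it
    model.optimize()

# Usage idea: build `model` in gurobipy, map variable names to the values
# CBC (or another open source solver) found, then call warm_started_solve(model, values).
```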

However, such a setup requires development effort and might not be suitable for you.

Conclusion

The good thing is that there are plenty of options (both open source and commercial) to choose from. Open source solvers are increasingly performant and “free”. With commercial solvers, you get tons of performance, additional features and support. Benchmarks are cool, but they might not reflect what you need.

At some point, choosing a solver comes down to the “whatever works” criterion. But doing some research and testing before selecting a solver always pays off.

--


Berk Orbay

Current main interests are #OR and #RL. You may reach me on LinkedIn.