Tips and Tricks for Improving OR Models

by Nikhil Kulkarni

Opex Analytics
The Opex Analytics Blog
9 min read · Jul 12, 2019

Operations research (OR) optimization models are often quite complex, thanks to the scale of operations in today’s business world, as well as the increasingly intricate nature of decision-making. With improvements in computing infrastructure and solver technology, solving OR problems (e.g. supply chain network design, vehicle routing, etc.) has gotten easier over time. However, even with more horsepower to tackle them, business problems have evolved to become increasingly complex as well.

The complexity of an OR model is mainly determined by (a) the number and type of variables, and (b) the number and type of constraints in the model. Consequently, complexity can be managed by keeping only the most important variables and constraints in a model. But even after rooting out all unnecessary variables and constraints, many practical business problems remain tough to solve to optimality in realistic time.

To overcome this, commercial solvers (e.g., Gurobi, CPLEX, FICO Xpress) provide hyperparameters to make their model implementations more efficient. In this blog, I’ll talk about how you can leverage a few great solver tricks to handle today’s complex problems.

(Specific solvers are mentioned in this blog due to my familiarity with them, but other solvers also have many of the capabilities discussed in this post.)

Threads and Parallel Executions

Threads and Jobs

The number of threads per job is very important for solver performance, especially for mixed integer programs (MIPs), since multiple threads allow the branch-and-bound nodes created by integer variables to be processed in parallel. However, increasing the number of threads does not necessarily reduce solve time. In fact, adding threads sometimes increases solve time, since the processor must spend time managing and synchronizing them.

Another important hyperparameter is the number of solve jobs submitted simultaneously. In most default settings, the solver is free to choose any number of threads required to solve each job; however, the default setting is sometimes not the most efficient option.

For example, consider two large jobs submitted to the solver, which is operating on an eight-core machine. By default, each job will consume all eight threads (the default thread count typically equals the number of cores). Consequently, both jobs will compete for the available memory and compute power of the eight cores, negatively impacting each other.

The best setting for the number of threads depends on the scale and complexity of the model. The only way to find the ideal thread count for a model is to experiment with a large, representative subset of the problem: solve the same instance with several thread settings and compare run times.
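For instance, here is a minimal benchmarking sketch in gurobipy. The file name subset.mps is a placeholder for a representative subset of your model, and the thread counts and time limit are assumptions to adapt to your hardware:

```python
import gurobipy as gp

base = gp.read("subset.mps")  # placeholder: a representative subset of the full problem

for n_threads in [1, 2, 4, 8]:
    m = base.copy()                  # fresh copy so trials don't warm-start each other
    m.Params.Threads = n_threads     # thread count for this trial
    m.Params.TimeLimit = 600         # cap each trial at 10 minutes
    m.optimize()
    # assumes each trial finds an incumbent, so MIPGap is queryable
    print(f"{n_threads} threads: {m.Runtime:.1f} s, MIP gap {m.MIPGap:.4f}")
```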

However, the optimal number of threads isn't always fixed, even for a specific model. For example, suppose our model has a period index in most or all of its decision variables. The scale of the model then grows as the number of periods in the data increases.

Suppose after experimenting with a subset of the data (with its own specific number of periods), the number of threads suitable for the model turns out to be eight. If the number of periods in the model changes, it’s best to check whether the same number of threads still works well. It’s important to ensure that what works for a high-complexity version of the problem works for a low-complexity version of the problem too.

Parallel Executions

Once an application is delivered to a client, it's possible that multiple people submit jobs to the solver simultaneously. Suppose further that the client's business needs require some large solves and some smaller ones, each focusing on a different variant of the problem. Since the hardware available for all these executions is shared, there is a meaningful possibility that several small executions will overlap with bigger solves, delaying the whole set of jobs.

To combat this, you can set the maximum number of jobs to be solved simultaneously, and even tinker with the solver's queuing process. Once set, any job initiated while the maximum number of jobs is already running goes into a queue, and begins only when one of the running jobs completes.

As an example, suppose the following:

  • we have an eight-core Gurobi single-machine compute server license
  • we have eight or more cores in our hardware
  • we conducted one of the aforementioned thread experiments, which indicated that using four threads per job is optimal

If all of this is true, then setting the number of jobs per application to two ensures that two scenarios can run concurrently, each with four dedicated threads, without competing for resources.
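Here's a hedged sketch of the client side: each submitted job caps its own thread count to match the experiment above. The server-side limit of two simultaneous jobs is configured separately (for a Gurobi Compute Server, via the JOBLIMIT property in the grb_rs configuration; check your solver's documentation for the exact mechanism). The file name is a placeholder:

```python
import gurobipy as gp

# Cap this job at four threads, per the thread experiment above.
env = gp.Env()
env.setParam("Threads", 4)

m = gp.read("scenario.mps", env=env)  # "scenario.mps" is a placeholder
m.optimize()
```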

Sharing the Memory Load

The other main reason that models take a long time to solve is a lack of sufficient memory. The entire solution process happens in the RAM of the server itself. However, even as much as 64 or 128 GB of memory can sometimes fall short for solving complex problems. That’s when hard drive storage comes in handy to help share the load.

All commercial optimization engines provide a way to share the memory load between RAM and HDD (hard disk drive) storage. Typically, specific hyperparameters govern when branch-and-bound node data is moved from RAM to disk once memory consumption crosses a certain limit.

For example, NodeFileStart is a Gurobi parameter that specifies how much memory (in GB) the branch-and-bound node tree may consume in RAM. If memory use crosses this limit, node files are compressed and written to a predetermined folder on the local HDD (set via the companion NodeFileDir parameter).

The flip side of using this parameter (as opposed to increasing RAM capacity) is that each interaction with the HDD adds extra time to the overall solve process, as accessing RAM is much faster.
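A minimal sketch of this setup in gurobipy; the 8 GB threshold, the directory, and the file name are illustrative choices:

```python
import gurobipy as gp

m = gp.read("big_model.mps")                 # placeholder file name
m.setParam("NodeFileStart", 8)               # start writing node files once the
                                             # tree consumes ~8 GB of RAM
m.setParam("NodeFileDir", "/tmp/nodefiles")  # where node files are written
m.optimize()
```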

Concurrent LP Runs

Root relaxation is a key step in solving MIP problems: the integrality constraints on all integer variables are relaxed, and this easier version of the problem is solved. The objective value of the relaxed solution becomes the lower (upper) bound for the original minimization (maximization) problem.

A solver's default behavior is to try several algorithms in parallel and stop as soon as one of them completes the root relaxation. This is a good way to find a suitable algorithm for a model in a short amount of time. However, for any given model, once the data's scale stabilizes, the algorithm chosen by this process hardly changes. At that point, it makes sense to save time by restricting the solver to the clear winner.

An execution log from one of our MIP models illustrates this behavior. Its four marked parts show the following:

  1. To obtain a root relaxation solution, the solver tries multiple algorithms simultaneously.
  2. Since the log file can show the progress of only one algorithm, the solver displays the barrier log alone.
  3. Another algorithm completed first, interrupting the barrier execution.
  4. The algorithm that completed first was primal simplex.

If you look closely at the last spot in the log, you'll see that the optimizer recommends setting the Method parameter to 3 to save concurrent solve time; that's the parameter that controls the root relaxation algorithm choice in Gurobi. The CPLEX equivalent is rootalg (in OPL), startalgorithm in Python/MATLAB, and RootAlgorithm in C++/Java.
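For example, once your logs consistently show primal simplex winning the concurrent race, you can pin the root algorithm to it. A minimal gurobipy sketch follows; the winner, and hence the value, will vary by model, and the file name is a placeholder:

```python
import gurobipy as gp

m = gp.read("mip_model.mps")  # placeholder file name
m.Params.Method = 0           # 0 = primal simplex; pin to whichever
                              # algorithm wins in your own logs
m.optimize()
```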

The following text from Gurobi’s website describing the method parameter is especially useful in understanding its potential impact:

Concurrent optimizers run multiple solvers on multiple threads simultaneously, and choose the one that finishes first. Method=3 and Method=4 will run dual simplex, barrier, and sometimes primal simplex (depending on the number of available threads). Method=5 will run both primal and dual simplex. The deterministic options (Method=4 and Method=5) give the exact same result each time, while Method=3 is often faster but can produce different optimal bases when run multiple times.

I suggest visiting this page on the Gurobi website to get a better understanding of this parameter.

Numerical Issues

Numerical issues can also create serious efficiency problems, and the execution logs expose them. Gurobi, for example, prints a "Coefficient statistics" section; the excerpt below is illustrative, with representative (not actual) numbers:
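```
Coefficient statistics:
  Matrix range     [4e-06, 5e+07]
  Objective range  [1e+00, 2e+06]
  Bounds range     [1e+00, 8e+04]
  RHS range        [3e-01, 2e+07]
```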

Let’s break this down a bit.

  • The matrix range refers to the matrix of coefficients of decision variables in all the model’s constraints.
  • The objective range is the range of coefficients in the objective function.
  • The bounds range represents the lowest and highest bounds across all variables.
  • The RHS range is the range of RHS (right-hand side) constants after moving all variables to the LHS (left-hand side) and all constants to the RHS.

While it isn't always true, the wider these ranges are, the more likely the model is to suffer from numerical instability, and the longer it may take to solve.

Besides these four specific ranges, there are a few other circumstances in which numerical issues might pop up: when using Big M, and in the coefficients of objective functions.

Use of Big M

Big M is a very large constant, often used in MIP models as the multiplier of a binary variable on the RHS of a constraint. Used this way, the binary variable toggles the constraint on and off by making the RHS either zero or a large number, respectively.

However, such use of Big M needlessly weakens the formulation and widens its numerical range. To avoid this, the value chosen should be just above the maximum possible value of the variable or expression it bounds. Using Gurobi's/CPLEX's built-in Big M handling usually helps, since these solvers employ heuristics to arrive at a sensible value.
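Here's a minimal sketch of this idea in gurobipy, with hypothetical data: rather than a blanket constant like 1e9, M is set to the tightest bound the data supports, here the largest flow the variable could ever carry:

```python
import gurobipy as gp
from gurobipy import GRB

max_demand = 1_450  # hypothetical: a known upper bound on the flow
m = gp.Model("big_m_demo")
x = m.addVar(name="flow")                    # continuous flow on a lane
y = m.addVar(vtype=GRB.BINARY, name="open")  # 1 if the lane is open

# y = 0 forces x to 0; y = 1 limits x only by the tight bound max_demand.
m.addConstr(x <= max_demand * y, name="link")
m.setObjective(x, GRB.MAXIMIZE)
m.optimize()
```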

(This is only one specific way to use Big M — see more on other uses at the Operations Research Stack Exchange.)

Coefficients of Objective Functions

Certain coefficients (i.e., penalties or rewards) assigned to components of the objective function are sometimes set far lower or higher than their natural real-world counterparts, needlessly widening the model's numerical range.

For example, consider the concept of unsatisfied demand in an OR problem. A natural coefficient for a product's unsatisfied demand is its price, since what someone is willing to pay for an item reflects the cost of failing to deliver it. But what if the price of a product is not available, or we're solving a cost-minimization problem rather than a revenue-maximization problem? We'll have to pick a number to represent the penalty for unsatisfied demand, and we should be careful not to needlessly expand our numerical range (see the sketch below).
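Here's a hedged sketch in gurobipy with hypothetical numbers: the penalty of 120 per unit of unmet demand is chosen to sit near a plausible unit price, rather than an arbitrary 1e9 that would blow up the objective's coefficient range:

```python
import gurobipy as gp
from gurobipy import GRB

demand, unit_cost, penalty = 500, 40, 120  # hypothetical data
m = gp.Model("unmet_demand_demo")
ship = m.addVar(ub=demand, name="ship")    # units actually shipped
unmet = m.addVar(name="unmet")             # demand left unsatisfied

m.addConstr(ship + unmet >= demand, name="demand")
m.setObjective(unit_cost * ship + penalty * unmet, GRB.MINIMIZE)
m.optimize()
```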

However, we may come across situations where avoiding large numerical ranges is impossible. Parameters like IntFeasTol in Gurobi and epint in CPLEX help us navigate these situations: they allow the solver to treat a decision variable's value as integral if it lies within a given tolerance of an integer. For example, if the tolerance is set to 1e-5 and the value of a decision variable is found to lie within 12 ± 1e-5, then the variable is assumed to be 12, and branching and bounding continue based on this assumption.
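A minimal sketch of loosening this tolerance in gurobipy; the file name is a placeholder, and 1e-4 is an illustrative value (Gurobi's default IntFeasTol is 1e-5):

```python
import gurobipy as gp

m = gp.read("hard_mip.mps")  # placeholder file name
m.Params.IntFeasTol = 1e-4   # accept values within 1e-4 of an integer;
                             # loosen with care, as accepted solutions
                             # may be slightly fractional
m.optimize()
```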

Summary

For most problems, default solver hyperparameter values work just fine, but that’s not always the case. Very hard problems, especially MIPs, require small tweaks to arrive at a solution quickly.

If you found this useful, you’ll probably enjoy this post on optimization modeling in Python, or some notes on applying Gurobi in the real world.

_________________________________________________________________

If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
