Optimization Modelling in Python: Multiple Objectives

Igor Shvab
Published in Analytics Vidhya · May 30, 2021

In two previous articles I described exact and approximate solutions to optimization problems with a single objective. While the majority of problems one encounters in practice are indeed single-objective, multi-objective optimization (MOO) has its areas of applicability, for example in the manufacturing and automotive industries.

In this article I show the difference between single- and multi-objective optimization problems, and give a brief description of the two most popular techniques for solving the latter: the ε-constraint method and the NSGA-II algorithm.

x1, x2, …, xn — coordinates of the search space of the optimization problem.

x = (x1, x2, …, xn) — a candidate solution; a point in the search space.

Fi — value of the i-th objective function at x.

In a single-objective optimization problem, the superiority of a solution over other solutions is easily determined by comparing their objective function values. In the multi-objective case one can't directly compare values of one objective function against another. Here the goodness of a solution is determined by dominance.

Solution x1 dominates x2 if:

  • solution x1 is no worse than x2 in all objectives
  • and solution x1 is strictly better than x2 in at least one objective

According to this definition, any set of solutions can be divided into dominated and non-dominated subsets. The non-dominated set of the entire feasible decision space is called the Pareto-optimal or Pareto-efficient set. Pareto efficiency is a situation in which one cannot improve a solution x with respect to Fi without making it worse with respect to Fj, and vice versa. In this set there is no single best solution, so the user can choose any solution based on business needs. Often Pareto-optimal solutions can be joined by a line or surface; such a boundary is called the Pareto-optimal front.
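The dominance test defined above translates directly into a few lines of code. This is a minimal sketch (the function name `dominates` is mine, and all objectives are assumed to be maximized):

```python
def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b (maximization assumed)."""
    no_worse = all(a >= b for a, b in zip(f_a, f_b))          # no worse in all objectives
    strictly_better = any(a > b for a, b in zip(f_a, f_b))    # strictly better in at least one
    return no_worse and strictly_better

# (3, 5) dominates (3, 4): equal in the first objective, better in the second.
print(dominates((3, 5), (3, 4)))  # True
# (3, 5) and (4, 2) are mutually non-dominated: each wins one objective.
print(dominates((3, 5), (4, 2)))  # False
print(dominates((4, 2), (3, 5)))  # False
```

Applying this check pairwise over a set of solutions is what separates the dominated points from the Pareto-optimal ones.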

In the figure above, solutions B and C do not dominate each other; both are Pareto optimal.

The goal of multi-objective optimization is to find a set of solutions as close as possible to the Pareto front. In the rest of this article I will show two practical approaches to solving MOO problems.

The ε-constraint method is a classical technique that belongs to the family of scalarization methods for MOO problems. Essentially, scalarization methods try to reformulate the MOO problem as a single-objective one. For example, in the simplest approach, multiple objectives are linearly combined into one overall objective function with arbitrary weights. The drawback of this approach is that one must have prior knowledge of each objective function's range in order to choose appropriate weights. In the ε-constraint method we optimize only one objective function while restricting the others within user-specified values, essentially treating them as constraints. Let's consider the following very simple linear example:

maximize F1 = x1

maximize F2 = 3x1 + 4x2

constraints: x1 <= 20

x2 <= 40

5x1 + 4x2 <= 200

We are going to solve this problem using the open-source Pyomo optimization package.

In short:

  1. First we optimize F1 and F2 separately, just to learn the range of F2 values reachable during F1 optimization.
  2. Then we rewrite F2 as a constraint: F2 - slack_variable == ε, where ε is a user-defined value and the optional slack variable simply helps the optimizer explore the whole decision space. At this point the problem has only one objective function, F1, and one additional constraint.
  3. Finally, we iterate over all ε values from the F2 range and optimize the newly formulated problem at each step.
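The article solves this with Pyomo; since the example is just a small LP, the same three steps can also be sketched with `scipy.optimize.linprog`, which needs no external solver. The 10-point ε grid below is an illustrative choice, not from the article:

```python
import numpy as np
from scipy.optimize import linprog

# Shared constraints: x1 <= 20, x2 <= 40, 5*x1 + 4*x2 <= 200, x >= 0 (linprog default).
A_ub = [[1, 0], [0, 1], [5, 4]]
b_ub = [20, 40, 200]

# Step 1: optimize F2 = 3*x1 + 4*x2 alone to learn its range
# (linprog minimizes, so negate the coefficients to maximize).
res_f2 = linprog(c=[-3, -4], A_ub=A_ub, b_ub=b_ub)
f2_max = -res_f2.fun  # 184, attained at (x1, x2) = (8, 40)

# Steps 2-3: treat F2 as a constraint F2 >= eps (no slack variable needed
# with an inequality) and maximize F1 = x1 for a grid of eps values.
pareto = []
for eps in np.linspace(0.0, f2_max, 10):
    res = linprog(c=[-1, 0],                   # maximize F1 = x1
                  A_ub=A_ub + [[-3, -4]],      # encodes 3*x1 + 4*x2 >= eps
                  b_ub=b_ub + [-eps])
    x1, x2 = res.x
    pareto.append((x1, 3 * x1 + 4 * x2))       # (F1, F2) point on the front

for f1, f2 in pareto:
    print(f"F1 = {f1:6.2f}   F2 = {f2:6.2f}")
```

At the extremes the trade-off is visible directly: with ε = 0 the optimizer is free to push F1 to its maximum of 20, while at ε = 184 the F2 constraint forces x1 down to 8.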

The Pareto front for this simple linear MOO problem is shown in the picture above. It's worth pointing out that the solutions are usually very unevenly distributed along the front; for this particular problem, for example, many solutions are clustered in the lower right corner. In the next example I will show how to sample Pareto-optimal solutions so as to yield a diverse solution set.

In real-world applications, when objective functions are nonlinear or the variable space is discontinuous, the classical methods described above may not work efficiently. Heuristic methods such as genetic algorithms (GA) have proved to be excellent alternatives to classical methods. In this section we will apply one of the most popular heuristic methods, NSGA-II (non-dominated sorting genetic algorithm), to a nonlinear MOO problem. Specifically, we will test NSGA-II on the Kursawe test function.
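For reference, the Kursawe function in its usual textbook form has two objectives over n = 3 decision variables in [-5, 5]:

```python
import math

def kursawe(x):
    """Kursawe test function: two objectives to minimize, x_i in [-5, 5]."""
    # f1 couples neighbouring coordinates through a decaying exponential.
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    # f2 is non-convex and multimodal due to the sin(x^3) term.
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return f1, f2

print(kursawe([0.0, 0.0, 0.0]))  # (-20.0, 0.0)
```

The sin(x³) term makes the Pareto front disconnected, which is exactly what makes this function a good stress test for NSGA-II.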

Before delving into the code, it is worth pointing out that traditionally GAs deal with binary vectors, i.e. vectors consisting of 0s and 1s. In the given example the solution vectors consist of real numbers, x = (x1, x2, x3). While it is always possible to convert real numbers to binary form, we can still apply the same GA logic to real-valued vectors directly.

In evolutionary-algorithm terminology, solution vectors are called chromosomes, their coordinates are called genes, and the value of the objective function is called fitness. Amply commented Python code is given at the bottom of the page. Here is a brief description of the algorithm and a plot of the objective function values.
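The step that gives NSGA-II its name is non-dominated sorting: the population is split into fronts F1, F2, …, where F1 is the non-dominated set, F2 is what becomes non-dominated once F1 is removed, and so on. The following is a naive illustrative sketch of that step (quadratic per front, not the fast sorting of the actual algorithm; minimization is assumed, matching the Kursawe problem):

```python
def dominates(a, b):
    """True if fitness vector a dominates b (minimization assumed)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(fitnesses):
    """Split indices of `fitnesses` into successive non-dominated fronts."""
    remaining = list(range(len(fitnesses)))
    fronts = []
    while remaining:
        # Current front: individuals not dominated by anyone still remaining.
        front = [i for i in remaining
                 if not any(dominates(fitnesses[j], fitnesses[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy population with two minimized objectives:
pts = [(1, 5), (2, 2), (5, 1), (3, 4), (4, 4)]
print(non_dominated_sort(pts))  # [[0, 1, 2], [3], [4]]
```

NSGA-II then fills the next generation front by front, using crowding distance to break ties within the last front that fits, which is what keeps the sampled solutions spread out along the Pareto front.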
