Solve LP models with DO for WML

Alain Chabrier
4 min read · Dec 16, 2019


Decision Optimization for Watson Machine Learning lets you deploy and solve docplex and OPL models, but it also supports optimization models written in the legacy CPLEX file formats.

The general use of Decision Optimization for Watson Machine Learning to deploy and run optimization models has been described in previous posts. A general introduction with a docplex example can be found in this post, and this other post describes the changes required to work with OPL. I also commented on the pros and cons of docplex vs OPL in this other post.

Differences between docplex/OPL and LP formats

A decision optimization problem is made of a model formulation and input data. The model formulation contains the definitions of the decision variables, constraints and objectives. The same model formulation can then be used with different sets of input data, e.g. to solve a production problem for week 1 and then again for week 2.

The main difference between the docplex and OPL approaches and file formats such as LP or SAV is the separation of model and data. In an LP file, everything is included together, so the complete LP file changes from one optimization job to the next.

An LP file will look like this:

Minimize
obj: 0.840000000000 Roasted_Chicken + 0.780000000000 Spaghetti_W__Sauce
+ 0.270000000000 Tomato,Red,Ripe,Raw + 0.240000000000 Apple,Raw,W_Skin
+ 0.320000000000 Grapes + 0.030000000000 Chocolate_Chip_Cookies
+ 0.230000000000 Lowfat_Milk + 0.340000000000 Raisin_Brn
+ 0.310000000000 Hotdog
Subject To
c1: 277.400000000000 Roasted_Chicken + 358.200000000000 Spaghetti_W__Sauce
+ 25.800000000000 Tomato,Red,Ripe,Raw + 81.400000000000 Apple,Raw,W_Skin
+ 15.100000000000 Grapes + 78.100000000000 Chocolate_Chip_Cookies
+ 121.200000000000 Lowfat_Milk + 115.100000000000 Raisin_Brn
+ 242.100000000000 Hotdog - Rgc1 = 2500
...
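
For contrast, here is a minimal docplex sketch of the same kind of model, with the data held in plain Python structures so that only the data changes between runs (the values below are just an illustrative fragment of the diet example, not the full data set):

from docplex.mp.model import Model

# Illustrative data: in a real flow this part changes from job to job,
# while the formulation below stays the same.
cost = {'Roasted_Chicken': 0.84, 'Spaghetti_W__Sauce': 0.78, 'Hotdog': 0.31}
calories = {'Roasted_Chicken': 277.4, 'Spaghetti_W__Sauce': 358.2, 'Hotdog': 242.1}

mdl = Model(name='diet')
qty = mdl.continuous_var_dict(cost, name='qty')

# The same structural constraints and objective, whatever data is supplied
mdl.add_constraint(mdl.sum(calories[f] * qty[f] for f in cost) >= 2500)
mdl.minimize(mdl.sum(cost[f] * qty[f] for f in cost))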

The flow of DO for WML is suited to this model and data separation: it supports deploying a model formulation once and then using it for several optimization jobs with different data sets.

DO for WML flow

Still, it is possible to use DO for WML with LP problems. The main change is that a new LP file must be included with each new optimization job. These jobs are created on an empty deployment, i.e. one that does not include any model formulation, and the LP file is passed by reference at job creation time.

In this post and in this sample notebook, I will also illustrate the use of the REST APIs instead of the WatsonMachineLearningClient Python API.

The two main differences with respect to docplex or OPL are described in the following sections:

  1. create an empty deployment with the right do-cplex_12.9 model type,
  2. create jobs with references to the input LP file and to the output solution files.

Create an empty deployment

Creating an empty deployment is pretty easy, as you just need to skip the model formulation upload step.

With REST APIs, the model creation call looks like:

import requests
import json

MY_MODEL = 'DietLP'
MY_TYPE = 'do-cplex_12.9'

# log() is a small printing helper defined earlier in the notebook
log('Creating model ' + MY_MODEL)

# No model formulation is attached: only a name, a type and a runtime
payload = {
    'name': MY_MODEL,
    'description': MY_MODEL,
    'type': MY_TYPE,
    'runtime': {
        'href': runtime
    }
}

r = requests.post(wml_credentials["url"] + '/v4/models', headers=mywmlheaders, json=payload)
res = json.loads(r.text)
model_uid = res['metadata']['guid']
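
These calls assume that mywmlheaders already contains a valid bearer token for the WML instance. Here is a minimal sketch of how such headers might be built from an IBM Cloud IAM API key (build_wml_headers is a hypothetical helper, not part of the WML API; adapt it to your instance):

import requests

# Hypothetical helper: exchange an IBM Cloud API key for an IAM bearer token
def build_wml_headers(api_key, instance_id):
    r = requests.post(
        'https://iam.cloud.ibm.com/identity/token',
        headers={'Content-Type': 'application/x-www-form-urlencoded'},
        data={'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
              'apikey': api_key})
    token = r.json()['access_token']
    # WML v4 calls carry the bearer token plus the ML instance id
    return {'Authorization': 'Bearer ' + token,
            'ML-Instance-ID': instance_id}

mywmlheaders = build_wml_headers(wml_credentials['apikey'],
                                 wml_credentials['instance_id'])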

You need to use do-cplex_12.9 as the model type so that the runtime will look for an LP file (or other native CPLEX file format) to run.

The deployment call looks like:

# Deploy the (empty) model; model_rev is the model revision created earlier
payload = {
    'asset': {
        'href': '/v4/models/' + model_uid + '?rev=' + model_rev
    },
    'name': MY_MODEL,
    'batch': {},
    'compute': {
        'name': 'S',
        'nodes': 1
    }
}
mywmlheaders['Content-Type'] = 'application/json'
r = requests.post(wml_credentials["url"] + '/v4/deployments', headers=mywmlheaders, json=payload)
res = json.loads(r.text)
deployment_uid = res['metadata']['guid']
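
Deployment creation is asynchronous, so before submitting jobs you may want to wait until the deployment is ready. A sketch of such a polling loop (the status field layout here is my assumption about the v4 responses; check it against your instance):

import time

# Poll the deployment until it leaves its initializing state
while True:
    r = requests.get(wml_credentials["url"] + '/v4/deployments/' + deployment_uid,
                     headers=mywmlheaders)
    state = json.loads(r.text)['entity']['status']['state']
    if state != 'initializing':
        break
    time.sleep(5)
log('Deployment state: ' + state)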

Provide LP files by reference

There are different alternatives to provide LP files by reference. One of them is to use Cloud Object Storage (COS). In a production application, this ensures a clean separation between the optimization process and the production application, which communicate through COS: the production application pushes the LP files to COS and gets the solutions back from COS, and the optimization execution can even be triggered independently.

In order to use this alternative, you will need to create a COS instance and get some credentials. A free Lite plan can be enabled here.

COS also offers a console where you can create and browse the content of buckets, and upload and download files.

Cloud Object Storage console
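
The same can be done programmatically. Here is a minimal sketch using the ibm_boto3 package with HMAC-style credentials (the cos_credentials keys mirror those used in the job payload below; I am assuming your COS service credentials include such HMAC keys):

import ibm_boto3

# S3-compatible client on the COS endpoint, using HMAC credentials
cos = ibm_boto3.client('s3',
                       aws_access_key_id=cos_credentials['id'],
                       aws_secret_access_key=cos_credentials['key'],
                       endpoint_url=cos_credentials['url'])

# Push the LP file that the optimization job will read by reference
cos.upload_file('diet1.lp', cos_credentials['bucket'], 'diet1.lp')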

The job creation code is then:

payload = {
    'deployment': {
        'href': '/v4/deployments/' + deployment_uid
    },
    'decision_optimization': {
        'solve_parameters': {
            'oaas.logAttachmentName': 'log.txt',
            'oaas.logTailEnabled': 'true',
            'oaas.resultsFormat': 'JSON'
        },
        # The LP file is not inlined: the job reads it from COS
        'input_data_references': [
            {
                'id': 'diet.lp',
                'type': 's3',
                'connection': {
                    'endpoint_url': cos_credentials["url"],
                    'access_key_id': cos_credentials["id"],
                    'secret_access_key': cos_credentials["key"]
                },
                'location': {
                    'bucket': cos_credentials["bucket"],
                    'path': 'diet1.lp'
                }
            }
        ],
        # Both the solution and the engine log are written back to COS
        'output_data_references': [
            {
                'id': 'solution.json',
                'type': 's3',
                'connection': {
                    'endpoint_url': cos_credentials["url"],
                    'access_key_id': cos_credentials["id"],
                    'secret_access_key': cos_credentials["key"]
                },
                'location': {
                    'bucket': cos_credentials["bucket"],
                    'path': 'solution.json'
                }
            },
            {
                'id': 'log.txt',
                'type': 's3',
                'connection': {
                    'endpoint_url': cos_credentials["url"],
                    'access_key_id': cos_credentials["id"],
                    'secret_access_key': cos_credentials["key"]
                },
                'location': {
                    'bucket': cos_credentials["bucket"],
                    'path': 'log.txt'
                }
            }
        ]
    }
}
r = requests.post(wml_credentials["url"] + '/v4/jobs', headers=mywmlheaders, json=payload)
res = json.loads(r.text)
job_uid = res['metadata']['guid']

In this case, the solution will be pushed in JSON format to the same COS bucket, along with the complete log file.

WARNING: the solution file must be named solution.json for this to work (or solution.xml when using the XML results format).
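
Once the job is created, you can poll its state and then fetch the solution back from COS. A sketch, reusing the cos client from above (the status field layout is again my assumption about the v4 job responses):

import time

# Poll the job until it reaches a terminal state
while True:
    r = requests.get(wml_credentials["url"] + '/v4/jobs/' + job_uid,
                     headers=mywmlheaders)
    state = json.loads(r.text)['entity']['decision_optimization']['status']['state']
    if state in ('completed', 'failed'):
        break
    time.sleep(5)

# On success, the job has written solution.json to the bucket; download it
if state == 'completed':
    cos.download_file(cos_credentials['bucket'], 'solution.json', 'solution.json')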

Complete notebook

You can see the complete notebook here. The notebook also includes some examples of REST API calls to read and write from COS.

It is currently not possible to pass the LP files inline in the job creation payload.

It is possible to create a new deployment for each LP file, but this would obviously be inefficient and unnecessary.

Alain.chabrier@ibm.com

@AlainChabrier

https://www.linkedin.com/in/alain-chabrier-5430656/


Alain Chabrier

Former Decision Optimization Senior Technical Staff Member at IBM. Opinions are my own and I no longer work for any company.