Deploying a Simple Machine Learning App with Node.js and Watson ML

Give users real-time predictions; Watson ML for developers (part 3)

--

This article is a continuation of a series of posts introducing developers to machine learning and Watson ML.

In Part 1 I gave you an overview of machine learning and some of the tools you can use to build ML systems. In Part 2 I showed you how to build an end-to-end machine learning system using the IBM Data Science Experience, Spark ML, and Watson ML. Part 2 also showed you how to deploy your models to Watson ML and make predictions in real-time using a RESTful API.

In this blog post I’ll close the loop and show you how to make real-time machine learning predictions from your end-user applications. I’ll walk through a sample Node.js application that predicts house prices based on the models we trained and deployed in Parts 1 and 2.


Sample application

The source code and instructions for the sample Node.js application can be found at https://github.com/ibm-watson-data-lab/watson-ml-scoring-demo. To run the application you will need a Watson Machine Learning instance provisioned in IBM Cloud and a deployed model based on the example provided in Part 2.

If you haven’t read Part 2, I highly recommend you start there. Part 2 shows you how to provision an instance of Watson Machine Learning, train a model in a Jupyter notebook using Spark ML, and deploy your model to Watson ML.

The sample application is a single web page with a Node.js backend. The GitHub repo provides step-by-step instructions for running the sample application locally, or deploying the application to the IBM Cloud.

The sample application provides a simple interface for predicting the price of a house, where a user can enter the square footage and number of bedrooms.

You can try out the sample application at https://watson-ml-scoring-demo.mybluemix.net.

Clicking the submit button calls a JavaScript function in the browser, which in turn makes a RESTful call to the Node.js application. The Node.js application then calls your Watson ML scoring endpoint to predict the price of the house.
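
To make that concrete, here is a minimal sketch of the browser-side call. The /api/score route and the element IDs are illustrative assumptions of mine; see the sample application’s source for the actual route and markup.

// A sketch of the browser-side call made when the user clicks submit.
// The /api/score route and the element IDs are illustrative assumptions;
// the sample application defines its own route and markup.
function onSubmit() {
  const squareFeet = Number(document.getElementById('squareFeet').value);
  const bedrooms = Number(document.getElementById('bedrooms').value);

  fetch('/api/score', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ squareFeet, bedrooms })
  })
    .then(res => res.json())
    .then(result => {
      // Show the predicted price returned by the Node.js backend.
      document.getElementById('prediction').textContent = result.prediction;
    })
    .catch(err => console.error(err));
}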

Watson ML scoring endpoints

Watson ML makes it easy to make real-time predictions against your deployed models through scoring endpoints. Scoring endpoints are RESTful APIs that can be called from any programming language.

No matter what programming language you use, the flow for calling your scoring endpoints is as follows:

1. Request an access token from Watson ML using your Watson ML username and password.
2. Build your payload using the features that you defined when building your model (i.e., “SquareFeet” and “Bedrooms”) and the values you want to use for the prediction. Here is an example payload expected by the Watson ML scoring endpoint:
{
  "fields": ["SquareFeet", "Bedrooms"],
  "values": [[squareFeet, numBedrooms]]
}

3. POST the payload to your Watson ML model deployment, passing the access token you retrieved earlier in the Authorization header:

Authorization: Bearer TOKEN_GOES_HERE

4. Finally, parse the response from Watson ML. Here is a sample response:

{
  "fields": ["SquareFeet", "Bedrooms", "features", "prediction"],
  "values": [[2400, 4, [2400.0, 4.0], 137499.99999999968]]
}
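
To make the flow concrete, here is a minimal sketch of those four steps in plain Node.js using only the built-in https module. The token endpoint path (/v3/identity/token) and the scoring URL are assumptions on my part; copy the actual values from your Watson ML service credentials and from your deployment details.

// A minimal sketch of the four steps above, using only the built-in https module.
// The token endpoint path and the scoring URL below are assumptions; use the
// values from your own Watson ML service credentials and deployment details.
const https = require('https');

const WML_SERVICE_PATH = process.env.WML_SERVICE_PATH; // e.g. https://ibm-watson-ml.mybluemix.net
const WML_USERNAME = process.env.WML_USERNAME;
const WML_PASSWORD = process.env.WML_PASSWORD;
const SCORING_URL = process.env.WML_SCORING_URL; // copied from your deployment details

// Small promise wrapper around https.request that resolves with parsed JSON.
function requestJSON(url, options, body) {
  return new Promise((resolve, reject) => {
    const req = https.request(url, options, res => {
      let data = '';
      res.on('data', chunk => (data += chunk));
      res.on('end', () => {
        try { resolve(JSON.parse(data)); } catch (err) { reject(err); }
      });
    });
    req.on('error', reject);
    if (body) req.write(body);
    req.end();
  });
}

async function predictPrice(squareFeet, numBedrooms) {
  // 1. Request an access token using your Watson ML username and password.
  const auth = Buffer.from(`${WML_USERNAME}:${WML_PASSWORD}`).toString('base64');
  const tokenResponse = await requestJSON(`${WML_SERVICE_PATH}/v3/identity/token`, {
    method: 'GET',
    headers: { Authorization: `Basic ${auth}` }
  });

  // 2. Build the payload from the features the model was trained on.
  const payload = JSON.stringify({
    fields: ['SquareFeet', 'Bedrooms'],
    values: [[squareFeet, numBedrooms]]
  });

  // 3. POST the payload to the scoring endpoint with the bearer token.
  const scoringResponse = await requestJSON(SCORING_URL, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${tokenResponse.token}`,
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(payload)
    }
  }, payload);

  // 4. Parse the response; the last element of the first row is the prediction.
  const row = scoringResponse.values[0];
  return row[row.length - 1];
}

predictPrice(2400, 4)
  .then(price => console.log(price)) // e.g. 137499.99999999968
  .catch(err => console.error(err));

In practice you would also cache the access token until it expires rather than requesting a new one for every prediction.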

The Watson ML user interface also provides sample source code for calling your scoring endpoints from a few different programming languages.

Simplified scoring with Node.js

While working on the sample application I built a Node.js helper library that makes it even easier to call your scoring endpoints. You can see the helper library in action in the sample application, or integrate it with your own Node.js application.

Start by installing the library in your Node.js project:

npm install watson-ml-model-utils

Define your environment variables in a .env file:

WML_SERVICE_PATH=https://ibm-watson-ml.mybluemix.net
WML_USERNAME=
WML_PASSWORD=
WML_INSTANCE_ID=
WML_MODEL_ID=
WML_DEPLOYMENT_ID=

The GitHub repo README for the sample application shows you where you can find the values for your environment variables.
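
If your own application does not already load .env files, one common approach is the dotenv package; this is an assumption on my part, so check the helper library’s README for how it expects these variables to be provided.

// Hypothetical setup: load the .env file into process.env at startup.
require('dotenv').config();

console.log(process.env.WML_SERVICE_PATH); // https://ibm-watson-ml.mybluemix.net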

Import the WatsonMLScoringEndpoint class:

const { WatsonMLScoringEndpoint } = require('watson-ml-model-utils');

Create an instance of WatsonMLScoringEndpoint with the features used to train your model:

let endpoint = new WatsonMLScoringEndpoint(['SquareFeet', 'Bedrooms']);

Call the score method on the endpoint to make a prediction:

endpoint.score([2400, 4])
  .then(response => console.log(response.prediction))
  .catch(err => console.log(err));

The score method returns a promise that, when resolved, contains a response with the prediction returned from Watson ML:

137499.99999999968

You can also make multiple predictions in a single call to the scoring endpoint by calling the scoreMulti method:

endpoint.scoreMulti([[2400, 4], [2000, 3], [2600, 6]])
  .then(response => console.log(response.predictions))
  .catch(err => console.log(err));

In this case an array is returned with the three predictions:

[ 137499.99999999968, 87500.00000000276, 162500.00000000527 ]

If you need the full response from Watson ML, you can access it in the data property of the response.
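
For example, to log the parsed prediction alongside the raw Watson ML response:

// Inspect the full Watson ML response via the data property.
endpoint.score([2400, 4])
  .then(response => {
    console.log(response.prediction);                    // 137499.99999999968
    console.log(JSON.stringify(response.data, null, 2)); // raw Watson ML response
  })
  .catch(err => console.log(err));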

Try it yourself!

The Node.js helper library works with any model deployed in Watson ML. Simply pass in your features and the values you want to use for your prediction, and you are good to go.
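
For example, here is a sketch of scoring some other deployed model; the feature names and values below are purely illustrative:

// Purely illustrative feature names and values for another deployed model.
const { WatsonMLScoringEndpoint } = require('watson-ml-model-utils');

const irisEndpoint = new WatsonMLScoringEndpoint(
  ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
);

irisEndpoint.score([5.1, 3.5, 1.4, 0.2])
  .then(response => console.log(response.prediction))
  .catch(err => console.log(err));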

If you followed Parts 1 and 2 and deployed the sample Node.js application, then you are well on your way to building end-to-end machine learning systems and making real-time predictions from your end-user applications. In future articles we’ll dig into other machine learning problems, deployment considerations, and much more. Stay tuned!
