Add Machine Learning to Parse Platform through Oracle Autonomous DB

Corrado De Bari
6 min read · Nov 21, 2022


A modern MBaaS platform must offer advanced analytics and machine learning capabilities to mobile applications in an easy way, like any other API that developers can use during development and runtime execution. One such MBaaS, the Parse Platform, currently doesn’t offer integrated machine learning APIs.

In this paper I’ll show you how easy it is to use a machine learning model, exposed as a microservice by Oracle Autonomous DB, in a Parse server, in order to run classifications/regressions on documents as they are created or updated.

The Oracle DB allows you to create and expose machine learning models, as you can read here. These models can be applied not only to data stored in the DB: they can also be managed and exposed as microservices and used to make inferences on data sent in the request body. This can be done with an on-premises DBMS or in the cloud with Oracle Autonomous DB, which for simplicity we will refer to as ADB in this paper.

It isn’t my intention to explain how to create an ML model. I’ve covered that in the past in my articles posted on the Oracle AI & Data Science Blog, but you can also find more recent labs on Oracle Live Labs, like “Get started with Oracle Machine Learning Fundamentals on Oracle Autonomous Database”.

I’ll first show you how to get the URL references and the required fields we need to make a REST call to the model.

In the Autonomous DB console:

Autonomous DB — instance console

select the Database Actions menu. On the resulting page:

Autonomous DB — Database Actions page

click on the “ORACLE MACHINE LEARNING RESTFUL SERVICES” menu and copy the DB_HOSTNAME from the URL under “ALL OML Service REST APIs…”.

On the same page, we have the link to access the OML workspace, where we’ll get the rest of the information related to the ML model.

Link to ML Workspace

In the OML workspace, we can finally select the model we will call as a REST service, already deployed by the data scientist; in our example, the “NN_CLAS_MODEL”:

Open API Specification

Store the “path” link (“/affinity/score” in the snapshot): we will use it as the ML_MODEL part of the URL to set the mlFunctionUrl in the configuration file dbconfig.json:

{
  "authUrl": "https://[DB_HOSTNAME]/omlusers/api/oauth2/v1/token",
  "password": "*********",
  "username": "[DB_USER]",
  "mlFunctionUrl": "https://[DB_HOSTNAME]/omlmod/v1/deployment/[ML_MODEL]"
}

The DB_HOSTNAME is the one we obtained previously.

From the Parse platform point of view, we will use Cloud Functions, the Cloud Code built into the Parse Server that can process documents stored in the platform. We will use one of the Save Trigger APIs, the Parse.Cloud.beforeSave() function, which allows developers to register an action on a specific document collection that will be executed before each instance of a Parse Object is saved.

In this Cloud Code we will call the machine learning model exposed by the Autonomous DB, sending the content of a Parse Object before saving it. The result will be saved together with the object instance in a new field.

Let’s look at how to arrange the three modules we need on the MBaaS side:

  • express.js: Parse server startup Node.js script;
  • dbconfig.json: config file with URL and credentials;
  • main.js: Cloud Function script;

express.js

It’s a way to start a Parse server with a Node.js script in which we can configure the startup parameters. In our case we focus on the reference to the Cloud Code, as shown in the following Express-based example:

var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var app = express();

var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev', // Connection string for your MongoDB database
  cloud: './cloud/main.js', // Path to your Cloud Code
  appId: 'APPLICATION_ID',
  masterKey: 'MASTER_KEY', // Keep this key secret!
  restApiKey: 'REST_API_KEY',
  fileKey: 'optionalFileKey',
  serverURL: 'http://localhost:1337/parse' // Don't forget to change to https if needed
});

// Serve the Parse API on the /parse URL prefix
app.use('/parse', api);

app.listen(1337, function() {
  console.log('parse-server-example running on port 1337.');
});

where you can see the property cloud: ‘./cloud/main.js’, which refers to the location and name of the main Cloud Function script.

main.js

Let’s analyze in detail the example code that exploits an ML model on Autonomous DB to update a new document with an inference result during its save phase:

main.js

The beforeSave() function intercepts the save event for a new or existing object, in our case in the “Customers” collection:

Parse.Cloud.beforeSave("Customers", (request) => {..

You could choose to update the score whenever any document field is modified, since that could alter the previous result of a classification/regression. I chose to check whether the score field already exists before proceeding with the call, so that classification happens only when the document is created.

Parse.Cloud.httpRequest(tokenRequest).then(function(httpResponse) {..

The Parse.Cloud.httpRequest() Parse API lets us make the first of the two requests we must send to the OML REST service: this one obtains an authentication access token, valid for 3600 seconds, to be used in the actual microservice call. To keep the code simple I’ve left it out, but the best practice is to check whether the token has expired, based on the elapsed time, before requesting a new access token.
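The token caching just mentioned can be sketched as follows. This is a minimal illustration, not part of the original code; the accessToken and expiresIn field names are assumptions about the token payload shape, to be adapted to what the OML endpoint actually returns:

```javascript
// Minimal token-cache sketch. The response field names (accessToken,
// expiresIn) are assumed; adapt them to the real OML token payload.
let cachedToken = null;
let tokenExpiry = 0;

function getCachedToken(fetchToken) {
  const now = Date.now();
  // Reuse the token while it is still valid, with a 60s safety margin.
  if (cachedToken !== null && now < tokenExpiry - 60000) {
    return Promise.resolve(cachedToken);
  }
  return fetchToken().then((res) => {
    cachedToken = res.accessToken;
    tokenExpiry = now + res.expiresIn * 1000;
    return cachedToken;
  });
}
```

fetchToken here stands for whatever function performs the real HTTP call, so the cache logic stays independent from Parse.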

There is a specific url to get the access token:

https://[DB_HOSTNAME]/omlusers/api/oauth2/v1/token

where DB_HOSTNAME is the only part that changes, and we have already obtained it from the Database Actions console.
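As a sketch, the token request object can be built from the dbconfig.json values like this; the JSON body follows the OAuth2 password grant (grant_type/username/password) accepted by the OML token endpoint:

```javascript
// Sketch: build the OML token request from the dbconfig.json values.
// The body follows the OAuth2 password grant.
function buildTokenRequest(config) {
  return {
    method: 'POST',
    url: config.authUrl,
    headers: { 'Content-Type': 'application/json' },
    body: {
      grant_type: 'password',
      username: config.username,
      password: config.password
    }
  };
}
```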

The second request, which uses the access token, needs to pass all the object fields except CUST_ID, which in our case is the document key and is not involved in the prediction model:

let obj = request.object.toJSON(); 
delete obj.CUST_ID;

In this case the objects in the Customers collection have the same fields on which the model was trained. We are not always in such a favourable condition: the delete step must be applied to every field not included in the ML model training. But how can we find out exactly which fields the model needs? It’s pretty straightforward. In the OML workspace, after selecting the model as we did previously to get the URL, click on the “NN_CLAS_MODEL” link, as shown in the following snapshot:

Model Metadata

This gives you access to the list of attributes, with their types, that must be passed to the model.
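Instead of deleting the extra fields one by one, a hypothetical helper (not part of the original code) could project the document onto the attribute list taken from the model metadata:

```javascript
// Hypothetical helper: keep only the attributes the model was trained
// on, using the attribute list from the model metadata page.
function projectToModelFields(doc, modelFields) {
  const projected = {};
  for (const field of modelFields) {
    if (field in doc) {
      projected[field] = doc[field];
    }
  }
  return projected;
}
```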

In the request payload of the “options” variable we specify the list of records to be scored, because the model can score a set of records. In our case it will be just one, “obj”, intercepted before being saved:

...
body: {
  inputRecords: [ obj ],
  topN: 1,
  topNdetails: 1
}
...

The model returns not only the classification class (label), but also the membership probabilities:

console.log(result.scoringResults[0].classifications[0].label); 
console.log(result.scoringResults[0].classifications[0].probability);
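Putting the pieces above together, the two-step OML call can be sketched as a single function. This is an illustrative assembly, not the article’s original main.js: httpRequest is injected (in Cloud Code you would pass Parse.Cloud.httpRequest) so the flow can be exercised without a running server, and the accessToken field name in the token response is an assumption:

```javascript
// Sketch: token request, then scoring request, returning the top
// classification. httpRequest is injected for testability; in main.js
// pass (r) => Parse.Cloud.httpRequest(r).
function scoreObject(httpRequest, config, obj) {
  const tokenRequest = {
    method: 'POST',
    url: config.authUrl,
    headers: { 'Content-Type': 'application/json' },
    body: {
      grant_type: 'password',
      username: config.username,
      password: config.password
    }
  };
  return httpRequest(tokenRequest)
    .then((tokenResponse) => httpRequest({
      method: 'POST',
      url: config.mlFunctionUrl,
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer ' + tokenResponse.data.accessToken
      },
      body: {
        inputRecords: [obj],
        topN: 1,
        topNdetails: 1
      }
    }))
    .then((scoreResponse) => {
      const top = scoreResponse.data.scoringResults[0].classifications[0];
      return { label: top.label, probability: top.probability };
    });
}
```

With this helper, the beforeSave trigger reduces to: read request.object.toJSON(), return early if obj.score already exists, delete obj.CUST_ID, then call scoreObject and set the returned label on the object in a “score” field.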

dbconfig.json

This is a way to keep credentials and sensitive information separate from the main.js code. Another option is to use environment variables passed to the Docker container in which the Parse server will run in production.

It’s time to check that everything works. After starting the Parse server with the following command:

node express.js

we can create a new object in the Customers class through a REST call like this:
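A hedged example of such a call, assuming the server configured in express.js above; the attribute names and values in the body are placeholders, as the real ones come from the model metadata:

```shell
# Create a Customers object; attribute names/values are placeholders.
curl -X POST \
  -H "X-Parse-Application-Id: APPLICATION_ID" \
  -H "X-Parse-REST-API-Key: REST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"CUST_ID": 101, "CUST_GENDER": "M", "AGE": 41}' \
  http://localhost:1337/parse/classes/Customers
```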

and then we will check whether a “score” field has been added, containing the class predicted by the classification model:
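For example, fetching the object back (a sketch; OBJECT_ID is the objectId returned by the create call):

```shell
# Retrieve the object just created and look for the new "score" field
# set by the beforeSave trigger.
curl -H "X-Parse-Application-Id: APPLICATION_ID" \
     -H "X-Parse-REST-API-Key: REST_API_KEY" \
     http://localhost:1337/parse/classes/Customers/OBJECT_ID
```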

In this paper we have seen how to extend the Parse Platform with machine learning capabilities by leveraging Oracle Machine Learning, available in the Oracle DB. Next, we will explore smart ways to feed the Oracle DB with incoming Parse Objects created by mobile clients, to use them as dataset elements for machine learning model training.

Disclaimer

The views expressed in this paper are my own and do not necessarily reflect the views of Oracle.
