How to distribute your Keras Model utilizing tensorflow.js

by Josva Engmose Jensen (josva.engmose@me-ta.dk)

This is a follow-up to my other story about AI implemented in Clinicaltrials.gov.

We have been working with Tensorflow and Keras, two high-level open source machine learning libraries. We now know what we can do with Keras models, but how do we benefit from this? We serve our model to a web application! The final product will be a web application where you can write some text in a textbox (in this case two textboxes) and, within a few milliseconds, get a prediction from your text.

Since my last story I have been working on the preprocessing part. I have made it much simpler and more compact, and I get the same results as with my old code. These are the modules you will need for the new, simpler preprocessing:

import nltk
from nltk.stem import WordNetLemmatizer #base form conversion
import string
from string import digits
import re #regex
from tqdm import tqdm #progress bar
import json
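Note that WordNetLemmatizer relies on the WordNet corpus, which is not bundled with NLTK by default. If you have not used it before, a one-time download is needed:

import nltk
nltk.download('wordnet')  # One-time download of the corpus used by WordNetLemmatizer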

As you can see, I no longer use modules like pandas and Tensorflow. These two libraries (especially Tensorflow) consume a lot of memory, so we will surely benefit from dropping them. Here is my code for the new preprocessing:

def text_cleaner(original_text):
    cleaned_text = original_text.translate(str.maketrans(' ', ' ', string.punctuation))  # Remove punctuation
    cleaned_text = cleaned_text.translate(str.maketrans(' ', ' ', '\n'))  # Remove newlines
    cleaned_text = cleaned_text.translate(str.maketrans(' ', ' ', digits))  # Remove digits
    cleaned_text = cleaned_text.lower()  # Convert to lowercase
    cleaned_text = cleaned_text.split()  # Split each sentence using delimiter
    lemmatizer = WordNetLemmatizer()
    lemmatized_list = []
    for y in cleaned_text:  # Looks at every word in list
        z = lemmatizer.lemmatize(y)       # Lemmatize as noun
        z = lemmatizer.lemmatize(z, 'v')  # Lemmatize as verb
        lemmatized_list.append(z)
    return lemmatized_list
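To illustrate what the function does, here is a hypothetical call (the exact output can vary with your WordNet version):

text_cleaner("The patients were randomized into 2 groups.\n")
# e.g. ['the', 'patient', 'be', 'randomize', 'into', 'group']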

This function takes raw text as input and returns a list whose elements are cleaned words. Of course, this is not enough; we need to convert the words into integers. To avoid the use of Tensorflow we need to save the two dictionaries from text.Tokenizer() in our old code (below called sum_dictionary and tit_dictionary, one word_index per fitted tokenizer), which we can later load back in. The code for saving the dictionaries as .json files is given below:

with open('sum_dictionary.json', 'w') as dictionary_file:
    json.dump(sum_dictionary, dictionary_file)  # Dictionary for the summary text

with open('tit_dictionary.json', 'w') as dictionary_file:
    json.dump(tit_dictionary, dictionary_file)  # Dictionary for the title text

These two dictionaries are now saved and need to be in your project folder.

We now create two functions which load the two dictionaries, map the words from the text input to integers, and pad the sequences to a length of 300, which is the correct input shape for our model.

def sumword_to_integer(word_list):
    with open('sum_dictionary.json', 'r') as dictionary_file:
        dictionary_sum = json.load(dictionary_file)  # Loads dict
    MAXLEN = 300
    tokenized_sum = [0] * MAXLEN  # List with 0's of length 300
    input_list = text_cleaner(word_list)
    word_to_int = [dictionary_sum[word] for word in input_list if word in dictionary_sum]
    tokenized_sum = [0] * (MAXLEN - len(word_to_int)) + word_to_int  # Pre-pad with zeros up to MAXLEN
    return [tokenized_sum]

def titword_to_integer(word_list):
    with open('tit_dictionary.json', 'r') as dictionary_file:
        dictionary_tit = json.load(dictionary_file)
    MAXLEN = 300
    tokenized_tit = [0] * MAXLEN
    input_list = text_cleaner(word_list)
    word_to_int = [dictionary_tit[word] for word in input_list if word in dictionary_tit]
    tokenized_tit = [0] * (MAXLEN - len(word_to_int)) + word_to_int
    return [tokenized_tit]
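As a quick sanity check, the returned value has shape (1, 300) for inputs of at most 300 known words, zero-padded on the left, the same 'pre' padding that Keras' pad_sequences applies by default:

seq = sumword_to_integer("Some hypothetical summary text")
print(len(seq), len(seq[0]))  # 1 300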

There is only one last step before we can feed our input to our model, and we now create a final function for this:

def ready_for_model(Title, Summary):
    X_pred_sum = sumword_to_integer(Summary)
    X_pred_tit = titword_to_integer(Title)
    return X_pred_sum, X_pred_tit

The input is now ready for our model, but before we can make predictions we need to load the model itself. We use a lot of AWS services, one of them being AWS Lambda. We have used it extensively before, but unfortunately it has its limitations. Loading our model requires the Tensorflow library, and when we zip our Python script together with its dependencies we exceed the AWS limits of a 50 MB zipped and 250 MB unzipped deployment package. This means we need a way around Tensorflow. While searching for one I came across tensorflow.js, a JavaScript library for training and deploying ML models in the browser. It does not consume a lot of memory the way Tensorflow does, and it has seen increasing adoption among developers over the past few years.

This means that our preprocessing code made in Python needs to be converted into a .js script, so we can use tensorflow.js and load our model into a web application.

I will not put much effort into explaining my JavaScript code, as it does essentially the same as the Python script. The .js code for the preprocessing looks like this:

function text_cleaner(original_text){
  var cleaned_text = original_text.replace(/[.,\/#!$%\^&\*;:{}=\-_`~()]/g, ""); // Remove punctuation
  cleaned_text = cleaned_text.replace(/\s{2,}/g, " ");           // Collapse multiple spaces
  cleaned_text = cleaned_text.toLowerCase();                     // Convert to lowercase
  cleaned_text = cleaned_text.replace(/(\r\n\t|\n|\r\t)/gm, ""); // Remove newlines
  cleaned_text = cleaned_text.replace(/\d/g, "");                // Remove digits
  cleaned_text = cleaned_text.split(" ");                        // Split each sentence using delimiter
  var lemmatized_list = [];
  let z;
  for (let y in cleaned_text){     // Looks at every word in list
    if (cleaned_text[y] !== ""){   // Skips every blank
      z = lemmatizer(cleaned_text[y]);
      lemmatized_list.push(z);
    }
  }
  return lemmatized_list;
}
function sumword_to_int(word_list){
  const MAXLEN = 300;
  var tokenized_sum = Array(MAXLEN).fill(0);
  var input_list = text_cleaner(word_list);
  var word_to_int = [];
  for (let word in input_list){
    if (dict_s[input_list[word]]){
      word_to_int.push(dict_s[input_list[word]]);
    }
  }
  tokenized_sum.length = tokenized_sum.length - word_to_int.length; // Truncate the zero list to (300 - number of words)
  const sum_result = tokenized_sum.concat(word_to_int);             // Zeros first, then the word indices
  return sum_result;
}

function titword_to_int(word_list){
  const MAXLEN = 300;
  var tokenized_tit = Array(MAXLEN).fill(0);
  var input_list = text_cleaner(word_list);
  var word_to_int = [];
  for (let word in input_list){
    if (dict_t[input_list[word]]){
      word_to_int.push(dict_t[input_list[word]]);
    }
  }
  tokenized_tit.length = tokenized_tit.length - word_to_int.length;
  const tit_result = tokenized_tit.concat(word_to_int);
  return tit_result;
}

As with our Python code, we create a final function that makes our text ready for the model:

function ready_for_model(Title, Summary){
  var X_pred_sum = [sumword_to_int(Summary)];
  var X_pred_tit = [titword_to_int(Title)];
  return [X_pred_sum, X_pred_tit];
}

Now that we have the preprocessing in place in the right language, we need to look at how to import a Keras model into tensorflow.js.

Importing a Keras model into tensorflow.js is divided into two steps. First, we need to convert the model to the TF.js Layers format, and then we load it into TensorFlow.js.

Step 1: Convert Model to TF.js Layers format

We can use the model.save() function to save our model, typically in the .h5 (HDF5) format. This saves the model structure, layers and weights in one file. (https://js.tensorflow.org/tutorials/import-keras.html)
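For reference, a minimal sketch of the save step, assuming model is your compiled and trained Keras model (you also need the converter itself, installed with pip install tensorflowjs):

model.save('my_model.h5')  # Saves architecture, weights and training config in one HDF5 file

To convert the model, we then run the following in the command prompt: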

# Anaconda Prompt
tensorflowjs_converter --input_format keras my_model.h5 .

After my_model.h5 you should specify the path to your target directory; in our case we just place the output in the same path, hence the ".". You should now see some new files in your directory: a model.json file plus some files containing the weights. How many weight files there are depends on how big your model is and how many layers it has. In our case two files were created, "group1-shard1of2" and "group1-shard2of2".

Step 2: Load the model into Tensorflow.js

I followed this tutorial https://js.tensorflow.org/tutorials/import-keras.html but the code in their step 2 would never work in this setup.

Instead I found a Webpack Frontend Starter kit and cloned it (https://github.com/wbkd/webpack-starter). The first thing you need to do is run:

npm install

This installs a lot of packages, and you will see a new folder in your project called "node_modules", as well as a package.json and a package-lock.json file.

All the preprocessing code we have written goes into the index.js file, and at the top of the file I have included the following:

import '../styles/index.scss';
import dict_s from '../scripts/sum_dict';
import dict_t from '../scripts/tit_dict';
import {lemmatizer} from "lemmatizer";
import * as tf from '@tensorflow/tfjs';
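One small detail: the model variable we assign later must live at the top level of index.js, so that every function in the file can see it:

let model; // Will hold the loaded Keras model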

sum_dict and tit_dict are the two dictionaries I created from my training data; I placed them in the same folder as the index.js file.
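If you are wondering what these dictionary modules look like, each one is essentially the saved .json dictionary wrapped as a module with a default export. A hypothetical excerpt of sum_dict.js (the real word/index pairs come from your tokenizer's word_index):

// scripts/sum_dict.js -- hypothetical excerpt
export default {
  "the": 1,
  "study": 2,
  "patient": 3
  // ... one entry per word in the training vocabulary
};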

The next thing we need to do is load our model, and this is done in an async function in JavaScript:

async function loadingModel() {
  return new Promise(
    (resolve, reject) => {
      tf.loadModel('./public/model.json')
        .then(function (res) {
          resolve(res);
        })
        .catch(function (error) {
          reject(error);
        });
    });
}
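Strictly speaking, tf.loadModel() already returns a promise, so the wrapper could be reduced to a one-liner; the explicit Promise mainly makes the resolve/reject flow easy to see:

// Equivalent shorthand
async function loadingModel() {
  return tf.loadModel('./public/model.json');
}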

We must think about what our final product should be. The smartest approach is to load the model once, when the web application starts, and only then start making predictions; that way we don't have to load the model every time we want a prediction. The following code does the trick, but there is a bit more to it.

function readyDom() {
  (async () => {
    model = await loadingModel(); // Fills the top-level 'model' variable
    document.getElementById('app').classList.add('initialized');
  })();
  document.getElementById("button").addEventListener("click", function(){
    compute_data();
  });
}

As you can see from my code, I refer to an element "button": in my index.html I have created a button and given it id="button". I will provide a screenshot of the final web application at the end of this post, so you can see the button. The following line should be included as the last thing in your index.js file; it listens for the DOMContentLoaded event and runs readyDom, which in turn registers the click event on the button.

document.addEventListener('DOMContentLoaded', readyDom, false);

The way we want our website to work is that you put a summary text and a title into two different textboxes. Next you click a button, which we named "Get Predictions", and the click then shows you the probabilities (predictions) in a table. We need to generate this table and put the predictions inside it.

function generate_table(MyArray){
  var table = document.getElementById("table");
  for (var i = 0; i < 4; i++){ // One cell per intervention model
    table.rows[1].cells[i].innerHTML = Math.round(MyArray[i] * 100) + "%"; // This way we don't see any decimals
  }
}

function compute_data(){
  var sum_text = document.getElementById("summary").value;
  var tit_text = document.getElementById("title").value;
  var result = ready_for_model(tit_text, sum_text); // ready_for_model expects (Title, Summary)
  var sum_input = tf.tensor(result[0]);             // result[0] holds the summary sequence
  var tit_input = tf.tensor(result[1]);             // result[1] holds the title sequence
  var Predictions = model.predict([sum_input, tit_input]);
  const values = Predictions.dataSync();
  const arr = Array.from(values);
  generate_table(arr);
}

We note that our result is an array of arrays; to get the array representing the summary text, we just take the 0th element of result. What differs from Python is that we can't feed our model these arrays directly, because they are regular JS arrays. A Keras model only takes tensors as input, so we convert the JS arrays to tensors using tf.tensor().
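One thing worth knowing is that tensors allocate WebGL memory that is not garbage-collected automatically. If you expect many predictions per session, you may want to free them once the values have been read out; a minimal sketch:

// Optional clean-up at the end of compute_data()
sum_input.dispose();
tit_input.dispose();
Predictions.dispose();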

I will now show you how I have styled the table, textboxes and button. You can play around with this yourself and there are tons of different ways you can style and design these things. This is all done in our index.scss file, and it will look like this:

$body-color: black;

body {
  color: $body-color;
}

table {
  border-collapse: collapse;
  width: 50%;
}

th, td {
  text-align: left;
  padding: 8px;
}

tr:nth-child(even) {
  background-color: #f2f2f2;
}

th {
  background-color: #4CAF50;
  color: white;
}

.button {
  background-color: #4CAF50; /* Green */
  border: none;
  color: white;
  padding: 15px 32px;
  text-align: center;
  text-decoration: none;
  display: inline-block;
  font-size: 16px;
  margin: 4px 2px;
  cursor: pointer;
}

input[type=text], select {
  width: 50%;
  padding: 12px 20px;
  margin: 8px 0;
  display: inline-block;
  border: 1px solid #ccc;
  border-radius: 4px;
  box-sizing: border-box;
}

Now we have our index.js and index.scss in place, but of course we also need to do something with our index.html file. In it we specify the two text input boxes, our "Get Predictions" button, and our table, where the four types of intervention models are already given as headers. Our html file looks like this:

<html lang="en">
<head>
</head>
<body>
  <h1>Put your text here:</h1>
  Summary:<br>
  <input type="text" id="summary" value="">
  <br>
  Title:<br>
  <input type="text" id="title" value="">
  <br><br>
  <button id="button">Get Predictions</button>
  <h1>Your Predictions:</h1>
  <table id="table" border="1">
    <tr>
      <th style="text-align:center">Crossover Assignment</th>
      <th style="text-align:center">Other Assignment</th>
      <th style="text-align:center">Parallel Assignment</th>
      <th style="text-align:center">Single Group Assignment</th>
    </tr>
    <tr>
      <td style="text-align:center"></td>
      <td style="text-align:center"></td>
      <td style="text-align:center"></td>
      <td style="text-align:center"></td>
    </tr>
  </table>
</body>
</html>

Now everything is ready. To run your application on localhost, just navigate to your project path in an Anaconda prompt and type:

npm start
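Unless you have changed the configuration, the webpack dev server will typically serve the application at http://localhost:8080.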

Here is a screenshot of our final product, where I have filled the textboxes with a summary and title text from a random study on clinicaltrials.gov and clicked on the button to get my predictions:

Now we can see that it works locally, but that is not very interesting; we want to make it accessible to others as well. We therefore run the following command:

npm run-script build

This command will create a new build folder, which you will notice does not take up much space. This folder contains everything you need to distribute your Keras model on the platform of your choice, e.g. Amazon S3 or Microsoft IIS. Your build folder should have a structure like this:

Files inside the build folder
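If you go the Amazon S3 route, uploading the build folder can be as simple as the following, assuming the AWS CLI is configured and my-bucket is a hypothetical bucket with static website hosting enabled:

aws s3 sync ./build s3://my-bucket --acl public-read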

That was it! Now you are ready to distribute your own Keras model! Try it out on our website; just scroll down to the bottom of the page.

If you want to go back to Part 1, then click here!

I hope you enjoyed my post, thanks for reading!

Written by Josva Engmose Jensen, ME-TA

ME-TA is the innovative CRO that insists on adding value through innovation.
