Deploying a Keras model on GCP and making custom predictions via the AI Platform Training & Prediction API

Vincentweimer
Dec 19, 2019 · 4 min read

This tutorial shows how to train a Keras model locally in Colab, deploy that model to the Google Cloud Platform (GCP), and serve predictions from it.

The first part walks through training a model with the Sequential API locally in Google Colab. The second part guides you through the steps needed to deploy the model to GCP and get predictions from it using the AI Platform Training & Prediction API.

Dataset

This tutorial uses the Boston housing price regression dataset, which ships with Keras. It contains information collected by the U.S. Census Service concerning housing in the area of Boston, Massachusetts.

Preparing the dataset

Within the Colab notebook, the necessary libraries and the dataset need to be imported, and the input data will be scaled between 0 and 1 so the features share a common scale.

### Import Python Libraries ###
# Import Scientific libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Import MinMax Scaler
from sklearn.preprocessing import MinMaxScaler
# Import Keras Libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten

Next, the dataset needs to be imported.

# Import Housing Dataset
from tensorflow.keras.datasets import boston_housing
(X_train, y_train), (X_test, y_test) = boston_housing.load_data()

After the dataset has been imported, the input data will be scaled between 0 and 1 to have a common scale.

# Normalize Input Data
scaler = MinMaxScaler(feature_range = (0,1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

Building and compiling the model

After the data has been preprocessed, a model can be built in Keras using the Sequential API to predict housing prices. Since the goal of this tutorial is to show how to deploy the model to GCP, a simple model will be built, compiled, and trained locally. Then local predictions will be made before the tutorial moves on to the second part, which explains in detail how to deploy your model to GCP.

### Build and Compile model ###
# Build model
model = keras.models.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(13,)),
    keras.layers.Dense(4, activation='relu'),
    keras.layers.Dense(1)])

# Compile model
model.compile(loss='mean_squared_error', optimizer='adam')

# Fit model for training
h = model.fit(X_train, y_train, epochs=50, batch_size=16,
              validation_split=0.1, verbose=0)
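
As a quick sanity check before moving on (not part of the original post), the trained model's mean squared error on the held-out test set can be inspected, for example:

# Optional check: mean squared error on the test set
test_mse = model.evaluate(X_test, y_test, verbose=0)
print("Test MSE:", test_mse)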

After the model has been trained locally, predictions for the first 5 data points of X_test will be made.

X_new = X_test[:5]
y_pred = model.predict(X_new)
# Print predictions
print(y_pred)
print(y_test[:5])

Predictions:
[[11.743999]
[19.949255]
[22.95396 ]
[24.749311]
[22.283283]]

Measured values:
[ 7.2 18.8 19. 27. 22.2]

Part 2: Saving and deploying the model to GCP

In this part, the steps to deploy the model to GCP will be explained in detail. I’ve found tutorials online on how to train models in GCP and make predictions, but haven’t come across one that shows how to save and deploy your Keras model from a Jupyter or Colab notebook. The steps to deploy the model are the following:

  1. Create Google account
  2. Create new project
  3. Create bucket
  4. Activate necessary API’s
  5. Install Google Cloud SDK to work with gcloud and gsutil
  6. Authenticate so that Colab can communicate with GCP
  7. Export saved model to a directory in GCP
  8. Create a model version
  9. Send prediction via .json file
  10. Get prediction back via the ML engine API

Create Google Account

After having trained your model locally, it needs to be deployed to GCP in order to make predictions via the API. To start with, an account needs to be set up in GCP. If you don’t have an account yet, you can set one up here. New users get $300 in free credits for one year.

Install Google Cloud SDK to work with gcloud and gsutil

If you want to work with the command line, you need to install the Cloud SDK. This is a set of command-line tools, including gsutil, gcloud, and bq, that lets you access Compute Engine, Cloud Storage, and other Google products. The documentation for installing the SDK can be found here.

Authenticate so that Colab can communicate with GCP

When you have compiled and trained your model locally, add the following code to your notebook so that Colab can access GCP.

import sys

if 'google.colab' in sys.modules:
  from google.colab import auth as google_auth
  google_auth.authenticate_user()
else:
  %env GOOGLE_APPLICATION_CREDENTIALS ''

Create new project and create bucket

After installing the SDK, you need to create a project and a bucket. A bucket is part of Google Cloud Storage, and it is where your model will be stored and served from. You can set up your project and bucket in two ways: via the command line or via the web UI. Via the command line it can be done as follows:

1. Give a name for the bucket:

BUCKET_NAME="your_bucket_name"

2. Select a region:

REGION=us-central1

3. Create the bucket. The bucket will be created with the variables that you have specified before:

gsutil mb -l $REGION gs://$BUCKET_NAME
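
The heading also mentions creating a project, which the commands above don’t do. Assuming you don’t already have a project, one could be created and set as the active project from the command line roughly like this (your-project-id is a placeholder):

# Create a new project (placeholder ID) and make it the default
gcloud projects create your-project-id
gcloud config set project your-project-id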

Export saved model to a directory in GCP

After the bucket and project have been created, you need to save your model and export it to a directory in GCP with the following code:

export_path = tf.contrib.saved_model.save_keras_model(model, JOB_DIR + '/your_directory')
print("Model exported to: ", export_path)
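
Note that JOB_DIR is not defined in the post itself; the assumption here is that it points at a Cloud Storage path inside the bucket created earlier, for example:

# Assumption: JOB_DIR points at a Cloud Storage path in the bucket created above
BUCKET_NAME = 'your_bucket_name'
JOB_DIR = 'gs://' + BUCKET_NAME + '/keras_job_dir'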

Create a model name and model version

Your deployed model in GCP needs to have a name and a version in order to make predictions on AI Platform. You can do this with the following code:

MODEL_NAME = "your_model_name"
MODEL_VERSION = "your_created_version"
! gcloud ai-platform versions create $MODEL_VERSION \
--model $MODEL_NAME \
--runtime-version 1.13 \
--python-version 3.5 \
--framework tensorflow \
--origin 'your_directory_in_GCP'
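
A version can only be created for a model resource that already exists on AI Platform, and --origin should point at the Cloud Storage directory that save_keras_model exported to (the export_path printed above). If the model resource doesn’t exist yet, it can be created first with a command such as the following (the region is an assumption):

! gcloud ai-platform models create $MODEL_NAME --regions us-central1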

Once this has been done, you can send input data via a .json file to GCP and receive predictions back via the API. You can make predictions with the following command:

! gcloud ai-platform predict \
--model $MODEL_NAME \
--version $MODEL_VERSION \
--json-instances 'your_file'.json
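
The contents of the .json file aren’t shown in the post. With --json-instances, each line of the file is one JSON-encoded instance; assuming the single-input model above, which takes 13 scaled feature values per instance, the file could be written from the Colab notebook roughly like this (instances.json is a placeholder name):

import json

# Write each scaled test row as one JSON list per line, the format
# expected by --json-instances (assumes a single-input model)
with open('instances.json', 'w') as f:
    for row in X_test[:5]:
        f.write(json.dumps(row.tolist()) + '\n')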

After that, the API should return predictions from the model you trained locally in Google Colab. I hope this tutorial was helpful and informative for first training a model locally in Colab and then deploying it to Google Cloud to get predictions back!
