
Building and Pushing Docker Images to Google Cloud Platform

Iva @ Tesla Institute
Published in Artificialis · 7 min read · Feb 3, 2023


Google Cloud Platform (GCP) is one of the primary options for cloud-based deployment of ML models, along with alternatives such as AWS and Microsoft Azure.

A container registry is a centralized place to store, manage, and distribute Docker images. By pushing an image to a registry, you can share it with others and easily deploy it to various environments.

We will cover the steps and provide insights into best practices to make this process seamless and efficient. Whether you are new to containers or an experienced developer, this guide will provide valuable information on running Docker images on the Google Cloud Platform.

This is the schema of our steps: first we build the Python app, create a Dockerfile, and push everything to a GitHub repo; then we create a GCP instance and push the image to the Container Registry. Afterwards, we pull the image on a new GCP instance.

As seen in this schema, our first step is to create a Python application. For this occasion, we will use the built-in Iris dataset from the Scikit-learn library; you can find the example code in the Scikit-learn documentation.

Simply copy and paste that code; the model itself is irrelevant here, since we are focusing on deployment rather than modeling.
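Since the article does not reproduce that code, here is a minimal sketch of the kind of model meant here. The classifier choice and the plot are assumptions, not the author's exact code: load Iris, fit a simple classifier, and plot the first two features coloured by the predictions.

```python
import matplotlib

matplotlib.use("Agg")  # render off-screen; no display needed on a server
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load the built-in Iris dataset: 150 samples, 4 features, 3 classes.
iris = load_iris()
X, y = iris.data, iris.target

# Fit a simple classifier; the model is not the point of this tutorial.
model = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = model.predict(X)

# Plot the first two features, coloured by the predicted class.
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.show()
```

Any model that ends in a matplotlib figure will work for the rest of the walkthrough.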

I assume you already have a GCP account; if not, you can easily create one and get $300 in free credits to spend on exploring the platform and running some experiments.

With that code pasted, let’s run it in a notebook in the Vertex AI section: click on Workbench and create a new notebook. We will add a few lines of code and import FastAPI.

The beginning of your code sample should look like this:

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/predict")
def get_y_predict():

The model code goes inside this function, indented one level. One correction to note: plt.show() returns None, so instead of returning it, save the figure to an in-memory buffer and return it wrapped in a StreamingResponse.

So we have our first step!

Before creating a Dockerfile, go ahead and create a GitHub repository from your local IDE.

Name it ‘first_app’, just to make sure we are on the same page.

A Dockerfile is a script that contains instructions for building a Docker image. It is used to automate the process of creating a Docker container, which is a standalone executable package that includes everything needed to run a specific application: the code, runtime, system tools, libraries, and settings. The Dockerfile is the blueprint of how our application should run. Additionally, Dockerfiles can be version controlled and used in CI/CD pipelines to automatically build and deploy containers.

FROM python:3.9.11

COPY requirements.txt .

RUN pip install -r requirements.txt

RUN mkdir -p app

COPY ./app app

EXPOSE 80

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]

This code is a Dockerfile that outlines the steps to build a Docker image for a Python application.

  1. “FROM python:3.9.11” specifies Python 3.9.11 as the base image for the Docker image.
  2. “COPY requirements.txt .” copies the “requirements.txt” file from the host system to the Docker image. This file contains the dependencies required for the application.
  3. “RUN pip install -r requirements.txt” runs the “pip install” command to install the dependencies listed in the “requirements.txt” file.
  4. “RUN mkdir -p app” creates a directory called “app” in the Docker image.
  5. “COPY ./app app” copies the “app” directory from the host system to the “app” directory in the Docker image. This directory contains the application code.
  6. “EXPOSE 80” declares that the Docker image will listen on port 80 for incoming connections.
  7. “CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]” specifies the command to run when a container is started from this image. “uvicorn” is a fast, asynchronous web server for Python, and “app.main:app” points at the app object in app/main.py. The “--host” and “--port” options specify the host IP and the port number to bind to.

In our requirements.txt file we list all the dependencies our app needs in order to run. Let’s go ahead and write down:

scikit-learn
uvicorn
fastapi
matplotlib

Before we push this to GitHub, we have to add one more file. At this point we want to configure a YAML file, cloudbuild.yaml, containing a few steps that tell the Cloud what should be done and in what order of execution:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/bert-fresh/first_app', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/bert-fresh/first_app']

This is a configuration file for Google Cloud Build. It outlines the steps to build and push a Docker image to Google Container Registry (GCR). Note that “bert-fresh” is the project ID used here; replace it with your own GCP project ID.

  1. “name: ‘gcr.io/cloud-builders/docker’” specifies the builder image to use for this step. “gcr.io/cloud-builders/docker” is the Google Cloud Build Docker builder image which includes the Docker CLI.
  2. “args: [‘build’, ‘-t’, ‘gcr.io/bert-fresh/first_app’, ‘.’]” specifies the arguments passed to the Docker CLI. The arguments “build”, “-t”, “gcr.io/bert-fresh/first_app”, and “.” tell Docker to build a new image using the current directory as the build context and to tag the image with the repository name “gcr.io/bert-fresh/first_app”.
  3. “args: [‘push’, ‘gcr.io/bert-fresh/first_app’]” specifies the arguments passed to the Docker CLI. The arguments “push” and “gcr.io/bert-fresh/first_app” tell Docker to push the image with the specified repository name to the registry.

The next step is to enable the Container Registry on GCP. That’s where the containerized image of our app will be pushed and stored, and from there it can be pulled and deployed elsewhere.

Now, commit everything to the GitHub repository. At this point it should contain the app directory with the application code, requirements.txt, the Dockerfile, and cloudbuild.yaml.

Once we have everything on GitHub, we go ahead and create a VM instance on GCP. Go to Compute Engine > VM instances and press + to create a new instance. Leave everything as it is, just change the ‘Series’ to N1, and for Machine type choose ‘n1-standard-4 (4 vCPUs, 15 GB memory)’. Change the Boot disk OS to Ubuntu, allow full access to all Cloud APIs, and in the Firewall section allow HTTP and HTTPS traffic. Click Create.

Next, we want to build the Docker image on this instance, so we need to install Docker on the VM.

Go to the Docker docs, pick Docker for Linux, and select Ubuntu; you will find the installation commands there.

Click SSH on the VM instance to connect, install Docker, and run:

sudo docker run hello-world 

just to make sure you installed Docker properly. Then, clone the GitHub repo in the same SSH window:

git clone (paste the link of your github repo here)

Enter the cloned directory: cd first_app

Now Cloud Build needs the YAML file, so make sure you run this command from the directory that contains it:

gcloud builds submit --config cloudbuild.yaml . 

We can see the image being built. Make sure you don’t have any errors in the YAML file. If you do, you’re going to have to fix it; commit and push again, then go back to the SSH session on the VM instance and remove the old clone:

sudo rm -r first_app

This is just in case you have errors in the YAML file: repeat the process until you have the desired output, i.e. until the image is pushed to GCP.

first_app should now appear as a repository in the Container Registry.

You can change it from Private to Public if you wish.

So, how do we use this image?

We have to pull and run it using Docker. First, authenticate Docker with the registry:

gcloud auth configure-docker

Copy the image path of first_app from the Container Registry:

and run:

sudo docker run (paste the gcr.io/name of the project/first_app)

so the image is pulled. Now let’s see it in the browser; type:

sudo docker run -p 80:80 gcr.io/<name-of-the-project>/first_app

The app’s results should now be served on port 80, at the VM’s external IP.

You can send the link from the browser to anybody in the world and they should be able to see the output of the model we created earlier.

Quite neat, right?

Follow me for more content like this.

Cheers!
