Next.js tutorial — Deploy to Docker on Google Cloud Container Engine

This is a brief walkthrough on how to deploy an app built with Next.js (the framework from ZEIT) to Docker running on Google Cloud Platform Container Engine. I’m performing these steps from a MacBook using the terminal.

Note — this is just for development; for production there are many more best practices, and these instructions would be slightly different.

A good place to start for more info on deploying a Node.js app to GCP Container Engine is this tutorial.

  1. Create an account on GCP and enable billing. Then install the CLI tools on your laptop (instructions here). You will need the gcloud CLI and the kubectl package. You also need Docker installed.
  2. Authenticate the gcloud CLI, replacing PROJECT_ID with your own project ID.
gcloud auth login
gcloud config set project PROJECT_ID
gcloud config set compute/zone us-central1-b
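The commands that follow reference ${PROJECT_ID} as a shell variable, and images pushed to Google Container Registry are named with the gcr.io/PROJECT_ID/name:tag convention. A quick sketch (my-gcp-project is a hypothetical project ID — substitute your own):

```shell
# Set PROJECT_ID once so the later docker/gcloud/kubectl commands can reuse it.
PROJECT_ID=my-gcp-project

# Container Registry image names follow gcr.io/PROJECT_ID/name:tag
IMAGE="gcr.io/${PROJECT_ID}/nextapollo:v1"
echo "$IMAGE"
```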

3. Pick an example from the next.js/examples directory; I’m using with-apollo here. Run the commands below, then open http://localhost:3000 in a browser to make sure everything works.

curl https://codeload.github.com/zeit/next.js/tar.gz/master | tar -xz --strip=2 next.js-master/examples/with-apollo
cd with-apollo
npm install
npm run dev

4. Create 3 new files in the root project directory.

.gitignore contents (a minimal version for this example) —

node_modules
.next
npm-debug.log

.dockerignore contents —

node_modules
.next
.git

Dockerfile contents —

FROM node:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
RUN npm run build
CMD [ "npm", "start" ]

5. To make sure the app works, you can build and run the Docker container locally on your laptop. We will refer to this Docker image as nextapollo. The :v1 at the end of the command below sets the version tag; increment it as you make changes to the app.

docker build -t gcr.io/${PROJECT_ID}/nextapollo:v1 .

6. After building the app, start it locally using docker, then visit http://localhost:3000 in your browser to test.

docker run --rm -p 3000:3000 gcr.io/${PROJECT_ID}/nextapollo:v1
Hint — Check out Captain to easily start/stop local Docker apps from the macOS toolbar.

7. Now you are ready to set up Google Cloud. First, push the Docker image we built to your private Google Container Registry. Every version you push is kept in the registry and is available to deploy.

gcloud docker -- push gcr.io/${PROJECT_ID}/nextapollo:v1

8. You will need to set up a container cluster, which runs Google-managed Kubernetes. You can use the GCP Console, but the CLI is easy.

## This will create a new cluster named 'next-cluster', with 3 Compute instances.
gcloud container clusters create next-cluster --num-nodes=3

## Wait and verify they were created
gcloud container clusters list

9. After running the list command above, you will notice an IP address, but that’s not the one you will use for the app. Once the cluster is ready, we will create a pod, which is a single running instance of the Next.js app.

## This will create a deployment with 1 pod, serving on port 3000 inside Docker.
kubectl run nextapollo --image=gcr.io/${PROJECT_ID}/nextapollo:v1 --port 3000

## Wait a moment, then verify
kubectl get pods
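If the pod doesn’t reach the Running state, you can inspect it before moving on. A couple of standard kubectl commands (nextapollo-xxxxx is a stand-in for the generated pod name shown by kubectl get pods):

```shell
# Show status detail and events for the pod
# (scheduling errors, image pull failures, crash loops, etc.)
kubectl describe pod nextapollo-xxxxx

# Stream the container's stdout/stderr (the Next.js server output)
kubectl logs -f nextapollo-xxxxx
```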

10. You will need to expose the app to the outside world using a Google Cloud load balancer. It takes a moment to allocate the public IP address.

kubectl expose deployment nextapollo --type=LoadBalancer --port 3000
## Wait a moment, then verify
kubectl get services 
## refresh this until you see the EXTERNAL-IP address
Note — The above creates a simple TCP load balancer and exposes the 3000 dev port so you can quickly access the app. You’ll typically want to configure an Ingress HTTPS load balancer (see here), and probably use nginx as well.
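As a rough sketch of the Ingress approach mentioned in the note (the apiVersion matches Kubernetes clusters of this era, and the service name assumes the nextapollo service exists; on GCP the Ingress backend typically needs to be exposed as type NodePort rather than LoadBalancer):

```shell
# Sketch: route HTTP traffic to the nextapollo service through a GCE Ingress.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nextapollo-ingress
spec:
  backend:
    serviceName: nextapollo
    servicePort: 3000
EOF

## Wait a few minutes, then look for the Ingress IP
kubectl get ingress nextapollo-ingress
```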

11. The app will now be running as a single pod on a 3-instance cluster. Visit http://EXTERNAL_IP:3000/ in your browser to verify it loads OK.

12. You can scale the app easily; let’s scale to 6 pods (6 replicas), which will be load balanced automatically.

kubectl scale deployment nextapollo --replicas=6
## Verify it worked
kubectl get pods 
## You should see 6 copies running. 
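Instead of scaling by hand, you can also let Kubernetes scale on CPU load with a Horizontal Pod Autoscaler. A sketch (the thresholds here are arbitrary, and CPU-based autoscaling works best when the deployment has resource requests set):

```shell
# Keep between 3 and 10 replicas, targeting ~80% CPU utilization
kubectl autoscale deployment nextapollo --min=3 --max=10 --cpu-percent=80

## Inspect the autoscaler status
kubectl get hpa
```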

That is the end of the basic configuration. You have Next.js with Apollo GraphQL running on Google Cloud Platform. It’s being automatically load balanced across 6 replicas of Node.js, arranged over 3 different Google Compute Instances. It will automatically handle rolling container updates as you push new versions of the app. Here are a few tips to update the app and manage the deployment.

  1. Make any change to the with-apollo source code locally. Save your changes, then run these commands:
## Notice the v2 increment at the end
docker build -t gcr.io/${PROJECT_ID}/nextapollo:v2 .

## Optional, if you want to test it locally before deploying
docker run --rm -p 3000:3000 gcr.io/${PROJECT_ID}/nextapollo:v2

## Push the new version to your private registry at GCP
gcloud docker -- push gcr.io/${PROJECT_ID}/nextapollo:v2

## Tell Kubernetes to use the new version; this starts a rolling update of your containers (one line)
kubectl set image deployment/nextapollo nextapollo=gcr.io/${PROJECT_ID}/nextapollo:v2

## Check your browser to see the updates

## Optional: revert back to v1 if there's any issue, by running set image again with :v1
kubectl set image deployment/nextapollo nextapollo=gcr.io/${PROJECT_ID}/nextapollo:v1
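kubectl also tracks deployment history, so you can watch a rolling update progress and roll back without remembering the previous tag. A sketch using the standard rollout subcommands:

```shell
# Watch the rolling update until all pods are on the new version
kubectl rollout status deployment/nextapollo

# List previous revisions of the deployment
kubectl rollout history deployment/nextapollo

# Roll back to the previous revision
kubectl rollout undo deployment/nextapollo
```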

2. To explore the configuration in more detail, there are a few UI/Dashboard tools available.

The Kubernetes UI

## In a new terminal, start the proxy
kubectl proxy
## This launches a proxy server. Visit http://localhost:8001/ui to see the Kubernetes Management Console. Note the Nodes, Deployments, Pods, and Replica Sets menus. You usually need to refresh the pages to see new chart data.

The Google Cloud Platform Console — Container Engine & Compute Instance UI & the StackDriver Monitoring UI if you’ve enabled that feature on the Container Cluster.

3. If you want to battle test the app, quickly generate random traffic to see the load balancing in action, or light up the CPU/memory charts in the UI/Console, you can use a free tool called Artillery.

## install artillery globally on your system
npm install -g artillery
## Example command (one line): run for 200 seconds, creating 5 virtual users every second, each sending 20 GET requests.
artillery quick --duration 200 --rate 5 -n 20 http://your_ip:3000

4. After running the above test, go to the Kubernetes UI (kubectl proxy) and open the Workloads / Replica Sets menu; find the nextapollo-…. entry with 6/6 pods. That displays all of the CPU/memory usage in one place; refresh during the test. It takes a minute or so for data to show up in the proxy. StackDriver Logging is more real-time, but I sometimes prefer the direct UI.
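If your cluster has Heapster monitoring enabled (the Container Engine default at the time), you can also pull the same CPU/memory numbers from the command line instead of the UI. A sketch:

```shell
# Per-node resource usage across the cluster
kubectl top nodes

# Per-pod resource usage; run during the artillery test to watch load spread
kubectl top pods
```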

There is a lot more to consider to actually deploy into production: continuous integration, SSL certs, container/pod resource settings, proper load balancer & proxy settings. But this will hopefully get you started. The image runs Alpine Linux, a smaller-footprint OS that’s well suited to containers.

Why not use now for deployments?

You can use ▲ZEIT now to get up and running much more quickly. I’ve noticed I get better performance and have more control when running in Google Cloud, but it’s not really a fair comparison, since the GCP setup has dedicated, larger resources and costs more.

Here are two sample Artillery benchmark reports running on now vs. GCP Container Engine for the with-apollo example. They look similar, but notice that the max/median/p95 latencies were much lower for GCP. The max user counts were also lower for GCP, since it could process the requests faster. The main difference was initial latency, where GCP was consistently hitting < 30ms while now was ~60–70ms. This is not a scientific benchmark, just a quick comparison. The test ran from Denver, so the latency baseline was in GCP’s favor, since its instances were in Iowa; I’m assuming the now instances were in AWS Virginia or on another coast. I also had gzip disabled on the GCP tests. Still, the GCP latency was more consistent.

GCP Config: 2x n1-standard-1 instances, Iowa, Container Engine, 6x pods over https. ~$50/month

Now Config: Premium plan, scaled to 7 instances over https. ~$15/month