Time to “Hello, World”: running Node.js on VMs, Containers, Apps, and Functions — Part 3

Building with Docker containers

Dmytro (Dima) Melnyk
Published in Node.js Collection
6 min read · May 25, 2018


This article is a part of the series: Part 1 (overview), Part 2 (VMs), Part 3 (containers), Part 4 (apps) and Part 5 (functions & summary).

“Hello, World” on Kubernetes Engine

Alright, containers… Bring ’em on! (For context, my knowledge and understanding of Kubernetes was limited to Children’s Illustrated Guide to Kubernetes when I first got here.)

Kubernetes Engine provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. So let’s turn our simple “Hello World” into a replicated application running on Google’s hosted version of Kubernetes.

Plan of attack:

  1. Code a Node.js app (reuse the existing one)
  2. Create a Docker container image
  3. Set up GKE: clusters and pods
  4. Expose the app to external traffic
  5. Assess scaling on GKE

1) Put the application into a Docker container image

While standing up the VM version of “Hello World”, I learned about Cloud Shell — a command line inside my browser. (Behind the scenes, it’s a VM that runs Linux and is equipped with goodies like Google Cloud SDK and Node.js.) Very cool. Let’s use it here to save time.

Google Cloud Shell
$ npm install express
$ vim hello.js
$ node hello.js
Example app listening on port 8080!

Preview your app using Cloud Shell’s built-in feature:

Technically, this “hello” is from Cloud Shell, not from GKE, yet.

Let’s go ahead and put it into a Docker container. The following “recipe” starts from the node image on Docker Hub, exposes port 8080, copies our hello.js file into the image, and starts the Node server just as we previously did manually:

$ vim Dockerfile
FROM node:8.9.4
RUN npm install express
EXPOSE 8080
COPY hello.js .
CMD node hello.js

Build the Docker image:

$ docker build -t melnykdima/helloworld-k8s .

Thanks to the -t flag, my image is tagged, which makes it easy to look up:

$ docker images

Run the image:

$ docker run -d -p 8080:8080 melnykdima/helloworld-k8s

Check the port mapping and test the app:

$ docker ps

Now, this “hello” is from the Dockerized application:

$ curl -i localhost:8080
...
Hello World from GKE!

2) Push to Google Container Registry

$ gcloud docker -- push melnykdima/helloworld-k8s
The push refers to a repository [docker.io/melnykdima/helloworld-k8s]
denied: requested access to the resource is denied

Google the error… according to this Stack Overflow thread, I messed up the tagging. Ouch, I should have read the manual. Re-tagging and re-building to push to gcr.io:

$ docker build -t gcr.io/mtthw/helloworld-k8s .
$ docker images
$ gcloud docker -- push gcr.io/mtthw/helloworld-k8s
denied: Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=mtthw before performing this operation.

Hmm… Going to the link doesn’t help:

Let’s Google the error… aha! Project name “mtthw” vs. project ID “helloworld-mtthw”:

Re-deploying with the correct project ID:

$ gcloud docker -- push gcr.io/helloworld-mtthw/helloworld-k8s

And let’s double check under Tools > Container Registry:

OK, my container made it to the registry safely. Now it’s time to figure out how to run it on GKE.

3) Figure out container clusters & pods

Let’s take a quick look at some Kubernetes core concepts:

  • A container cluster is the foundation of Kubernetes Engine. Containerized applications all run on top of container clusters that consist of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.
  • Kubernetes is primarily targeted at applications composed of multiple containers. It therefore groups containers using pods and labels into formations for easy management and discovery. A pod is a group of one or more containers with shared storage/network and a specification for how to run them.
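To ground these concepts, here’s roughly what a hand-written manifest for a single-container pod running our image would look like (a sketch; the walkthrough below uses kubectl run instead, which creates a deployment that manages pods for you):

```yaml
# Sketch of a bare pod manifest for the "Hello World" container.
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-k8s
  labels:
    app: helloworld-k8s
spec:
  containers:
  - name: helloworld-k8s
    image: gcr.io/helloworld-mtthw/helloworld-k8s
    ports:
    - containerPort: 8080
```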

Let’s go ahead and create a cluster:

  • Name: helloworld-k8s-cluster-1
  • Machine type: small
  • All other settings: default

Now I’ve got my own Kubernetes cluster powered by GKE… Great!

Time to deploy my containerized “Hello World” application to the cluster:

$ kubectl run helloworld-k8s-cluster-1 --image=gcr.io/helloworld-mtthw/helloworld-k8s --port=8080
error: failed to discover supported resources: Get http://localhost:8080/api: dial tcp getsockopt: connection refused

Let’s Google the error… aha! I forgot to connect to my cluster (click Connect button in the screenshot above):

$ gcloud container clusters get-credentials helloworld-k8s-cluster-1 --zone us-central1-a --project helloworld-mtthw

Here’s the second attempt to deploy:

$ kubectl run helloworld-k8s-cluster-1 --image=gcr.io/helloworld-mtthw/helloworld-k8s --port=8080
deployment "helloworld-k8s-cluster-1" created

Woohoo! My container is up and running under the control of GKE:

4) Expose the app to external traffic

Now let’s expose the app to the outside world:


$ kubectl expose deployment helloworld-k8s-cluster-1 --type="LoadBalancer"
service "helloworld-k8s-cluster-1" exposed

The --type flag specifies that we’ll be using the Compute Engine load balancer. In addition to balancing traffic across all pods, GKE creates the appropriate forwarding and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
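For reference, the expose command is shorthand for creating a Service object. An equivalent manifest would look roughly like this (a sketch; in Kubernetes of this vintage, kubectl run labels its pods run=&lt;name&gt;, which the selector here relies on):

```yaml
# Sketch of the Service that "kubectl expose" generates.
apiVersion: v1
kind: Service
metadata:
  name: helloworld-k8s-cluster-1
spec:
  type: LoadBalancer
  selector:
    run: helloworld-k8s-cluster-1
  ports:
  - port: 8080
    targetPort: 8080
```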

Let’s double check the external IP:

$ kubectl get services

And… test!

$ curl <EXTERNAL-IP>:8080
Hello World from GKE!

5) What’s involved in scaling container clusters

Since scaling was beyond the scope of my “Hello World” exercise, I limited it to a quick review of the top three to five articles that came up in Google Search: “google kubernetes engine gke scaling container clusters”… Looks like kubectl scale is the way to go. Let’s test drive it:

$ kubectl scale deployment helloworld-k8s-cluster-1 --replicas=4
deployment "helloworld-k8s-cluster-1" scaled

Kubernetes Engine also comes with the cluster autoscaler feature that automatically resizes clusters based on the demands of the workloads you want to run. (With autoscaling enabled, GKE automatically adds a new node to your cluster if you’ve created pods that don’t fit on the existing nodes; conversely, if a node in your cluster is underutilized and its pods can run on other nodes, the node gets deleted.)
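For a cluster like the one above, autoscaling can be switched on for the default node pool along these lines (a sketch; the cluster name and zone come from this walkthrough, and the node bounds are arbitrary):

```shell
# Enable the cluster autoscaler on the default node pool (bounds are examples).
gcloud container clusters update helloworld-k8s-cluster-1 \
  --zone us-central1-a \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```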

6) Time check & getting started resources


Even at the “Kubernetes for Dummies” level, containerized application deployment, scaling, and management come with a steep learning curve. Since I didn’t want to blindly follow one of the GKE codelabs, a lot of my time went into grasping the fundamentals, reading documentation, and various blogs. I obviously only scratched the surface, but this simple exercise gave me a taste of deploying containerized applications on GKE and also helped me appreciate the power — and beauty — of Kubernetes.

Useful getting started resources:

Continue on to:

  • Part 4: building on top of PaaS (App Engine)
  • Part 5: building with FaaS (Cloud Functions) & summary