Zero to Kubernetes on the IBM Bluemix Container Service

Just the basics. Nothing fancy.

Mark Watson
Apr 18, 2017 · 12 min read

A few weeks ago I stumbled across this tweet:

I love containers, and I have been playing with Kubernetes for a while now, so I was excited to see IBM offering Kubernetes as part of the IBM Bluemix Container Service (Note: as of this writing this service is in beta).

After the InterConnect conference I got a chance to experiment with the Kubernetes offering, and I decided to deploy some of my favorite Bluemix apps to Kubernetes.

Before I get into all that, I want to talk a little about containers and Kubernetes.


For the purpose of this article I am going to assume that you have some understanding of containers. Most likely you have used, or are at least familiar with, Docker. If you are not familiar with Docker, or containers in general, I would recommend this post from our friends at freeCodeCamp.

Containers provide developers with a way to package up applications and their dependencies in a lightweight manner. They are an attractive deployment option for some developers as they provide consistency across deployment environments. When you package your application into a container you are guaranteed that the same code you ran in development will run in QA, staging, and production. This works best when you incorporate containers into your entire development cycle.


Kubernetes was originally developed at Google, drawing heavily on lessons from its internal Borg system, and open sourced in 2014. In March 2016, Kubernetes became the first project hosted by the CNCF. From the Kubernetes website:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

In a nutshell, Kubernetes helps you run containers in production. You can deploy your containers to Kubernetes, and Kubernetes will do things like:

  1. Ensure your containers are always running.
  2. Handle network routing between your containers.
  3. Auto-scale your containers across multiple nodes.

Kubernetes also supports Ingress and load balancers, StatefulSets for running databases in containers, Persistent Volumes, and more. Some of these features are provided by the underlying cloud platform rather than Kubernetes itself. For example, load balancing is currently not built into Kubernetes, but rather implemented by the cloud platform that is running Kubernetes.

For this example I’m not going to worry about networking, auto-scaling, or any of these other features. I’m going to simply deploy a single pod (I’ll tell you what a pod is later) and a single secret.

Kubernetes/Containers vs. PaaS

There are many ways to deploy containers to production. Many cloud providers have proprietary container services. Kubernetes, on the other hand, is an open source standard which ensures your Kubernetes deployment will run on any cloud platform that supports Kubernetes. You can even run Kubernetes in a multi-cloud infrastructure.

Kubernetes is a great option for running containers in production, but are containers right for you?

PaaS offerings like Cloud Foundry and Bluemix make it super simple for developers to deploy applications to the cloud. If you are running a popular software stack like Node.js or Python, there are very good PaaS offerings available to you as a developer. PaaS offerings are often more mature and have more built-in features like user management, integrated CI/CD, metrics, and logging.

Sometimes, however, these offerings are not enough. Maybe you want to run a software stack not supported by your PaaS provider, or maybe you want to run a custom version of your software stack. In these cases containers and Kubernetes might be a better option.

See this Stack Overflow post for a good discussion on the differences between Kubernetes and Cloud Foundry.

Let’s run Kubernetes on Bluemix!

If this is your first time working with Kubernetes, this isn’t the right place to start. I would recommend starting with the Kubernetes tutorials. I also recommend installing and running Minikube, which is a local, single-node Kubernetes cluster that runs inside a VM.

IBM Bluemix Container Service logo. I can hardly contain myself!


If this is your first time working with Kubernetes—and you ignored my earlier warning about this not being the best place to start—then you’ll need to download the Kubernetes CLI. After you have the Kubernetes CLI installed, you’ll need to set up the Bluemix CLI.

At this point you should have the kubectl and bx commands available from your command line interface. The next step is to configure the container service plugin for the Bluemix CLI. Log in using the bx command:

bx login -a https://api.ng.bluemix.net

Run the following command:

bx plugin install container-service -r Bluemix

Next, run this command to initialize the container service plugin:

bx cs init

Creating a cluster

You’ll start by creating a free-tier cluster in the Bluemix Container Service:

bx cs cluster-create --name my-cluster

This will create a cluster with a single worker node with 2 vCPU and 4 GB memory. List your clusters by running the following command:

bx cs clusters

Next, you’ll need to get your remote cluster config to use a local kubectl context from your command line. It may take a few minutes for your cluster to be created. Run the following command every few minutes until the cluster is ready:

bx cs cluster-config my-cluster

When your cluster is up and running, the result of this command should look something like this:

Downloading cluster config for my-cluster
The configuration for my-cluster was downloaded successfully. Export environment variables to start using Kubernetes.
export KUBECONFIG=/Users/markwatson/.bluemix/plugins/container-service/clusters/my-cluster/kube-config-prod-dal10-my-cluster.yml

Copy the export command and run it in a new command line window.

Note: You will have to run this command every time you open a new command line interface. Alternatively you can configure a new context in your local kube config, typically found at ~/.kube/config. For example, I created a context called “bx-context,” so if I run the command kubectl config use-context bx-context, kubectl will be configured to access my Bluemix cluster globally. See ~/.kube/config, and your cluster YML file located at KUBECONFIG (see snippet above) for more info.
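As a sketch, the relevant part of a merged ~/.kube/config might look like the following; the cluster and user names here are assumptions, so copy the real entries from the YAML file at KUBECONFIG:

```yaml
# Sketch of a context entry in ~/.kube/config (names are assumptions;
# copy the actual cluster/user entries from the downloaded kube-config YAML).
contexts:
- context:
    cluster: my-cluster
    user: my-cluster-user
  name: bx-context
current-context: bx-context
```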

Kubernetes dashboard

You can access your Bluemix Kubernetes dashboard locally by running:

kubectl proxy

After running this command, open http://localhost:8001/ui in your web browser.

If your dashboard looks similar to the screenshot above, then congrats! You are officially running Kubernetes in the IBM Bluemix Container Service. Next, you’ll get your first pod up and running.

Hello, Kubernetes

At this point you should have a Kubernetes cluster running in the IBM Bluemix Container Service, but it’s not actually running any containers yet. You’ll soon change that, but first a quick discussion on Kubernetes Pods.

A pod, in this context, is a group of one or more containers. In this article, you’re going to set up your pods with a single container. You can deploy a pod (or container) to Kubernetes using kubectl and passing it a YAML file that describes the pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: my-app
spec:
  containers:
  - name: nginx
    image: nginx:latest

This YAML file tells Kubernetes to deploy a pod with the name my-nginx-pod, which uses the nginx:latest container image. By default it will download the latest image from Docker Hub, but you could also use your private Bluemix container registry.
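If you instead push the image to your private Bluemix container registry, only the image reference changes. A sketch (the hostname assumes the US South region, and <my_namespace> is a placeholder for your registry namespace):

```yaml
# Sketch: same pod spec, but pulling from a private Bluemix registry.
# <my_namespace> is a placeholder for your own registry namespace.
spec:
  containers:
  - name: nginx
    image: registry.ng.bluemix.net/<my_namespace>/nginx:latest
```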

Let’s deploy it! Copy the YAML above into a file called my-nginx-pod.yaml and run the following command:

kubectl create -f my-nginx-pod.yaml

To see what pods are running in Kubernetes, click the Pods link in the left nav of the dashboard or run the following command:

kubectl get pods

If the pod has started you should see something like this:

NAME           READY     STATUS    RESTARTS   AGE
my-nginx-pod   1/1       Running   0          7s

An nginx instance is now running inside Kubernetes, but you can’t access it yet. You’ll need to expose it to the outside world. There are a number of ways to expose ports on your pods; the easiest is the expose command:

kubectl expose pods my-nginx-pod --type=NodePort --port=80 --name=my-nginx-pod-svc

This will create a Kubernetes service, which will act as a proxy to port 80 in our nginx container. Kubernetes will allocate a port for the service in the range of 30000–32767. To get the port assigned by Kubernetes, run:

kubectl get services

The output should look similar to the following:

NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes         ...          <none>        443/TCP        4h
my-nginx-pod-svc   ...          <nodes>       80:32398/TCP   9s

Here you can see the service that Kubernetes created and that the service is exposed on port 32398. (Ignore the cluster-ip value. That’s the internal IP address of the service inside the cluster.) You’ll need to get the IP address of the node within the cluster to access the service. So, get a list of nodes:

kubectl get nodes

The output should look something like this:

NAME             STATUS    AGE
<node-ip>        Ready     4h

You can now access your nginx service. In this case, it’s available at http://<node-ip>:32398, substituting the node IP address returned by kubectl get nodes.

You can use services to expose a port on a single pod like above, or load-balance a group of pods. You can also define a service in a YAML file and deploy it to Kubernetes. The YAML file for this service would look something like this:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-pod-svc
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
  type: NodePort

You now have an nginx instance running inside Kubernetes that you can access from outside the cluster, but this just barely scratches the surface of Kubernetes. I’d like to point out a few other Kubernetes concepts that are extremely important, but outside the scope of this article:

1. ReplicaSets

You can and should use ReplicaSets to ensure your pods are always running. I typically use Deployments, which essentially allow you to define your pods and replica sets in one file along with your desired state. For example, the following deployment tells Kubernetes to ensure three nginx pods are always running:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

2. LoadBalancers or Ingress controllers

You would typically configure your service with type LoadBalancer. This will use the underlying cloud provider’s load balancing capabilities, allow you to configure the port (or use standard http/https ports), and in most cases allow you to configure SSL. Alternatively, you can create an Ingress to accomplish the same task. I used a NodePort because the free tier of Kubernetes on Bluemix does not support load balancer or ingress options.
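On a standard (paid) cluster, the NodePort service from earlier could be switched to a cloud load balancer with a one-line change. A sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-pod-svc
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer   # was NodePort; the cloud provider provisions the LB
```

Kubernetes then asks the underlying cloud platform to provision an external load balancer and publishes its address in the service’s EXTERNAL-IP column.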

Deploying the Recipe Chatbot

Now that you’ve configured your Kubernetes cluster on Bluemix and I’ve introduced you to some simple Kubernetes concepts, you’re ready to deploy a real application. For this exercise you’ll deploy my Recipe Chatbot.

The Recipe Chatbot is a simple Slackbot for finding recipes based on ingredients or cuisines. It uses Watson Conversation to manage the chat, Spoonacular for looking up recipes and instructions, and IBM Watson Data Platform services like Cloudant or IBM Graph to store user behavior. You can read more about it:

The Recipe Chatbot uses hosted database services and hosted APIs. This is typical of many applications. Even though you are running your applications inside containers, you will still need to access external services. In this case, each of these external services has its own set of credentials. I’ll show you below how to configure them in Kubernetes.

There is quite a bit of setup required to get the Recipe Chatbot up and running, so I recommend you follow the instructions in the GitHub repo for the Node.js/Cloudant version.

Build the container

Before you can create a pod, you’ll need a container.

Tip: you can skip these steps and go straight to the Deploy to IBM Bluemix Container Service section if you don’t want to create the container yourself. I have made the container publicly available from my Docker Hub account.

The Recipe Chatbot includes a Dockerfile called bot.Dockerfile in the docker folder. Here are the contents of the Dockerfile:

FROM node:latest
MAINTAINER Mark Watson <>
RUN mkdir -p /usr/src/bot
COPY package.json /usr/src/bot/package.json
COPY index.js /usr/src/bot/index.js
COPY CloudantRecipeStore.js /usr/src/bot/CloudantRecipeStore.js
COPY RecipeClient.js /usr/src/bot/RecipeClient.js
COPY SousChef.js /usr/src/bot/SousChef.js
WORKDIR /usr/src/bot
RUN npm install
CMD ["node","index.js"]

The Dockerfile uses the latest Node.js image (node:latest), copies only the files needed to run the bot and nothing more, and runs npm install when the container is built. When the container starts, it runs node index.js.

Note: Do not include your .env file in your container. It will contain your Cloudant credentials, Watson Conversation credentials, etc. If you do include your .env file in your container, do not share it publicly.
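As an extra safeguard, a .dockerignore file in the build context keeps the .env file out of any COPY commands and out of the image entirely. A minimal sketch (the extra entries are just common local clutter you would also want excluded):

```
.env
node_modules
npm-debug.log
```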

To build the container, cd into the docker folder:

cd /dev/github/ibm-cds-labs/watson-recipe-bot-nodejs-cloudant/docker

Run the following docker command to build the container (replace markwatsonatx with your own Docker username):

docker build -t markwatsonatx/watson-recipe-bot-nodejs-cloudant:latest -f ./bot.Dockerfile ../

Upload to Docker Hub:

docker push markwatsonatx/watson-recipe-bot-nodejs-cloudant:latest

Deploy to IBM Bluemix Container Service

Now that you’ve created your Docker container and uploaded it to Docker Hub, you’re ready to deploy it to Kubernetes. The Recipe Chatbot requires a number of environment variables to run properly (the same values found in the .env file you created when you set up the bot).

You could include these variables in a .env file in your container, but as I mentioned, I highly discourage this approach. Instead, you can use Kubernetes Secrets.

You deploy secrets to Kubernetes just like you deploy pods. Start by creating a YAML file for your secret:

apiVersion: v1
kind: Secret
metadata:
  name: bot-secrets
type: Opaque
data:
  slackBotToken: XXX
  slackBotId: VTXXX
  spoonacularKey: dnXXX
  conversationUsername: ZTXXX
  conversationPassword: N2XXX
  conversationWorkspaceId: YmXXX
  cloudantUrl: aHXXX
  cloudantDbName: d2XXX

Obviously I did not share the actual values here, but there is one important note: these values must be base-64 encoded. For example, if your slackBotToken is 1234 then the value in your secret file should be MTIzNA==.

Save the YAML above to a file called bot-secrets.yaml. Base-64 encode each value in your .env file (created when you set up your Recipe Chatbot) and copy it into your secrets.
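If you’re on macOS or Linux, the base64 command can do the encoding for you. The -n flag on echo matters: without it, a trailing newline gets encoded into the value. A quick sketch using the 1234 example above:

```shell
# Encode a secret value; -n stops echo from appending a newline,
# which would otherwise change the encoded result.
echo -n '1234' | base64            # MTIzNA==
echo '1234' | base64               # MTIzNAo=  (newline included: wrong)

# Decode to double-check what you put in the secret:
echo -n 'MTIzNA==' | base64 --decode   # 1234
```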

Time to deploy your secret to Kubernetes. Run the following command:

kubectl create -f bot-secrets.yaml

Now you’re ready to take the Recipe Bot container and deploy it as a pod. Create a file called bot.yaml with the following contents (replace markwatsonatx with your Docker username if you deployed it yourself):

apiVersion: v1
kind: Pod
metadata:
  name: bot-pod
spec:
  containers:
  - name: watson-recipe-bot
    image: markwatsonatx/watson-recipe-bot-nodejs-cloudant:latest
    env:
    - name: SLACK_BOT_TOKEN
      valueFrom:
        secretKeyRef:
          name: bot-secrets
          key: slackBotToken
    - name: SLACK_BOT_ID
      valueFrom:
        secretKeyRef:
          name: bot-secrets
          key: slackBotId
    # ...repeat the same secretKeyRef pattern for the remaining keys:
    # spoonacularKey, conversationUsername, conversationPassword,
    # conversationWorkspaceId, cloudantUrl, cloudantDbName

You’ll notice above that the configurations map the variables in your bot-secrets to environment variables in your pod. The Node.js application doesn’t care how the environment variables are set, as long as they are there when it starts.

Time to deploy the pod! Run the following command and cross your fingers:

kubectl create -f bot.yaml

If all goes well, your Recipe Chatbot should be up and running in a few seconds, and you should be able to chat with your Slackbot. As you can see below I can access my favorite recipes from Cloudant:

If anything goes wrong with your chatbot, you can simply kubectl delete pod bot-pod and then recreate it using kubectl create -f bot.yaml.

What just happened?

If you followed the exercises in this post, then you did at least some of the following:

  • Created a Kubernetes cluster in the IBM Bluemix Container Service.
  • Deployed an nginx container as a Kubernetes pod and made it publicly accessible.
  • Built a container and pushed it to Docker Hub.
  • Deployed the Recipe Chatbot to Kubernetes via a single secret and a single pod.

If you are currently deploying containers to production; interested in deploying containers to production; or curious about moving your application, solution, or service to containers, then Kubernetes could be a great fit.

If this post helped you get something up and running in Kubernetes on Bluemix — or if you leave here with just a little bit better understanding of Kubernetes, why it’s important, or where it might fit in your deployment strategy — then it was all worth it. Congratulations!

If you enjoyed this article, please ♡ it to recommend it to other Medium readers.


Things we made with data at IBM’s Center for Open Source Data and AI Technologies.

Thanks to Mike Broberg

Written by Mark Watson
Tech Lead, HotSchedules

