Kubernetes: Day One

Jonathan Campos
Google Cloud - Community
9 min read · May 18, 2018
You Can Do This In A Lot Less Than 72 Hours

This is the obligatory step one Kubernetes post. If you’re interested in Kubernetes you’ve probably read 100 of these articles already and you may or may not have set up your own cluster by this point. The purpose of this article is to lay the very basic ground work for a lot more advanced posts coming up. As many getting started articles as there are, there are a precious few that really dig into the true power behind Kubernetes.

Without further ado, let’s get to the basics.

High Level: What is Kubernetes?

Kubernetes is a container orchestration and management tool.

Generic Kubernetes Image

If you’ve been reading a lot of articles out on the web, this is most likely the answer you’ve seen, or something like it. The official Kubernetes documentation gives a more complete answer as to the nature of Kubernetes if you are interested.

Reading other articles, you would have seen how you can use a handful of command-line commands or YAML files to tell Kubernetes how to create Pods that hold your containers. Then you would use Services to expose these Pods to the outside world, with a Load Balancer routing traffic to Pods that are created and destroyed at the whim of the all-powerful control plane.

I am simplifying this a lot to make this more digestible so we can get right to it.
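To make that workflow concrete, here is a minimal sketch of the day-to-day commands it implies. This assumes you already have a cluster and kubectl configured; the file name deployment.yaml and the deployment name my-app are placeholders for illustration, not files or names from this article’s repo.

# describe your Pods/Deployments in YAML and hand them to Kubernetes
$ kubectl apply -f deployment.yaml
# see the Pods the cluster created (and keeps recreating) for you
$ kubectl get pods
# expose those Pods to the outside world through a Service with a Load Balancer
$ kubectl expose deployment my-app --type=LoadBalancer --port=80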

Get Started With Google Cloud Platform

Use Google Cloud Platform And A Trial Account

For this demo code I recommend using Google Cloud Platform and their managed Kubernetes product: Google Kubernetes Engine. You can create a trial account (which does require that you enter a credit card, though it won’t be charged unless you upgrade to a paid account) with $300 worth of credit to play with. WAY more than enough for this.

What Am I Providing For You?

I’m Providing You Some Docker Containers With NodeJS Code

Assuming you’ve read a bunch of getting started articles about Kubernetes, you already know that Kubernetes handles how the containers are created and managed but doesn’t care what the containers actually do (again, simplifying). So for this example I’m just going to provide you with some containers that already do what we need them to. You can see all of the code that I provide at my Github Repo.

Where To Begin

We are going to start in the Cloud Shell console within a Project. This is an easy place to run commands against your Google Cloud Platform project. For ease of use I am going to assume that you are starting with a 100% clean project that has nothing special set up.

Get To Coding

With your brand new clean Google Cloud Platform project ready, it is time for us to actually get some real work done. We need to create a Kubernetes cluster, do any necessary provisioning, and build our Docker container to be launched within our cluster. Easy.

First we need to start up a Cloud Shell session by hitting the Cloud Shell icon in the top right-hand corner of the Cloud console.

Top Right — Click It

Now you should have the Cloud shell window opened right in your browser. Feel free to take a moment and marvel how easy things have been. With that one click you launched a Docker instance and did all the necessary provisioning to interact with your Cloud project.

Yeah! Cloud Shell
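If you want to confirm that the provisioning really happened, here is a quick optional sanity check (nothing here is required, just a sketch):

$ gcloud config get-value project   # should print your project id
$ gcloud auth list                  # shows the account Cloud Shell authenticated for you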

Now you need to get my code, create that cluster, and build that container. Ready to have your mind blown? Go to your Cloud shell and type in the following.

$ git clone https://github.com/jonbcampos/kubernetes-series.git
$ cd kubernetes-series/partone/scripts
$ sh startup.sh

We are going to let that run for a bit. What we’ve done with these three commands is clone the project code I’ve created for you, move into the scripts folder, and run the startup script.

I highly recommend you spend some time looking at the code for startup.sh. In this file I run a number of gcloud commands that ultimately set up everything for you. You could do all of this yourself either by hand or through the UI, but again, we are trying to get to more advanced topics quickly. Seriously though, read the startup.sh file; I even left comments in the code.
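If you just want the gist without opening the file, the script boils down to something like the following. This is a paraphrase, not the exact contents; the variable names and the build path are mine, and the real script in the repo is the source of truth.

$ PROJECT_ID=$(gcloud config get-value project)
$ CLUSTER_NAME=my-cluster          # placeholder name
$ INSTANCE_ZONE=us-central1-a      # placeholder zone
# enable the APIs the project needs
$ gcloud services enable container.googleapis.com cloudbuild.googleapis.com
# create the (preemptible) Kubernetes cluster -- more on that flag at the end
$ gcloud container clusters create ${CLUSTER_NAME} --preemptible --zone ${INSTANCE_ZONE} --scopes cloud-platform --num-nodes 3
# point kubectl at the new cluster
$ gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${INSTANCE_ZONE}
# build the container image and push it to the project's registry
# (run from the directory containing the Dockerfile)
$ gcloud builds submit --tag gcr.io/${PROJECT_ID}/partone-container .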

Where Are We Now?

We have the container built, our Cloud environment set up, and even our cluster created. Now we just need to release a Pod, a ReplicaSet, and a Service, and our “Hello World for Kubernetes” will be complete. FYI, this is finally the point where we are really getting into “Kubernetes scripting”. Until now you’ve just been doing setup.
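If you’d like to confirm that claim before moving on, here are a couple of optional checks (a sketch; the exact image name depends on what startup.sh created):

$ kubectl get nodes                 # the three cluster nodes should be Ready
$ gcloud container images list      # the freshly built partone-container image should be listed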

ReplicaSet + Pods = Deployments

A ReplicaSet, an improved version of the Replication Controller from an earlier iteration of Kubernetes, works with the Kubernetes master to manage Pods. If a Pod is removed, terminated, crashes, etc., the ReplicaSet creates a new Pod in its place so the desired number of replicas keeps running, ensuring a high level of consistency for your application.

A Pod is a container for containers. Most Pods have a 1-to-1 relationship between the Pod and a container, but this isn’t a requirement. If a container needs to share resources with other containers, you can include multiple containers in a single Pod.

A Deployment brings together ReplicaSets and Pods into a single process to (get this!) deploy your application. Seriously, they have some straightforward names; I like it.
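Once we deploy it later in this article, you can see that Deployment → ReplicaSet → Pods relationship for yourself; a quick optional look, using the names from the YAML below:

$ kubectl get deployments              # the Deployment named "endpoints"
$ kubectl get replicasets              # the ReplicaSet the Deployment created
$ kubectl get pods -l tier=endpoints   # the 3 Pods the ReplicaSet keeps alive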

The following YAML file (included in Github) defines our Deployment. We will release this with a Service shortly.

apiVersion: apps/v1beta1
kind: Deployment # it is a deployment
metadata:
  name: endpoints # name of the Deployment
  labels:
    # any Pods with matching labels are included in this Deployment
    app: kubernetes-series
    tier: endpoints
spec:
  # ReplicaSet specification
  replicas: 3 # we are making 3 Pods
  selector:
    matchLabels:
      # ReplicaSet labels will match Pods with the following labels
      tier: endpoints
  template:
    # Pod template
    metadata:
      labels:
        # Pod's labels
        app: kubernetes-series
        tier: endpoints
    spec:
      # Pod specification
      containers:
      # the container(s) in this Pod
      - name: partone-container
        image: gcr.io/PROJECT_NAME/partone-container:latest
        # environment variables for the Pod
        env:
        - name: GCLOUD_PROJECT
          value: PROJECT_NAME
        # we are going to use this later
        # for now it creates a custom endpoint
        # for this pod
        - name: POD_ENDPOINT
          value: endpoint
        - name: NODE_ENV
          value: production
        ports:
        - containerPort: 80
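deploy.sh (coming up below) handles the release for you, but if you were doing it by hand it would look roughly like this. The PROJECT_NAME substitution and the file path are my guesses at what the script does; check deploy.sh in the repo for the real mechanics.

# swap your real project id into the PROJECT_NAME placeholder, then apply
# (path to deployment.yaml depends on where it lives in the repo)
$ sed "s/PROJECT_NAME/$(gcloud config get-value project)/g" deployment.yaml | kubectl apply -f -
# watch the rollout until all 3 replicas are available
$ kubectl rollout status deployment/endpoints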

Service

With a Deployment we have our Pods replicating properly within the cluster but we have yet to create a way for the world to reach our cluster/Pods. With a Service, we create and open the networking interfaces necessary to interact with our Pods.

The following YAML file (included in Github) defines our Service. We will release this with our Deployment next.

apiVersion: v1
kind: Service # a way for the outside world to reach the Pods
metadata:
  name: endpoints # name of the Service
spec:
  # Service ports
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  # provision an external Load Balancer that routes traffic to the Pods
  type: LoadBalancer
  selector:
    # any Pods with matching labels are included in this Service
    app: kubernetes-series
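After the deploy you can see exactly which Pods the Service picked up through that selector. This is optional, just a sketch using the names defined above:

# the Pods matched by the Service's app=kubernetes-series selector
$ kubectl get pods -l app=kubernetes-series -o wide
# the Pod IP:port pairs the Service is actually balancing across
# (the resource type is "endpoints" and our Service is also named "endpoints")
$ kubectl get endpoints endpoints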

Launch

With our service.yaml ready and our deployment.yaml put together, it is high time we deploy everything so we can see what is going on. I’ve set up the following script for you to run that will deploy everything. Then we just need to wait for the external IP address of our Load Balancer to become available. Getting the external IP address can take a minute or two; just run the scripts and they will do the waiting and watching for you.

$ cd kubernetes-series/partone/scripts # if necessary
$ sh deploy.sh
$ sh check-endpoint.sh

Now all we have to do is wait. Wait for the external IP Address to be available… oh it’s done!

Waiting for that sweet sweet external IP Address

With the resulting IP Address you can hit your amazing Hello World Kubernetes application.
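check-endpoint.sh prints the address for you, but you can also grab it and hit the app yourself. A sketch; the exact routes the Node code serves are defined in the repo, so the root path is just a reasonable first check:

# pull the external IP off the Service and make a request to it
$ EXTERNAL_IP=$(kubectl get service endpoints -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://${EXTERNAL_IP}/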

I want to remind you that this is only the beginning though. There is so much more that we can and will do based on this. Now that we have a baseline to work off of we can really deal with more code and get into the fun stuff.

Teardown

Before you leave, make sure to clean up your project so you aren’t charged for the VMs running your cluster. Return to the Cloud Shell and run the teardown script to clean up your project. This will delete your cluster and the containers that we’ve built.

$ cd kubernetes-series/partone/scripts # if necessary
$ sh teardown.sh
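As with startup.sh, it is worth peeking inside; the teardown boils down to something like this (a paraphrase, not the exact script, and the variable names are mine):

# remove the Service (and its Load Balancer) and the Deployment
$ kubectl delete service endpoints
$ kubectl delete deployment endpoints
# delete the cluster itself so its VMs stop billing
$ gcloud container clusters delete ${CLUSTER_NAME} --zone ${INSTANCE_ZONE} --quiet
# remove the container image we built
$ gcloud container images delete gcr.io/${PROJECT_ID}/partone-container:latest --quiet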

Bonus!

Okay, okay. I did include one little premium feature in our Kubernetes cluster. Did you catch it in the startup script?

gcloud container clusters create ${CLUSTER_NAME} --preemptible --zone ${INSTANCE_ZONE} --scopes cloud-platform --num-nodes 3

To help save us all some money I used preemptible VMs rather than “normal” VMs. What are preemptible VMs? They are VMs that live at most 24 hours and that Google can shut down whenever it needs the compute capacity back. In trade, Google cuts you around an 80% discount. Because we are building applications that expect VMs to be spun up and down rapidly anyway, this shouldn’t scare us at all.
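You can see the preemptible nodes for yourself once the cluster is up, since GKE labels them (a quick optional check):

# GKE marks preemptible nodes with this label
$ kubectl get nodes -l cloud.google.com/gke-preemptible=true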

And because you are using Kubernetes, when a node is preempted its Pods are simply rescheduled onto the replacement VM that GKE spins back up. Win!

Conclusion To Part One

There is more to come in this series. I’ve been nothing but impressed by Kubernetes, and I feel like every time I look deeper, Kubernetes rewards me with a new feature to make my life easier. I intend to share those findings and the little things I found interesting along the way. I can’t promise that all the posts will be long (most will likely be really short), but all should be helpful.

What features are you interested in with Kubernetes?
What features do you find aren’t talked about enough?

Other Posts In This Series

Jonathan Campos is an avid developer and fan of learning new things. I believe that we should always keep learning, growing, and failing. I am always a supporter of the development community and always willing to help. So if you have questions or comments on this story, please add them below. Connect with me on LinkedIn or Twitter and mention this story.
