DraftKings Kubernetes Workshop: Hands-on Learning in K8s (with Video Walkthrough)

Dave Musicant
DraftKings Engineering
14 min read · Mar 30, 2021

This is part one of a three-part series getting hands-on with Kubernetes and Helm.

Why Kubernetes?

For DraftKings, Kubernetes offers three main benefits:

  • Better production behavior with less effort
  • A more efficient developer experience
  • Significant cost savings

Kubernetes can help us improve the resilience, performance, and nimbleness of our micro-services. New containers can be started on available hardware in a matter of seconds, compared to minutes for EC2 virtual machines. It has built-in rolling updates and blue-green deployment. Coupled with feature-rich health, liveness, readiness, and startup checks, this leads to safer rollouts without manual intervention.

K8s, and the third-party add-ons, have the potential to improve our developer efficiency. Because K8s clusters are essentially the same no matter where they’re run, developers can test services locally in the same configuration and manner as in the cloud, including deployment.

As a result of K8s’ architecture (e.g. abstracting away the underlying EC2 instances that clusters run on, scaling capabilities, and intelligently placing containers on the right hardware), significant cost savings can be achieved. For some services, this could yield as much as a 40% reduction in underutilized hardware.

Workshop Overview

This is a workshop that we’ve run some of our engineers through to help demystify Kubernetes (K8s) and introduce some of the concepts.

In this session, you’ll learn how to:

  • Use secrets
  • Use namespaces
  • Deploy containers (a microservice and a database)
  • Use K8s load balancing
  • Connect a micro-service to a database with secrets
  • Hook up an Ingress to expose the service to requests from a domain URI

How To Use The Workshop

All the files you’ll create are viewable in their final state at https://github.com/dmusicant-dk/k8s-tutorial/tree/main/k8s-tutorial

Throughout this workshop, you’ll see a lot of new concepts introduced. Whenever they appear inside code, for example in a YAML file, we’ll include comments in the file itself that explain the concepts. Having the documentation close to the code should make it clearer for you.

We also want to call out that for the purposes of this workshop, this is not a production-quality application. For example, you’ll deploy a database into K8s without stateful, persistent storage. That is beyond the scope of this exercise.

Below you’ll walk through the following steps:

  1. Setup (Pre-Requisites, Getting Familiar With kubectl, Clusters vs Contexts)
  2. Step 0: Create Our Namespaces
  3. Step 1: Create a Secret
  4. Step 2: Create the Database Deployment
  5. Step 3: Create the Database Service
  6. Step 4: Create An “ExternalName”
  7. Step 5: Create a REST Application
  8. Step 6: Create the REST Service
  9. Step 7: Create the Ingress

For each, you’ll see a “Hands-On Steps” section to indicate where your work begins.

Video Walkthrough

You can also follow along with all the steps in this article by watching the video below:

Pre-Requisites

You can run this exercise in any Kubernetes cluster on your laptop. We recommend Minikube, but you can also use Docker Desktop’s Kubernetes. We’ll call out all places where there are differences.

OS

We’ve run this workshop on both Windows and Mac laptops. Throughout, we’ll call out any places where there might be differences in commands.

Windows

If you’re on Windows, you should be using at least Windows 10 version 2004. Also, make sure you run all your commands in a command window as Administrator.

If Using Docker Desktop Kubernetes

For this tutorial, we recommend using Minikube so you won’t have to worry about this, but if you’re using Docker Desktop’s Kubernetes, you must have Windows version 2004, build 19041 or greater so that WSL2 (which Docker Desktop uses underneath) will work.

If Using Minikube

If you’re using Minikube, you need to make sure you have Hyper-V installed. To enable it, or just check whether it’s enabled, follow these steps:

Installing From The GUI

1. Open “Windows Features” by choosing “Turn Windows features on or off”

2. Look for Hyper-V and enable it

3. Restart your computer

Alternative: Installing From Powershell

On Windows, run the following code as Administrator:
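A minimal sketch, using the standard Windows optional-features cmdlet in PowerShell:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All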

Then restart.

Kubectl

This command-line tool is the main way to work with K8s clusters. You will use it to install, uninstall and query for objects in your K8s cluster. You can install this if your Docker Desktop didn’t already install it, or if that version is too old.
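For example, assuming the Chocolatey/Homebrew package managers recommended below, a typical install looks like:

# Windows (run as Administrator)
choco install kubernetes-cli

# macOS
brew install kubectl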

Docker

You must have Docker installed (on Windows, install Docker Desktop). You’ll need Docker installed even if you’re using Minikube.

Installing Minikube

This is a single-node Kubernetes cluster you can install locally. I highly recommend starting here instead of Docker’s K8s. If you completely bork your cluster, you can just uninstall and reinstall Minikube in one step from the command line.

I recommend you install Chocolatey on Windows (for macOS, use Homebrew) as it makes this even easier.

Windows
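A sketch, assuming Chocolatey (run as Administrator):

choco install minikube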

MacOS
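A sketch, assuming Homebrew:

brew install minikube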

You can also use Docker’s Kubernetes instead, but follow the steps below to make it work correctly.

Uninstalling (Optional)
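If you ever want to start over, a sketch of the cleanup (assuming the Chocolatey/Homebrew installs above):

# Throw away the local cluster and its VM
minikube delete

# Optionally remove Minikube itself
choco uninstall minikube    # Windows
brew uninstall minikube     # macOS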

There are a few places where Minikube does things a bit differently and you’ll need some Minikube-specific commands, but the workshop will call those out when relevant.

Docker Kubernetes

If you’re using Minikube for this workshop, skip this. If you chose Docker’s Kubernetes, you’ll need to enable it by following these steps.

1. Open Docker Settings / Preferences

Right-click Docker Desktop and choose “Settings”

2. Check to enable Kubernetes

Go to the Kubernetes tab and click “Enable Kubernetes”

Getting Familiar With kubectl

We’ll be using kubectl from the command line as this is the main way you interact with a Kubernetes cluster.

Your kubectl is “driven” by the cluster config installed at ~/.kube/config. When you installed Minikube, it put its configuration in that file and set itself as the default context. As a result, if you run a kubectl command, it will run against your Minikube cluster.

Some initial commands you can run:
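The exact list isn’t important; a few illustrative ones you’ll lean on throughout this workshop:

# Show client and server versions
kubectl version

# List the nodes in your cluster (Minikube has just one)
kubectl get nodes

# List everything deployed in a namespace
kubectl get all -n default

# Show recent events in a namespace (handy for debugging later on)
kubectl get events -n default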

Clusters vs Contexts

It’s important to note that you do not specify which cluster to run your kubectl commands against. Rather, you specify the context. A context is the combination of your current authentication identity and the cluster itself. Think of it as user@cluster.

You can see this by running kubectl config get-contexts

Using Clusters via Context
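A sketch of the typical commands (the context names minikube and docker-desktop are the defaults those installs create):

# See all contexts; the current one is marked with *
kubectl config get-contexts

# Point kubectl at the Minikube cluster
kubectl config use-context minikube

# Or point it at Docker Desktop's cluster
kubectl config use-context docker-desktop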

With our setup now complete, you’re ready to start the workshop!

Step 0: Create Our Namespaces

Reminder that all the code for this Workshop is located here: https://github.com/dmusicant-dk/k8s-tutorial/tree/main/k8s-tutorial

Namespaces are a way to organize your deployed objects in Kubernetes. You aren’t required to create any, but it’s highly recommended: without one, everything is installed into the default namespace. Namespaces are not a security feature; anything in any namespace can reach anything else in another namespace (except for some specific cases that we’ll explore later in the workshop).

We’re going to be using namespaces here so you can also get a better understanding of what they mean for your applications.

Hands-On Steps

1. Create the following YAML file as data-layer-namespace.yaml. NOTE: the names of YAML files don’t really matter (kubectl doesn’t care), but descriptive names help you keep track.
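A minimal sketch of the file (the namespace name is the only meaningful part; the rest is standard Namespace boilerplate):

# data-layer-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  # The name other objects will reference in their "namespace" field
  name: data-layer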

2. Next, in a terminal (in the same directory as that YAML file) run the command:
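Assuming the file name from step 1:

kubectl apply -f data-layer-namespace.yaml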

Now repeat steps 1–2 but this time with app-layer instead of data-layer (both inside the YAML and in the name of the file).

You can now verify you created the namespaces by running:
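kubectl get namespaces

You should see data-layer and app-layer listed alongside the built-in namespaces (default, kube-system, etc.).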

Command Breakdown

To better understand what you’ve implemented, it can help to talk a bit about what YAMLs are truly doing for Kubernetes. K8s YAMLs are usually described as blueprints, but they’re actually a little different from that. They serve two purposes:

  1. Telling K8s that you need it to create something
  2. Making a “contract” with K8s that it will make a best effort to keep its state in alignment with what you asked it to create and how you want those things to “live”

For example, you might “ask” K8s to create three instances (called replicas in K8s) of a micro-service, but to scale that up to four instances if CPU utilization goes above 70%. You do this by making a deployment YAML with those settings. If one of the instances becomes unhealthy and shuts down, K8s will see that its state (i.e. only two instances) doesn’t match the contract your YAML specified. It will then start up a new instance and ensure it’s healthy.

This is why the K8s command is kubectl apply and not kubectl create. We are telling K8s to apply a specific state to the cluster and then ensure it stays that way. You can see this if you look at the YAML as it lives in your cluster. K8s will add a special state property to it so it knows whether it’s fulfilling the contract or not.

Example Live YAML:
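An illustrative sketch of what the namespace looks like when read back from the cluster (for example with kubectl get namespace data-layer -o yaml); the metadata values below are placeholders:

apiVersion: v1
kind: Namespace
metadata:
  name: data-layer
  creationTimestamp: "2021-03-30T00:00:00Z"
  resourceVersion: "12345"
  uid: 00000000-0000-0000-0000-000000000000
spec:
  finalizers:
    - kubernetes
status:
  # K8s' record of whether the "contract" is currently being fulfilled
  phase: Active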

Step 1: Create a Secret

You’re going to be deploying a database, so you’ll need a way to inject the username and password. Secrets are a good way to do this, but note that secrets are not encrypted; they’re only base64 encoded. Most companies will use additional K8s Operators (kind of like plugins) to enable encrypting secrets.

As we mentioned earlier, only a few things cannot be accessed across namespaces, and Secrets are one of them. As a result, you’ll need to create the same secret in both the data-layer and app-layer namespaces.

Note: Secrets must be base64 encoded in your YAML

Hands-On Steps

1. Create your base64-encoded username and password.

On Windows
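A sketch in PowerShell (substitute your own values; root and dbpassword1 are the credentials this workshop uses later):

[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("root"))
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("dbpassword1"))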

On Mac/Linux (or GitBash)
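The same idea with the base64 utility (-n stops echo from appending a newline, which would corrupt the encoded value):

echo -n 'root' | base64
echo -n 'dbpassword1' | base64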

2. Create the following YAML file as db-credentials.yaml:
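A sketch of the Secret; the object name (db-credentials) and key names (username, password) are assumptions, but the deployments in later steps must reference whatever names you use here:

# db-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: data-layer
type: Opaque
data:
  # base64 of "root" and "dbpassword1" from step 1
  username: cm9vdA==
  password: ZGJwYXNzd29yZDE=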

3. In a terminal, in the directory where your YAML file is, run the command to create the secret in the data layer:
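Assuming the file name above:

kubectl apply -f db-credentials.yaml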

4. Create the YAML file app-db-credentials.yaml and use the same contents as db-credentials.yaml but change the namespace to app-layer.

IMPORTANT: Do not yet apply this secret! Later, we’ll generate a specific error so we can see how secrets are not accessible across namespaces and do some debugging.

Step 2: Create the Database Deployment

New Concepts

There are a few new K8s concepts you’re going to be utilizing here; covering them first will make this step easier to follow.

In a non-K8s environment, we typically deploy microservices individually into either virtual machines running an operating system like Windows or into docker containers running an OS like Linux. In K8s, we also use containers but K8s actually has you specify what you want to do with Pods, not containers. K8s does not run containers directly; it runs Pods that manage containers.

  • Pods: The smallest deployable units of compute that can be run in K8s. They can run one or more containers which then share storage and network resources, as well as the same specification (or contract) for how to run the containers
  • Nodes: The physical or virtual servers on which the Pods are run

You do not actually create Pods directly. Instead, you create a Deployment that tells K8s the state you’d like to have, and K8s then creates Pods based on that contract.

The Deployment

You’ll be using MySQL and the database credentials we created above (although we really only need the password as the application will use the default “root” user).

Hands-On Steps

1. Create the following YAML file db-deployment.yaml:
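A sketch of the Deployment, assuming the Secret name and key from Step 1. The label app: mysql is an arbitrary choice, but the Service in Step 3 must select on the same label:

# db-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-deployment
  namespace: data-layer
spec:
  # The "contract": keep exactly one MySQL pod running
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          ports:
            - containerPort: 3306
          env:
            # Injects the root password from the Secret created in Step 1
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password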

2. Go to the terminal in the directory where your YAML file is and apply it to K8s:
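With the file name above, that’s:

kubectl apply -f db-deployment.yaml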

At this point, you will have a MySQL database running in K8s with the username root and password dbpassword1. However, it’s not yet easily reachable from within K8s; to make it reachable, you’ll add a load balancer called a Service. Otherwise, you’d need to know the pod’s IP address inside K8s, which can change if the pod restarts. It’s also not reachable from outside K8s, but this is on purpose: resources that do not need to be directly used by external clients should not be exposed outside the cluster.

Step 3: Create the Database Service

A Service is essentially the load balancer for the application’s containers, whether internally or externally. Without it, you’d need to know the address of the containers within K8s and hope they never change. Instead, you can give one static address for clients to use that can distribute all requests to the Pods (and containers) behind it.

There are a number of types of Service we can choose from:

  • ClusterIP: This is for internal cluster access only. If your service doesn’t need to be externally accessible, use this default type.
  • NodePort: For opening a specific port on the nodes externally and load balancing your containers behind it. Note: You can only use a port between 30000–32767 and you can only have one service per port.
  • LoadBalancer: The standard way to get a static IP exposed for incoming traffic to your application.
  • Ingress: Not actually a service per se. It’s basically like a router that lets you map specific domain names and paths to Services.

We could have defined the Service in the same YAML as the Deployment using the YAML document separator (---), but for this workshop, it’s easier to follow if they’re in separate files.

Hands-On Steps

1. Create the following YAML db-service.yaml:
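A sketch of the Service; the name mysql-db-service is an assumption (the ExternalName in Step 4 must point at whatever name you choose), and the selector matches the labels from Step 2:

# db-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-service
  namespace: data-layer
spec:
  # Internal-only load balancing; nothing outside the cluster needs the database
  type: ClusterIP
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306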

2. Go to the terminal in the directory where your YAML file is and apply it to K8s:
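Assuming the file name above:

kubectl apply -f db-service.yaml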

At this point, other deployed services can reach the database from anywhere within the cluster as long as they know the namespace it is in. This limits the ability to move the database without impacting dependent services, so in the next step, we look at how to avoid this problem.

Step 4: Create An “ExternalName”

Since you’re going to be putting the database in another namespace from the app, we need a way to cleanly reach it. In K8s, while services can be accessed across namespaces, this has to be done by creating a sort-of proxy that maps the in-cluster DNS name of the resource (that you want to connect to) to a name in your namespace. The format of the URL you use for the other service is <service-name>.<namespace>.svc.cluster.local. An ExternalName simplifies this so we can make an alias like db-service and use that in our application.

Hands-On Steps

1. Create the following YAML as db-external-name.yaml:
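A sketch, assuming the Service name from Step 3 (mysql-db-service); the alias db-service is the host name the application will use:

# db-external-name.yaml
apiVersion: v1
kind: Service
metadata:
  # The short name the app will use from within app-layer
  name: db-service
  namespace: app-layer
spec:
  type: ExternalName
  # Full in-cluster DNS name of the database Service in data-layer
  externalName: mysql-db-service.data-layer.svc.cluster.local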

2. Go to the terminal in the directory where your YAML file is and apply it to K8s:
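Assuming the file name above:

kubectl apply -f db-external-name.yaml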

Step 5: Create a REST Application

You’ll be constructing a simple C# service that will connect to the MySQL database we just created. It will run in a docker container and have the database connection credentials injected via environment variables from our database credentials secret.

Note: This is not a production-quality micro-service. Its only purpose is to show how to make an application that can be deployed in K8s, reach a resource in a different namespace, and be reached by a URL from outside K8s.

Download the code from here (some elided code shown below).

Code Snippets:

The full code should be pulled from the GitHub repo above.

REST Controller Snippet:
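The real controller lives in the repo; a heavily simplified sketch of the idea (the route, environment-variable names, and query are illustrative, and it assumes the MySqlConnector package):

using System;
using Microsoft.AspNetCore.Mvc;
using MySqlConnector;

[ApiController]
[Route("")]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public string Get()
    {
        // Credentials arrive as environment variables injected from the K8s Secret
        var user = Environment.GetEnvironmentVariable("DB_USERNAME");
        var password = Environment.GetEnvironmentVariable("DB_PASSWORD");

        // "db-service" is the ExternalName alias created in Step 4
        var connectionString =
            $"Server=db-service;Port=3306;User ID={user};Password={password}";

        using var connection = new MySqlConnection(connectionString);
        connection.Open();

        // Returns "Got: first value!" once everything is wired up
        using var command = new MySqlCommand("SELECT 'first value!'", connection);
        return "Got: " + command.ExecuteScalar();
    }
}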

Dockerfile Snippet:

A Dockerfile is an “instruction set” for Docker on how you want a container image constructed. You’ll use one to tell it how to compile, build, package and install our C# app in the container.
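A sketch of the kind of multi-stage Dockerfile used (the image tags and the published DLL name are illustrative):

# Build stage: restore, compile, and publish the C# app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: copy only the published output into a smaller image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "K8sTutorialApi.dll"]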

Build The Docker Image

You’re now going to build the Docker image (from which the container deployed into K8s will be created) and then put it into Minikube’s image cache so the cluster can find it.

Hands-On Steps

Download the code from here

1. In the directory containing the Dockerfile and C# code, run the command:
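Assuming an image name of rest-app tagged v1 (any name works, but the deployment YAML below must reference the same one):

docker build -t rest-app:v1 .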

2. Next, you need to put the image into Minikube’s local repository:
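A sketch using Minikube’s image cache (newer Minikube versions can use minikube image load instead):

minikube cache add rest-app:v1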

Deploy The REST Micro-Service

We’ll now take the Docker image we just created and deploy it into K8s.

Hands-On Steps

  1. Create a YAML file with the below contents named app-rest-deployment.yaml:
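A sketch of the Deployment. The image name matches the one built above, the environment-variable names match the controller sketch, and the Secret name is the app-layer copy you created (but haven’t applied yet) in Step 1:

# app-rest-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-rest-deployment
  namespace: app-layer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-app
  template:
    metadata:
      labels:
        app: rest-app
    spec:
      containers:
        - name: rest-app
          image: rest-app:v1
          # Use the locally built/cached image rather than pulling from a registry
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password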

2. In the directory where you created that YAML file, run the command to install it in K8s:
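Assuming the file name above:

kubectl apply -f app-rest-deployment.yaml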

NOTE: you will get an error (if you’ve been following all our instructions). This is what we want!

You should see something like
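(Illustrative output of kubectl get pods -n app-layer; the pod name hash and ages will differ.)

NAME                                   READY   STATUS                       RESTARTS   AGE
app-rest-deployment-7d9c6d5b8f-x2lzq   0/1     CreateContainerConfigError   0          15s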

Notice that zero of the pods are ready and you have a “create” error.

If you remember from the beginning of this workshop, we have a command that helps us figure out what’s going on: kubectl get events -n app-layer.

Hands-On Steps

Let’s run this:
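kubectl get events -n app-layer

An illustrative tail of the output (your events will differ; the Secret name matches the one assumed in Step 1):

LAST SEEN   TYPE      REASON   OBJECT                                     MESSAGE
5s          Warning   Failed   pod/app-rest-deployment-7d9c6d5b8f-x2lzq   Error: secret "db-credentials" not found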

Notice that last line. Remember, we created our secret but only deployed it to the data-layer namespace. As you can see, a Secret is not a cross-namespace resource. Let’s correct this and try again.

You should have created the secret earlier in Step 1 as app-db-credentials.yaml with app-layer as the namespace.

Hands-On Steps

You’ll do this with a few steps (sketched below):

  1. Clean up our failed deployment
  2. Apply the secret to the app-layer namespace
  3. Re-apply the REST deployment
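A sketch of those three commands, assuming the file names used earlier:

# 1. Remove the failed deployment
kubectl delete -f app-rest-deployment.yaml

# 2. Create the secret in the app-layer namespace
kubectl apply -f app-db-credentials.yaml

# 3. Deploy the REST application again
kubectl apply -f app-rest-deployment.yaml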

Step 6: Create the REST Service

You’ll be opening access to the REST containers externally (see Step 7: Create the Ingress below) but we don’t want to tie that to the specific container or pod instances. If those restart, or we add more, it would be difficult to manage those and update our IPs. So you’ll add a Service in front of it to essentially act as a single-point load balancer with an internal DNS name.

Hands-On Steps

1. Create the YAML file named app-rest-service.yaml:
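A sketch of the Service; the name app-rest-service is an assumption the Ingress in Step 7 will reference, and the selector matches the deployment labels from Step 5:

# app-rest-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-rest-service
  namespace: app-layer
spec:
  type: ClusterIP
  selector:
    app: rest-app
  ports:
    - port: 80
      targetPort: 80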

2. In the directory for that file, run the command:
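Assuming the file name above:

kubectl apply -f app-rest-service.yaml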

Step 7: Create the Ingress

An Ingress is a way to allow external requests to the K8s cluster to reach Services inside the cluster. We can also direct these requests to specific Services by matching them to domains and paths.

Step 7.A: Enable Ingress Capabilities

K8s does not come with the underlying Ingress capabilities out of the box. You need to choose a provider and install it. For both Minikube and Docker K8s, we can use Nginx.

Hands-On Steps

In Minikube

Minikube needs add-ons to do a few things, such as running Ingress controllers. You can enable the ingress add-on with the command:

minikube addons enable ingress

In Docker Desktop

Run the following command to install Nginx’s ingress in Docker Desktop:
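The manifest URL below is a best guess at the v0.41.2 controller file referenced in the next section; double-check it against the ingress-nginx releases before applying:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml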

Step 7.B: Create The Ingress

One thing to be mindful of: you might find the above command for installing the ingress in Docker K8s elsewhere with a different version of that controller file. Ours is v0.41.2; do not choose a different version. We chose that version so it would match the API version and YAML structure used in the Ingress YAML.

Hands-On Steps

1. Create the below file named app-rest-ingress.yaml:
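A sketch of the Ingress, using the networking.k8s.io/v1beta1 structure that matches the v0.41.2 controller. The host matches the fake domain you’ll add in Step 7.C, and the backend is the Service name assumed in Step 6:

# app-rest-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-rest-ingress
  namespace: app-layer
  annotations:
    # Tell the Nginx controller installed in Step 7.A to handle this Ingress
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: draftkingsk8s.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-rest-service
              servicePort: 80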

2. In the directory of the file, run the commands:
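Assuming the file name above; the second command is where you’ll see the IP address mentioned below:

kubectl apply -f app-rest-ingress.yaml

kubectl get ingress -n app-layer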

Make sure you grab the IP address that the command shows. You might have to wait a few minutes before it’s available.

Step 7.C: Add A “Domain” To Your Localhost

You’re going to use a fake domain draftkingsk8s.com.

Hands-On Steps

  1. Open your hosts file as an Administrator (On Windows, this is at: C:\Windows\System32\drivers\etc\hosts)
  2. Add the following line at the bottom of the hosts file (replace the IP with your Ingress IP!! If it had localhost as the address, you can use 127.0.0.1): xxx.xxx.xxx.xxx draftkingsk8s.com

For example, if your Ingress was at 172.29.141.97, you would add the line:

172.29.141.97 draftkingsk8s.com

You should now be able to reach your service in the browser: http://draftkingsk8s.com/ and see “Got: first value!”

Congratulations, you have a fully working application made from resources deployed into a Kubernetes cluster!

Next Up

In Part 2, we’ll continue our learning by altering the output of this workshop to learn about Helm and how it can simplify all the steps we just walked through as well as give us a powerful templating capability.

In Part 3, we’ll look at linting, schemas, testing, and health checks with Helm, Kubernetes, and a few other tools.
