Windows desktop applications are my comfort zone. They’re like a pair of well-worn slippers that fit nicely and make me feel happy. Although your comfort zone can be useful, it’s not always helpful if you want to learn new things. I once tried learning Node.js while also writing the code in TypeScript, which was new to me too. Big mistake. I’d taken on too much and retreated to what I knew. You don’t want to be too far in or too far out of your comfort zone. There’s an ideal zone where you take on something new, but not so much that it becomes overwhelming. A Goldilocks zone, if you like.
I started becoming interested in containers. I’d used virtual machines before but these lightweight containers seemed slightly magical and started to capture my interest. When you read about containers and work with people who are using them daily, it seems inevitable that you will eventually hear about Kubernetes. I’ll let the project website try and describe what Kubernetes is.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
As a non-expert, it sounded like somewhere you could run your containers, but where some of the stuff you didn’t really want to do yourself was handled for you. Incidentally, remember the K8s shorthand. It pops up quite a bit and I didn’t know it meant Kubernetes at first.
I was a little bit intimidated by Kubernetes. It seemed like something that was becoming more and more relevant, yet it seemed well outside my area of expertise. I suspect I was worried that I had dismissed it as a bit of a fad and it had since proved it was here to stay. It was time to start taking it seriously.
There’s some great training material out there for Kubernetes. Katacoda has some Kubernetes courses with sandboxed instances for you to play around in. I learned a lot from doing those, but at the same time, I felt a bit like I was learning by rote. There are several different learning styles — I like to take a hands-on approach when I’m learning and I decided I needed a project. I would create a distributed application running inside containers on Kubernetes. I would take small steps and play to my strengths by using things I already know, where possible.
I didn’t have a good idea. Then I recalled a conversation I’d had at an internal event inside Redgate, where I’d joked that you could do FizzBuzz in a distributed way with a Fizz service and a Buzz service. It wasn’t an amazing idea, but it was something I could work with. If you don’t know FizzBuzz, check out the Tom Scott video above or the Wikipedia page.
So, idea found! Where do I start? Let’s get stuck in!
I’ll go for a naive approach as this code doesn’t have to be production-ready. It’s going to be a learning experience. I will undoubtedly get things wrong, but that’s part of the process of learning. I will inevitably get to the end and wish I’d done things differently, but that’s OK. If I’ve reached that point then I’ve increased my knowledge.
My FizzBuzz application is going to have three REST APIs:
- A Fizz service — to tell us when we should be Fizzing
- A Buzz service — to tell us when we should be Buzzing
- A FizzBuzz service — to query the other two APIs and determine what the final output should be. This will be exposed to the end-user.
You don’t even need to do anything with containers to get this working! You could have them on different ports on the same machine. As part of my “play to your strengths” policy, I’m going to create each of these in C# as an ASP.NET Web API in .NET Core, something I’m very familiar with. I’m not going to go into too much detail about the application code itself, but I’ll make all the source code available later on.
Let’s look at the output from my Fizz service when I input 3.
Excellent, it says I need to output “Fizz”. I’ve also included the hostname in the JSON output for my Fizz and Buzz services. I did some tests with various inputs to all three of the services and everything seems to be working correctly.
So, the APIs are created, what next? Well, they’re going to need to run in containers, so let’s do that. I’m familiar with Docker so I’ll create Docker containers that encapsulate my new services. Have a look at the docker.com article “Dockerize an ASP.NET Core application” to see how it can be done.
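The Dockerfile for each service follows the two-stage pattern from that article: build and publish with the SDK image, then copy the output into the smaller runtime image. The .NET Core version tags and the project name below are placeholders, not the ones from my actual repository:

```dockerfile
# Build stage: restore dependencies and publish the service
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published output, no SDK
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "FizzBuzzWorker.dll"]
```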
I’ve now got each service in its own Docker image, but they are parts of a system that haven’t been connected up yet. They are LEGO bricks in a bucket, unaware of their ultimate potential. We need to stick those bricks together to create something more impressive. I’m vaguely aware of Docker Compose, and as I know Docker it seems like a way to bridge the gap between the individual containers and having this running in Kubernetes.
It’s very easy to describe a distributed application using a `docker-compose.yml` file. Let’s have a look at one I’ve written for our application.
My FizzBuzz application is entirely described here. I’ve said that the Fizz and Buzz services should use the `robclenshaw/fizzbuzz-worker` image, with some environment variables to describe how each should behave. The FizzBuzz service uses a different image, and I’ve exposed port 80 in the container to be accessible on port 7000 locally. Running `docker-compose up` in the directory that contains our YAML file will launch the application.
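A Compose file matching that description looks something like the following. The environment variable names and the FizzBuzz image name are guesses for illustration; only the worker image name and the 7000→80 port mapping come from the text above:

```yaml
version: "3"
services:
  fizz:
    image: robclenshaw/fizzbuzz-worker
    environment:
      - DIVISOR=3   # assumed variable name
      - WORD=Fizz
  buzz:
    image: robclenshaw/fizzbuzz-worker
    environment:
      - DIVISOR=5
      - WORD=Buzz
  fizzbuzz:
    image: robclenshaw/fizzbuzz   # assumed image name
    ports:
      - "7000:80"
```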
We’ve got a FizzBuzz application running inside Docker Compose! Let’s check the output for an input of 15.
Looks good to me! The hex strings are the Fizz and Buzz hostnames.
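I haven’t reproduced the screenshot here, but the response was shaped roughly like this; the field names and hostname values below are made up for illustration:

```json
{
  "input": 15,
  "output": "FizzBuzz",
  "fizzHostname": "5f2a9c1b30de",
  "buzzHostname": "9e41d7a8c2f0"
}
```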
I’m done, right? I have a distributed FizzBuzz application running in containers, do I actually need Kubernetes? Well, I’m far from an expert in Kubernetes, but I do understand the code I’ve just written and what I know is that I haven’t catered for several scenarios. If one of those containers goes down, I’m in trouble. It ought not to, but unexpected things happen. I don’t want to get a phone call in the middle of the night because a container fell over. Also, what happens if my application proves popular and gets a vast number of requests? Am I going to need multiple instances of my application? Will I need some load balancing? All these problems are things I don’t really want to have to think about — if there’s some help out there then I want that help! Kubernetes, among other things, can assist. So let’s get to the final piece of the jigsaw and get this onto Kubernetes.
I’ll use Minikube, which is a single node Kubernetes cluster that is easy to get started with. I’ve already used it with the Katacoda tutorials and we use it internally at Redgate for development purposes. I’ve also discovered a tool called Kompose. The website says…
“A conversion tool to go from Docker Compose to Kubernetes”
Great! It seems quite straightforward to use. Kompose has taken my Docker Compose file and converted it into YAML files that Kubernetes can understand. Using `kubectl` to apply our new YAML files to the Minikube cluster, we now have a working FizzBuzz application inside Kubernetes!
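The whole conversion and deployment boils down to two commands, run from the directory containing the Compose file:

```shell
# Convert the Compose file into Kubernetes deployment/service manifests
kompose convert -f docker-compose.yml

# Apply all the generated YAML files to the Minikube cluster
kubectl apply -f .
```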
The output looks fairly similar to the output from when we were using Docker Compose. It’s not that surprising really — it’s the same application but in a different environment.
We’re really done now, right? But there’s one last question — why did I bother putting the container hostnames in the output? That info seems a bit redundant! Well, I wanted to investigate how Kubernetes can help with resiliency and I needed the hostnames for that.
I want to simulate an unreliable service, and it’s difficult to be unreliable when your service is only calculating a single remainder. I can simulate unreliability by adding an extra API endpoint to the fizzbuzz-worker image to serve as a liveness probe for Kubernetes. I’ll tell it to return a successful status code until a request is made to the main endpoint, after which it will start returning status code 500 (Internal Server Error). I’ve also scaled the Fizz and Buzz services up to 10 replicas. Let’s refresh the browser and see what output we get now.
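In the worker Deployment YAML, that probe gets wired up with a `livenessProbe` section. The probe path and the timing values here are illustrative, not taken from my actual manifests:

```yaml
spec:
  replicas: 10
  template:
    spec:
      containers:
        - name: fizz
          image: robclenshaw/fizzbuzz-worker
          livenessProbe:
            httpGet:
              path: /health   # assumed probe endpoint
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

When the probe fails repeatedly, Kubernetes restarts that container and the Service routes traffic to one of the healthy replicas instead.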
Notice that the Fizz and Buzz hosts are different? Kubernetes has queried the liveness of the previous containers and determined that they are no longer live. It has pointed the Fizz and Buzz services at other replicas and has simultaneously restarted the previous containers. I can keep hitting refresh and getting different hostnames in the output. I have provided Kubernetes with a fundamentally unreliable set of services, and Kubernetes is mitigating that.
After I got to the end of this exercise, I rewrote the APIs in Go. I’d been experimenting with Go, and rewriting the APIs helped deepen my knowledge of it. It also massively reduced the container image size: because Go binaries can be statically linked, you can build on the empty scratch base image. DockerHub tells me that each container image is now about 3 MB compressed.
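The scratch-based build is another two-stage Dockerfile; the Go version and binary name below are placeholders:

```dockerfile
# Build stage: compile a fully static Go binary (no C dependencies)
FROM golang:1.19 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /fizzbuzz-worker .

# Final stage: scratch contains nothing but our binary
FROM scratch
COPY --from=build /fizzbuzz-worker /fizzbuzz-worker
ENTRYPOINT ["/fizzbuzz-worker"]
```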
I also rewrote the Kubernetes YAML files by hand. This probably wasn’t necessary, but I wanted to do it and it helped my understanding.
So, what are my takeaway points from this exercise?
- Taking small steps really works. It helps prevent becoming overwhelmed with new information.
- Don’t be afraid to write bad code when you’re learning. I’m a bit of a perfectionist and one of the first things that the world of work taught me was knowing what is “good enough”. It’s tempting to try and make things perfect from the get-go. But that would be daft. I wrote FizzBuzz and it’s not ever going to be in production! If you get to the end of a learning project like this and realize you’ve written something badly, that’s a really good sign because it means you’ve learned something.
- Kubernetes is not as scary as I had thought.
- There are always new tools, new frameworks, etc. If you choose to learn about something new, great! But it’s OK not to know them all. Since I’ve started working with Kubernetes, I’ve discovered even more new tools and frameworks that I don’t know about. I don’t think I could keep on top of each one.
- If you ask someone to write FizzBuzz for a coding interview and they mention Kubernetes, they may be overthinking the problem.