Setting Up a Kubernetes Learning Environment

Eric Gregory
Mirantis
Jun 7, 2022 · 4 min read

One of the biggest challenges for implementing cloud native technologies is learning the fundamentals — especially when you need to fit your learning into a busy schedule.

In this series, we’ll break down core cloud native concepts, challenges, and best practices into short, manageable exercises and explainers, so you can learn five minutes at a time. These lessons assume a basic familiarity with the Linux command line and a Unix-like operating system — beyond that, you don’t need any special preparation to get started.

In the last lesson, we learned how Kubernetes orchestrates containers, who runs Kubernetes, and why. Today, we’ll set up a cluster on our local machine so we can get some hands-on experience.

Table of Contents

  1. What is a Container?
  2. Creating, Observing, and Deleting Containers
  3. Build Image from Dockerfile
  4. Using an Image Registry
  5. Volumes and Persistent Storage
  6. Container Networking and Opening Container Ports
  7. Running a Containerized App
  8. Multi-Container Apps on User-Defined Networks
  9. Docker Compose and Next Steps
  10. What is Kubernetes?
  11. Setting Up a Kubernetes Learning Environment ←You are here

Choices when Accessing Clusters

As we’ll see in the next few lessons, a Kubernetes cluster can run in a wide variety of configurations according to its use case. For learning purposes, we’re going to start with a single-node cluster on our local machine.

In the “real world,” clusters will typically spread across multiple nodes (or machines), each of which will have dedicated roles like managing the cluster or running workloads. In this case, everything will happen on a single node. That will give us a relatively straightforward development environment in which we can figure out the fundamentals, before progressing to more complex configurations.

The first decision we need to make is which version of Kubernetes to install. Because Kubernetes is an open source system, it has given rise to a variety of alternative distributions with their own particular use cases, just as the Linux kernel is the foundation of numerous Linux distributions. The k0s (pronounced “kay-zeros”) project, for example, is a distribution designed for maximum resource efficiency, so it can run (and scale) anywhere from Raspberry Pis to powerful servers in public or private clouds. The creators of Kubernetes distributions usually try to maintain full compatibility with “upstream” Kubernetes — the original, baseline project — so that users can utilize the full suite of open source tooling developed by the community.

To get started, we’re going to use a distribution called Minikube, which is designed specifically for learning Kubernetes.

Installing Minikube

Minikube requires about 2GB of RAM to run, and by default it allocates 20GB of disk space for cluster usage. (You can configure it to use as little as 2GB of disk if you’re short on space.) It works on macOS, Linux, and Windows, and depending on its configuration it runs in a container, a virtual machine, a combination of the two, or on bare metal. Most people will run it via VMs, containers, or both. If you have Docker Desktop or Docker Engine installed (as in our previous unit), then you’re ready to get started.
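For instance, once Minikube is installed, the memory and disk allocations can be tuned with flags when the cluster is first created. The values below are illustrative, not recommendations:

```shell
# Create the cluster with the Docker driver and reduced
# memory and disk allocations (illustrative values).
minikube start --driver=docker --memory=2g --disk-size=5g
```

These flags only take effect when a cluster is created; to resize an existing cluster, you’d delete it with `minikube delete` and start again.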

On an x86-64 machine running macOS with the Homebrew package manager, installation is a single terminal command:

% brew install minikube

Users with other CPU architectures and operating systems will want to consult the official installation instructions to download the version that is right for them.
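For reference, the official instructions for Linux on x86-64 boil down to downloading the binary and placing it on your PATH:

```shell
# Download the latest Minikube release for Linux on x86-64
# and install it into /usr/local/bin.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

Windows users can grab an installer from the same releases page.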

Once Minikube is installed, make sure Docker is running, and then run the following command in the terminal:

% minikube start

You should see some friendly, emoji-illustrated status updates confirming that the system is running:

👍  Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
▪ Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image kubernetesui/dashboard:v2.3.1
▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
▪ Using image kubernetesui/metrics-scraper:v1.0.7
🌟 Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
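Beyond the startup log, you can ask Minikube directly whether the cluster’s components are up:

```shell
# Report the state of the host, kubelet, API server,
# and kubeconfig for the "minikube" cluster.
minikube status
```

A healthy cluster reports each component as `Running` or `Configured`.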

We can confirm that Kubernetes is running with another command:

% minikube kubectl -- get nodes

This will list the nodes associated with our Kubernetes cluster. We should get a result that looks something like this:

NAME       STATUS   ROLES                  AGE     VERSION
minikube   Ready    control-plane,master   5m26s   v1.23.1
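As a convenience, the minikube docs suggest aliasing the bundled kubectl so that subsequent commands read like standard kubectl:

```shell
# Use minikube's bundled kubectl as if it were installed standalone.
alias kubectl="minikube kubectl --"

# Now the familiar form works:
kubectl get nodes
```

We’ll write commands out in full in this series, but the alias can save some typing as you follow along.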

All right — we have one node, and that makes sense, because we said that this would be a single-node cluster. But what’s kubectl, and why are we using it?

For the answer, we need to break down an important element of Kubernetes development: cluster access.

Read the rest of this tutorial on the Mirantis blog.
