Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes is one of the most widely used container orchestrators, if not the most used. Since K8s is open source, there are many ways to create a Kubernetes cluster: you can use GKE (Google Cloud), minikube, EKS (Amazon), and many more. One of those many offerings is K3S.
K3S is an open-source project backed by the CNCF. It is lightweight and very easy to install, since it is meant for edge and IoT devices.
In this post we’ll create a Kubernetes cluster using K3S in just a few commands.
First, we need an environment to spin up our cluster. For this example let’s use a DigitalOcean droplet, but you can use any Linux environment (as long as it is supported by K3S).
Creating a droplet is really easy. In the dashboard, simply click the Create button and then Droplets.
I’ll select an Ubuntu 20.04 LTS machine with 2 CPUs, 2GB of RAM, and 60GB of SSD, which costs about 15 USD/month. You don’t have to go for these specs; the minimum requirements for K3S are:
- RAM: 512MB Minimum (we recommend at least 1GB)
- CPU: 1 Minimum
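A quick way to confirm your machine meets these minimums, assuming a standard Linux environment where `nproc` and `free` are available:

```shell
# Quick sanity check against the K3S minimums
nproc              # CPU count; should print at least 1
free -m | grep Mem # total RAM in MB; should be at least 512
```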
Once the droplet has finished setting up, we can connect to it using the web console or an SSH client.
K3S provides an installation script to easily set up the cluster. There are other installation options as well, and you can find them all in the K3S documentation.
The simplest command is the following:
curl -sfL https://get.k3s.io | sh -
This script will download and install the latest stable version (at the time of writing, v1.20.0+k3s2). Personally, I like to pin the version so no surprises come along the way, which leaves you with a command like this (note that the variable goes on the sh side of the pipe, so the installer can read it):
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.19.5+k3s1 sh -
You can also save and run it as a bash script:
vim install-k3s.sh
# paste the previous command and save the file
bash install-k3s.sh
If everything finished correctly, K3S will be installed along with the kubectl command-line tool, so you don’t need to install it separately. You can then execute
kubectl get nodes
and you should see something like this:
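The exact output depends on your droplet’s hostname and the installed version; an illustrative example (the name and age are placeholders):

```
NAME         STATUS   ROLES    AGE   VERSION
my-droplet   Ready    master   30s   v1.19.5+k3s1
```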
K3S runs as a systemd service and will start automatically when the machine boots. You can check its status with
systemctl status k3s.service
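If the service is healthy, the output should report it as active; a trimmed, illustrative example:

```
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled)
   Active: active (running)
```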
Now the cluster is up and running and you can begin deploying your applications.
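As a first test of the cluster, you could apply a minimal Deployment manifest. This is just a sketch; the name hello-nginx and the nginx image are placeholders, not something K3S requires:

```yaml
# hello-nginx.yaml — a minimal example Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
          ports:
            - containerPort: 80
```

Apply it with kubectl apply -f hello-nginx.yaml and watch the pod come up with kubectl get pods.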