How to run a Kubernetes cluster with k3s
Kubernetes is a popular container orchestration platform, but setting up and managing a Kubernetes cluster can be a complex and time-consuming process. Fortunately, there are lightweight alternatives to full-fledged Kubernetes distributions, such as k3s, that provide a simpler and more streamlined installation and operation experience. In this article, we will guide you through the process of setting up a Kubernetes cluster with k3s.
What is k3s and why use it?
K3s is a lightweight Kubernetes distribution that is designed to be easy to install and run in resource-constrained environments, such as edge devices and single-board computers. It is fully compliant with the Kubernetes API, but it has a smaller resource footprint and a simplified architecture that make it easier to deploy and operate.
There are several reasons why one might consider using k3s for running a Kubernetes cluster:
- Lightweight: k3s is a lightweight and optimized version of Kubernetes, designed for resource-constrained environments such as edge computing devices, IoT devices, and low-power ARM devices.
- Easy to install: k3s can be installed with a single command, making it easy to set up and use.
- Fast startup time: k3s starts quickly, which makes it ideal for development and testing environments.
- Reduced resource usage: k3s is designed to use fewer system resources than traditional Kubernetes installations, which makes it more suitable for running on low-end hardware.
- Secure: k3s ships with several security features enabled by default, such as TLS encryption, RBAC, and container isolation, making it a more secure option for running Kubernetes clusters.
K3s is a great option for developers and teams who want to run Kubernetes on resource-constrained devices or environments and prefer a lightweight, easy-to-use Kubernetes distribution with strong security features.
Prerequisites
Before we begin, you will need to have the following:
- A Linux-based machine to run the k3s cluster. This can be a physical machine or a virtual machine.
- A user account with sudo privileges on the machine.
- No container runtime needs to be installed beforehand; k3s ships with containerd built in.
Step 1: Install k3s
The first step is to install k3s on the machine. This can be done by running the following command:
curl -sfL https://get.k3s.io | sh -
This command will download and install k3s on the machine. Once the installation is complete, k3s will be automatically started as a systemd service.
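To confirm that the installation succeeded, you can check the service status and ask the bundled kubectl for the node status (the k3s service name and the embedded kubectl are what the install script sets up by default):
sudo systemctl status k3s
sudo k3s kubectl get nodes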
Step 2: Access the Kubernetes API
k3s exposes the Kubernetes API through a local endpoint that can be accessed using kubectl, the Kubernetes command-line tool. To access the API, you will need to retrieve the k3s cluster configuration file, which contains the necessary credentials and endpoint information.
The configuration file is located at /etc/rancher/k3s/k3s.yaml on the machine. Copy this file to your local machine, save it as ~/.kube/config, and don't forget to change the file ownership:
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
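If you copied the file to a different machine, note that the server address inside it still points at 127.0.0.1; update it to the address of the k3s node, for example (the placeholder below is yours to fill in):
sed -i 's/127.0.0.1/<ip_or_hostname_of_k3s_node>/' ~/.kube/config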
Once you have the configuration file in place, you can use kubectl to interact with the Kubernetes API. For example, you can check the status of the Kubernetes nodes by running:
$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
host   Ready    control-plane,master   26m   v1.26.3+k3s1
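A default k3s installation also deploys a few add-ons out of the box (CoreDNS, the Traefik ingress controller, a local-path storage provisioner, and metrics-server); you can see their pods with:
kubectl get pods -n kube-system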
Now your k3s cluster is ready to run workloads, but if you want to add more nodes, follow the next step.
Step 3 (Optional): Join new node to the k3s cluster
First, retrieve the k3s server token from the existing server node by running the following command on it:
sudo cat /var/lib/rancher/k3s/server/node-token
This will output a token string that you will use later to join the new node to the cluster.
To join the new node to the k3s cluster, run the install script from Step 1 on the new node with the K3S_URL and K3S_TOKEN environment variables set; this installs k3s in agent mode and registers the node with the existing server:
curl -sfL https://get.k3s.io | K3S_URL=https://<ip_or_hostname_of_existing_node>:6443 K3S_TOKEN=<server_token> sh -
Replace <ip_or_hostname_of_existing_node> with the IP address or hostname of the existing server node, and <server_token> with the server token that you retrieved before.
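Alternatively, if the k3s binary is already present on the new node, you can start the agent process directly instead of re-running the install script (the install script essentially sets this same command up as a systemd service for you):
sudo k3s agent --server https://<ip_or_hostname_of_existing_node>:6443 --token <server_token>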
To verify that the new node has joined the k3s cluster, run the following command again:
kubectl get nodes
This should list all of the nodes in the k3s cluster, including the new node that you just added.
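For example, with one server and one freshly joined agent, the output might look like this (node names and ages are illustrative; agent nodes show no roles by default):
NAME    STATUS   ROLES                  AGE   VERSION
host    Ready    control-plane,master   45m   v1.26.3+k3s1
node2   Ready    <none>                 2m    v1.26.3+k3s1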
Now you can run your workloads on your k3s cluster and operate it with kubectl, helm, and all your other favorite tools.
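As a quick smoke test, you can deploy something small; the commands below use the public nginx image purely as an example:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc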
Conclusion
K3s is a lightweight, easy-to-use, and secure Kubernetes distribution that is ideal for deploying Kubernetes clusters in resource-constrained environments. It provides all the necessary components to run a production-ready Kubernetes cluster with a significantly smaller footprint and a simpler installation process than traditional Kubernetes distributions. K3s is suitable for developers, small businesses, and organizations with limited resources that want to run Kubernetes clusters for development, testing, or production. With k3s, you can run a Kubernetes cluster with minimal resources and effort while still enjoying all the benefits of Kubernetes, making it an excellent choice for anyone looking to deploy and manage containerized applications.