Synopsis
Kubernetes, also known as K8s (because there are 8 letters between K and s), is an open-source system for automating deployment, scaling, and management of containerized applications.
Creating Kubernetes playgrounds can be costly for students and hobbyists who might not have access to organization-grade resources. Also, developers might want a playground K8s environment to test out their configurations or implement some proof-of-concepts. This article is targeted towards this audience.
In this article, I want to give a brief overview of how to create a lightweight, two-node Kubernetes environment on an average laptop (Windows OS, Intel i5 processor, and 8 GB RAM). We will also deploy a demo application across both nodes.
There are many tools available to create lightweight Kubernetes clusters such as K3s, Minikube, Kind, MicroK8s, K0s, etc. Each of these has its pros, cons, and use-cases. In this article, I am going to use K3s.
The purpose of this environment would be to understand the architecture, play around with configurations and commands while learning Kubernetes. This environment cannot be used to deploy full-fledged applications because of limited resources.
Prerequisites
- The first thing we need is a virtualization platform. In this article, I will be using VMware but any platform can be used. The most common ones are VMware and VirtualBox.
- Alpine Linux ISO. Alpine is a popular lightweight and stripped-down Linux distribution.
Configuration
Installing Alpine Linux
I won’t be covering the steps to install Alpine Linux in this article, but it’s fairly easy to do and only takes 20–30 minutes. Follow the official installation guide for VMware or VirtualBox. Make sure that you install Alpine Linux in system disk ("sys") mode, so that the OS runs from disk rather than RAM and memory usage stays low.
I recommend allocating 2 GB RAM and 2 processors to the VM.
We need 2 VMs in our environment, one will act as the master node and the second one will act as a worker node. To create the 2nd VM, you can just clone the existing VM or set up a new one as per your need.
Installing k3s
k3s is an open-source, certified Kubernetes distribution built for resource-constrained environments. It is production-ready and fully conformant. Check out the k3s GitHub repository.
Let’s proceed with the installation. First, we need a k3s server (Kubernetes cluster master).
Installing k3s on Alpine is fairly simple. Run the following command on the VM you wish to keep as cluster master.
curl -sfL https://get.k3s.io | sh -
This command will install k3s and start the k3s service. To verify the installation, run the following command.
rc-update show
k3s should be listed among the services added to the default runlevel.
That’s it for installing the k3s server, now the next step will be to set up the k3s agent (worker) node.
Before we proceed with the agent installation, we need 2 values.
- K3S_URL, i.e. the k3s server URL: check your VM’s IP address using the ifconfig command. Your k3s server URL will be https://<ip-address>:6443
- K3S_TOKEN, a.k.a. K3S_CLUSTER_SECRET: this can be obtained by running the following command on your cluster master.
cat /var/lib/rancher/k3s/server/node-token
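Putting the two values together, here is a minimal sketch. The IP address below is a hypothetical stand-in; use the one ifconfig reports on your master VM.

```shell
# Hypothetical IP address; substitute the one reported by `ifconfig`
# (or `ip addr`) on your master VM.
SERVER_IP="192.168.1.10"

# The k3s server URL the agent will use to join the cluster.
K3S_URL="https://${SERVER_IP}:6443"

# On the real master, the token comes from the file shown above:
#   K3S_TOKEN="$(cat /var/lib/rancher/k3s/server/node-token)"

echo "${K3S_URL}"
```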
Now let's log in to our 2nd VM to install k3s in worker (agent) mode.
Run the following command,
curl -sfL https://get.k3s.io | K3S_URL=<your server url> K3S_TOKEN=<your server token> sh -
Once k3s is installed and the service is running, you can verify the installation by running the rc-update show command on the worker node. This time the k3s-agent service should be listed, which means k3s is running in agent mode.
We can now run the kubectl get nodes command on the cluster master to confirm that both nodes have joined the cluster.
Deploying a demo application
Now we are ready to deploy our application on the cluster we created.
You can clone this repo, which contains the demo app we are going to deploy: https://github.com/lakhinsu/k3s-demo-app
Before we deploy this app, we need to build our Docker image and publish it to a Docker registry. We will not cover this here, but you can check out the Docker documentation.
Now that our image is published, replace the your-image placeholder in the k8s.deployment.yaml file with your image name.
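One way to do the substitution from the command line is with sed. The image name below is hypothetical, and the file created here is a stand-in for the repo's actual k8s.deployment.yaml.

```shell
# Hypothetical published image name; use your own registry/repository/tag.
IMAGE="docker.io/example-user/k3s-demo-app:latest"

# Stand-in for the image line inside the repo's k8s.deployment.yaml.
printf 'image: your-image\n' > /tmp/k8s.deployment.yaml

# Replace the placeholder in place (GNU sed syntax).
sed -i "s|your-image|${IMAGE}|" /tmp/k8s.deployment.yaml

cat /tmp/k8s.deployment.yaml
```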
We need to create a Deployment in our Kubernetes environment. Refer to the k8s.deployment.yaml file.
Next, we need a Kubernetes Service for our application. Refer to the k8s.service.yaml file.
Last but not least, we need to create an Ingress object in Kubernetes to allow outside access to our application. Refer to the k8s.ingress.yaml file.
To create these objects in our Kubernetes cluster, run the following commands on the Kubernetes master node.
kubectl apply -f k8s.deployment.yaml
kubectl apply -f k8s.service.yaml
kubectl apply -f k8s.ingress.yaml
Once our application is deployed, you can run kubectl describe deployments to inspect our Deployment object. We can also check the status of the pods our app is running on with kubectl get pods -o wide.
In the output, we can see that our pods are spread across our 2 nodes. This is expected because of the podAntiAffinity rule we specified in k8s.deployment.yaml.
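For reference, a podAntiAffinity rule that spreads pods across nodes generally looks like the sketch below. The label values here are hypothetical stand-ins; check the actual k8s.deployment.yaml in the repo for the real ones.

```yaml
# Sketch only: hypothetical labels, not the repo's exact manifest.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: k3s-demo-app               # must match the pod template's labels
        topologyKey: kubernetes.io/hostname # at most one matching pod per node
```

With topologyKey set to kubernetes.io/hostname, the scheduler refuses to place two matching pods on the same node, which is why one pod lands on each of our 2 nodes.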
Accessing our application
Now that our application is deployed, let’s try and access it.
We can access our application at http://<k3s-server-url>:80/ using any REST client.
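For example, with curl. The IP address below is a hypothetical stand-in; use your k3s server's address, and note that the path depends on the routes the demo app exposes.

```shell
# Hypothetical server IP; replace with your k3s master's address.
SERVER_IP="192.168.1.10"
APP_URL="http://${SERVER_IP}:80/"

# On a machine that can reach the cluster, you would then run:
#   curl -i "${APP_URL}"
echo "${APP_URL}"
```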
Wrapping up
Finally, now that we have successfully created a k3s cluster and deployed our application, let’s see what our resource usage looks like.
We can see that resource usage is very low for both our VMs. If you see heavy resource usage, make sure that Alpine Linux is installed in system disk mode.
References
Documentation and articles I referred to:
- https://wiki.alpinelinux.org/wiki/Installation#Installation_Overview
- https://wiki.alpinelinux.org/wiki/Install_Alpine_on_VMware_Workstation
- https://docs.docker.com/engine/reference/commandline/build/
- https://rancher.com/docs/k3s/latest/en/
Thanks and Regards