Rock-solid K3s on OCI — part 2
This is part two in the series on running K3s in an OCI Always Free tenancy. You should have the OCI infrastructure set up from part one. If you haven't done so yet, take a look back at part one for an introduction and the list of OCI resources needed.
This part sets up K3s on the OCI Compute Instances that we created in part one. Let’s get busy installing K3s…
Install K3s Server — Disable Firewalld
NOTE: Disabling the firewall is not a best practice for a long-term installation (certainly not for a production environment!), but it follows the recommendations given in the Rancher K3s documentation.
Run the following commands on the server to stop and disable the host-based firewall (firewalld):
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Install K3s Server — Create K3s configuration directories and files
There are several YAML files which will be used to configure K3s. Proceed by creating the /etc/rancher/k3s directory, where these files will be placed:
sudo mkdir -p /etc/rancher/k3s
The following creates a kubelet configuration file, which the K3s configuration (created next) will point at:
sudo su -c 'OCID=$(curl -s -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/instance/ | jq .id | tr -d "\"")
printf "apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
providerID: $(echo $OCID)" > /etc/rancher/k3s/$(hostname)-kubelet.yaml'
The above command queries the OCI instance metadata service (reachable from within any OCI instance) for the instance OCID and stores it in the configuration file. See the OCI documentation for more info on getting OCI instance metadata.
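If you'd like to see what that pipeline does in isolation, here's a small sketch that runs the same jq + tr steps against a made-up sample metadata document (the sample OCID and display name below are purely illustrative; on a real instance the JSON comes from the metadata endpoint):

```shell
# Illustrative only: the same jq + tr pipeline used above, run against a
# sample metadata document instead of the live endpoint. On a real OCI
# instance, the JSON comes from http://169.254.169.254/opc/v2/instance/
# (requested with the "Authorization: Bearer Oracle" header).
SAMPLE='{"id": "ocid1.instance.oc1.iad.exampleuniqueid", "displayName": "server1"}'
OCID=$(printf '%s' "$SAMPLE" | jq .id | tr -d '"')
printf '%s\n' "$OCID"   # ocid1.instance.oc1.iad.exampleuniqueid
```

As an aside, `jq -r .id` emits the raw (unquoted) string, which makes the `tr -d '"'` step unnecessary.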
Create an additional configuration file:
sudo su -c 'printf "write-kubeconfig-mode: \"0644\"
kubelet-arg:
- \"config=/etc/rancher/k3s/$(hostname)-kubelet.yaml\"
- \"cloud-provider=external\"
kube-controller-manager-arg:
- \"cloud-provider=external\"
" > /etc/rancher/k3s/config.yaml'
Install K3s Server — K3s installation
Run the following to install K3s Server:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION=$(curl -s "https://api.github.com/repos/k3s-io/k3s/tags" | jq -r '.[0].name') \
  sh -s - server \
  --disable-cloud-controller \
  --disable servicelb \
  --disable traefik \
  --kubelet-arg="cloud-provider=external" \
  --kubelet-arg="provider-id=$(curl -s -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/instance/ | jq -r .id)"
We're installing the latest K3s release and passing the instance OCID (pulled from the instance metadata service) as the kubelet provider ID. Several of the defaults are disabled (the built-in cloud controller, the service load balancer, and Traefik) because we're going to integrate our K3s installation with OCI using the OCI Cloud Controller Manager, to be installed later in this series. Note that the GitHub tags endpoint returns the newest tag, which may be a release candidate; pin an explicit version if you want to stay on stable releases.
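Once the installer finishes, K3s runs as a systemd service, and the kubeconfig is readable per the write-kubeconfig-mode setting, so a quick sanity check looks like this (output illustrative; your node name and version will differ):

```
$ sudo systemctl is-active k3s
active
$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
server1   Ready    control-plane,master   1m    v1.27.4+k3s1
```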
Install K3s Server — Taint the K3s Server node
Tainting the node will be important for the OCI Cloud Controller Manager:
kubectl taint nodes server1 node-role.kubernetes.io/master=true:NoSchedule
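You can verify that the taint was applied (output illustrative):

```
$ kubectl describe node server1 | grep Taints
Taints:             node-role.kubernetes.io/master=true:NoSchedule
```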
At this point, you should have the K3s server set up. Let's proceed by installing the K3s agents.
Configure K3s Agents
We have two K3s agents to set up: agent1.k3s.k3s.oraclevcn.com and agent2.k3s.k3s.oraclevcn.com. While the instructions are listed only once, you'll need to perform the same steps on both agent instances!
Configure K3s Agents — Disable Firewalld
NOTE: Disabling the firewall is not a best practice for a long-term installation (certainly not for a production environment!), but it follows the recommendations given in the Rancher K3s documentation.
Run the following commands to disable the host-based firewall (firewalld):
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Configure K3s Agents — Create K3s configuration directories and files
Get the token from the K3s Server, which the agents will need in order to register and communicate with it. From an SSH session on the K3s Server, run:
sudo cat /var/lib/rancher/k3s/server/node-token
This will output the K3s token to the screen. Copy that to your clipboard, as it’ll be used shortly.
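The token is a single long line, roughly of this shape (the placeholders below are entirely illustrative, not a real token):

```
K10<long hex string>::server:<random string>
```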
The K3s agents also use YAML files to configure K3s. We’ll build those now.
Create the /etc/rancher/k3s directory:
sudo mkdir -p /etc/rancher/k3s
The following creates a kubelet configuration file, which the K3s configuration (created next) will point at:
sudo su -c 'OCID=$(curl -s -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/instance/ | jq .id | tr -d "\"")
printf "apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
providerID: $(echo $OCID)" > /etc/rancher/k3s/$(hostname)-kubelet.yaml'
As with the server, this command queries the OCI instance metadata service for the instance OCID and stores it in the configuration file. See the OCI documentation for more info on getting OCI instance metadata.
Create an additional configuration file:
sudo su -c 'printf "kubelet-arg:
- \"config=/etc/rancher/k3s/$(hostname)-kubelet.yaml\"
- \"cloud-provider=external\"
kube-proxy-arg:
- \"healthz-bind-address=0.0.0.0:10256\"
node-label:
- \"k3s.io/role=agent\"
server: \"https://server1.k3s.k3s.oraclevcn.com:6443\"
selinux: true
" > /etc/rancher/k3s/config.yaml'
Configure K3s Agents — Install K3s Agent
Run the following to install K3s Agent:
curl -sfL https://get.k3s.io | \
  K3S_URL=https://server1.k3s.k3s.oraclevcn.com:6443 \
  K3S_TOKEN=<YOUR_K3S_TOKEN_HERE> \
  INSTALL_K3S_VERSION=$(curl -s "https://api.github.com/repos/k3s-io/k3s/tags" | jq -r '.[0].name') \
  sh -
Make sure that you give it the K3s token that you copied from the K3s server!
Verify agent nodes
From the K3s Server (SSH'd into the bastion, then into the K3s Server), run the following:
kubectl get nodes
You should see something like the following:
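(Node names, ages, and versions will vary; this sample output is only illustrative.)

```
NAME      STATUS   ROLES                  AGE   VERSION
agent1    Ready    <none>                 2m    v1.27.4+k3s1
agent2    Ready    <none>                 1m    v1.27.4+k3s1
server1   Ready    control-plane,master   12m   v1.27.4+k3s1
```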
If you do not see the two K3s agents, review the agent installation steps on the missing instance(s).
Wrapping it up
We now have K3s running in our OCI Always Free account! Congratulations. But we’re not done yet… next we’ll be setting up the OCI Cloud Controller Manager, which will allow our K3s environment to manage things like OCI Load Balancers and Block Storage Volumes! It’s well worth our time and effort, so that’ll be our next article in this series.
If you’re curious about the goings-on of Oracle Developers in their natural habitat, come join us on our public Slack channel!