Travel light: K3S+MySQL on OCI

Ali Mukadam
Published in Oracle Developers
Oct 27, 2021 · 4 min read

In a previous post, I took K3S for a spin on OCI and created a basic, single-node server/agent that uses SQLite as its data store. One of the neat things about K3S is that it gives you a number of options for the data store, e.g.:

  • MySQL
  • An embedded or external etcd
  • PostgreSQL

In this post, we’ll attempt to use MySQL as the data store. Since MySQL is available as a service on OCI (MySQL Database Service, or MDS), we will use that too. We will also add agent nodes. The infrastructure consists of the K3S cluster along with subnets, a bastion host and MySQL. We will also need 2 NSGs (for the server and agent nodes) as well as a security list (for the MDS subnet). You can find the list of ports to open for the security rules in the K3S documentation.
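For reference, the inbound rules you’ll typically need (based on the K3S networking requirements; double-check against the current documentation for your K3S version) look roughly like this:

```
6443/tcp    agent nodes -> server       K3S supervisor and Kubernetes API
8472/udp    all nodes   -> all nodes    Flannel VXLAN overlay
10250/tcp   all nodes   -> all nodes    kubelet metrics
3306/tcp    server      -> MDS subnet   MySQL datastore (MDS security list)
```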

Setting up the server

We will create only 1 for now. Create a compute instance, add it to the server subnet and ensure you’ve also set the server NSG.

Once provisioned, log in to the server host and install MySQL Shell:

sudo dnf install -y mysql-shell

Obtain the MDS endpoint from the OCI console and connect to it using MySQL Shell:

mysqlsh <admin_user>@<private_endpoint>

In my case, the MDS endpoint is at 10.0.0.40. Set the K3S_DATASTORE_ENDPOINT variable:

K3S_DATASTORE_ENDPOINT='mysql://admin:V3ry_Secure!@tcp(10.0.0.40:3306)/k3s'
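Equivalently, you can assemble the endpoint URI from its parts. The values below are the example ones used in this post, so substitute your own admin user, password and endpoint:

```shell
# Example values from this post -- substitute your own.
DB_USER=admin
DB_PASS='V3ry_Secure!'
DB_HOST=10.0.0.40
DB_NAME=k3s

# Assemble the URI in the format K3S expects for a MySQL datastore:
# mysql://<user>:<password>@tcp(<host>:<port>)/<database>
K3S_DATASTORE_ENDPOINT="mysql://${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:3306)/${DB_NAME}"
export K3S_DATASTORE_ENDPOINT
echo "$K3S_DATASTORE_ENDPOINT"
```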

If you’re able to connect, you’re good to go; otherwise, check your security lists and NSGs. You can also use netcat to check your connectivity:

nc -v 10.0.0.40 3306

It’s still useful to have MySQL Shell on the server, though, especially if you face permission issues.

You can now install K3S:

curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint=$K3S_DATASTORE_ENDPOINT

Once K3S is setup, change the permission on the k3s.yaml file as before and verify you can see one node:

$ sudo chmod go+r /etc/rancher/k3s/k3s.yaml
$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
server   Ready    control-plane,master   34m   v1.21.5+k3s2

Obtain the K3S token which you will need to register the agents with the server:

$ sudo cat /var/lib/rancher/k3s/server/node-token
K104214ab68054dadeb1616eda4c1b824de120a9f297dee24a08014cbcf89802220::server:38123df6165f4573e47def57e601aeea
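As an aside, the token has two parts separated by `::`: a `K10`-prefixed hash of the cluster CA certificate, which agents use to verify the server, and the join credentials. This is based on the K3S secure token format, so treat the field names below as informal. The token value is the example one shown above:

```shell
# Example token from above -- yours will differ.
TOKEN='K104214ab68054dadeb1616eda4c1b824de120a9f297dee24a08014cbcf89802220::server:38123df6165f4573e47def57e601aeea'

CA_HASH="${TOKEN%%::*}"   # K10-prefixed hash of the cluster CA certificate
CREDS="${TOKEN#*::}"      # user:password pair used when joining the cluster
echo "$CA_HASH"
echo "$CREDS"
```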

Lastly, configure firewalld to allow traffic on port 6443:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload

Setting up the agents

Next, we’ll create the agents. We’ll create only 1 for now. Create a compute instance, add it to the agent subnet and ensure you’ve also set the agent NSG.

Once the agent node is provisioned, ssh to it using a second terminal. Verify you can connect to the API server (mine is assigned 10.0.0.18) using netcat:

$ nc -v 10.0.0.18 6443
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to 10.0.0.18:6443.

You can now proceed with the agent installation. Export the following environment variables first:

export K3S_URL=https://10.0.0.18:6443
export K3S_TOKEN=K104214ab68054dadeb1616eda4c1b824de120a9f297dee24a08014cbcf89802220::server:38123df6165f4573e47def57e601aeea

Note that since we are installing the agent, the token is required. Using K3S_URL during installation will configure the node to become an agent. Neat! Let’s do it:

curl -sfL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -

Now when you check the nodes on the server, you can see the worker node:

$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
server   Ready    control-plane,master   34m   v1.21.5+k3s2
a1       Ready    <none>                 47s   v1.21.5+k3s2

Here, we used a single compute instance to create the agent node, and you can repeat this step for each additional agent. Alternatively, you can create the agents with an instance pool.

First, create an instance configuration. Ensure you add the following in cloud init:

#cloud-config
runcmd:
  - curl -sfL https://get.k3s.io | K3S_URL=https://10.0.0.18:6443 K3S_TOKEN=K104214ab68054dadeb1616eda4c1b824de120a9f297dee24a08014cbcf89802220::server:38123df6165f4573e47def57e601aeea sh -

Replace the token value and url above with your values. You can now create the instance pool.
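One way to avoid editing the file by hand is to generate it from shell variables with a heredoc. This is a hypothetical helper, not part of the OCI tooling; `cloud-init.yaml` is an arbitrary filename, and you would paste its contents into the instance configuration:

```shell
# Replace these with your server's private IP and your node token.
K3S_URL="https://10.0.0.18:6443"
K3S_TOKEN="<your_node_token>"

# Write the cloud-init file, substituting the variables above.
cat > cloud-init.yaml <<EOF
#cloud-config
runcmd:
  - curl -sfL https://get.k3s.io | K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -
EOF

cat cloud-init.yaml
```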

Once the agents created by the instance pool are ready, check whether they are successfully registered:

kubectl get nodes

Conclusion

In this post, we explored the feasibility of running K3S with OCI MDS as its data store. In a future post, we will explore more K3S features and integration with OCI.
