Rock-Solid K3s on OCI — part 3
This is part three in a series on using K3s in an OCI Always Free account. In case you missed the previous parts, you'll want to start at the beginning of the series.
We're now ready to set up the integration of our newly-minted K3s environment with OCI. This is an exciting step, as it'll allow for management of OCI Load Balancers, Block Storage Volumes, etc.
Add OCI Container Registry credentials
We'll use the OCI Container Registry (OCIR) to store the container images for the applications we'll be building and deploying. Since those registries will be private, K3s needs OCI credentials to pull from them. Create a docker-registry secret:
kubectl create secret docker-registry ocirsecret --docker-server=<region>.ocir.io --docker-username='<namespace>/<username>' --docker-password='<OCI_auth_token>' --docker-email='<your_oci_email>'
Take a look at the OCIR documentation for the different region URLs to use for the region you’re working in.
Make sure to use the correct username format (<namespace>/<username> or if you’re using IDCS, <namespace>/oracleidentitycloudservice/<username>). The password will be an Auth Token associated with your account. Take a look at the OCI documentation for more info.
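The secret isn't picked up automatically; workloads have to reference it from their pod spec. Here's a minimal sketch of a Deployment that uses the ocirsecret we just created (the deployment name and image path are placeholders for whatever you'll actually deploy later in the series):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # Placeholder image path; point this at your own OCIR repository
          image: <region>.ocir.io/<namespace>/my-app:latest
      imagePullSecrets:
        - name: ocirsecret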
Set up the OCI Cloud Controller Manager
The OCI Cloud Controller Manager (CCM) integrates Kubernetes (our K3s cluster, in this case) with your OCI account, creating and managing any OCI load balancers (LBs) the cluster needs.
Start by making a few directories:
sudo mkdir /etc/kubernetes
sudo mkdir /etc/oci
Place the following contents into a new file at /etc/oci/cloud-provider.yaml.
# Copied from https://raw.githubusercontent.com/oracle/oci-cloud-controller-manager/master/manifests/provider-config-instance-principals-example.yaml
useInstancePrincipals: true
compartment: <OCID of compartment>
vcn: <OCID of k3s VCN>
loadBalancer:
  # subnet1 configures one of two subnets to which load balancers will be added.
  subnet1: <OCID of LB Subnet>
  securityListManagementMode: None
rateLimiter:
  rateLimitQPSRead: 20.0
  rateLimitBucketRead: 5
  rateLimitQPSWrite: 20.0
  rateLimitBucketWrite: 5
Example (using nano):
sudo nano /etc/oci/cloud-provider.yaml
Then paste the contents (above) with the values (OCIDs) of your environment.
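If you don't have the OCIDs handy, and assuming the OCI CLI is installed and configured somewhere, lookups along these lines can help (the --query expressions are just one way to trim the output):
# List compartments (name and OCID)
oci iam compartment list --query 'data[].{name:name, id:id}' --output table

# List VCNs and subnets in the compartment used for the K3s cluster
oci network vcn list --compartment-id <OCID of compartment> --query 'data[].{name:"display-name", id:id}' --output table
oci network subnet list --compartment-id <OCID of compartment> --query 'data[].{name:"display-name", id:id}' --output table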
Now create the secret in K3s:
kubectl create secret generic oci-cloud-controller-manager -n kube-system --from-file=cloud-provider.yaml=/etc/oci/cloud-provider.yaml
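You can sanity-check that the secret landed where the CCM will look for it; it should contain a single key named cloud-provider.yaml:
kubectl -n kube-system describe secret oci-cloud-controller-manager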
Install the OCI Cloud Controller Manager
Run the following on the server1 instance:
export RELEASE=$(curl -s "https://api.github.com/repos/oracle/oci-cloud-controller-manager/releases" | jq .[0].tag_name | tr -d "\"")
kubectl apply -f https://github.com/oracle/oci-cloud-controller-manager/releases/download/${RELEASE}/oci-cloud-controller-manager-rbac.yaml
curl -L https://github.com/oracle/oci-cloud-controller-manager/releases/download/${RELEASE}/oci-cloud-controller-manager.yaml | sed 's/ node-role.kubernetes.io\/control-plane: ""/ node-role.kubernetes.io\/control-plane: "true"/g' > oci-cloud-controller-manager.yaml
kubectl apply -f oci-cloud-controller-manager.yaml
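A quick note on the sed in the download step above: the upstream manifest selects control-plane nodes with an empty label value, while K3s labels its server nodes with node-role.kubernetes.io/control-plane=true, so the value has to be rewritten for the DaemonSet to schedule. You can confirm how your nodes are labeled:
# The server node(s) should carry node-role.kubernetes.io/control-plane=true
kubectl get nodes --show-labels | grep control-plane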
Run the following to get the running pods:
kubectl -n kube-system get po
Among the pods listed, you should see the oci-cloud-controller-manager pod in a Running state.
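To narrow the listing to just the CCM, and to pull its logs if it isn't healthy (this assumes the DaemonSet keeps its upstream name, oci-cloud-controller-manager), something like the following works:
# Show only the CCM pods
kubectl -n kube-system get po | grep oci-cloud-controller-manager

# If the pod isn't Running, the logs usually point at a bad OCID in cloud-provider.yaml
kubectl -n kube-system logs daemonset/oci-cloud-controller-manager | tail -n 20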
Configure SELinux
You’ll likely see some SELinux errors at some point, similar to the following (seen in /var/log/messages):
Aug 26 05:45:54 server1 setroubleshoot[226499]: failed to retrieve rpm info for /sys/fs/cgroup
Aug 26 05:45:55 server1 setroubleshoot[226499]: SELinux is preventing ip6tables from ioctl access on the directory /sys/fs/cgroup.
This can be remedied by running the following:
sudo ausearch -c 'ip6tables' --raw | audit2allow -M my-ip6tables
sudo semodule -X 300 -i my-ip6tables.pp
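To verify the module took, and to deal with any further denials that show up for other commands, the same ausearch/audit2allow pattern applies (the iptables lines below are a hypothetical example of repeating it for another command):
# Confirm the custom policy module is loaded
sudo semodule -l | grep my-ip6tables

# Repeat the pattern for any other denied command reported in /var/log/messages, e.g. iptables
sudo ausearch -c 'iptables' --raw | audit2allow -M my-iptables
sudo semodule -X 300 -i my-iptables.pp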
Install nginx-ingress
We'll set up an nginx ingress controller, which will allow us to route requests for different services through a single load balancer. We'll follow the official ingress-nginx installation directions for cloud providers.
Let’s start by downloading the latest nginx ingress manifest. Following the directions, run:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/$(curl -s "https://api.github.com/repos/kubernetes/ingress-nginx/releases" | jq '.[] | [select(.tag_name | startswith("controller-"))]' | jq -cs '.[0] | .[].tag_name' | tr -d "\"")/deploy/static/provider/cloud/deploy.yaml > ingress-nginx.yml
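Before editing, it doesn't hurt to confirm which controller version the downloaded manifest contains:
# The version label appears throughout the manifest; -m1 stops at the first match
grep -m1 'app.kubernetes.io/version' ingress-nginx.yml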
We need to modify the Service in the file we just created (ingress-nginx.yml), adding annotations needed for configuring the OCI LB that will be created:
<snip>
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "10"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "<LB Subnet OCID>"
    oci.oraclecloud.com/oci-network-security-groups: "<LB NSG OCID>"
    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"
    oci.oraclecloud.com/node-label-selector: "k3s.io/role=agent"
spec:
<snip>
Install it by applying:
kubectl apply -f ingress-nginx.yml
At this point you should be able to see the service:
kubectl get service -n ingress-nginx
You should see something like the following:
$ kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller-admission   ClusterIP      10.43.2.45      <none>        443/TCP                      9h
ingress-nginx-controller             LoadBalancer   10.43.142.214   <public_ip>   80:30173/TCP,443:30297/TCP   9h
$
The EXTERNAL-IP field on the ingress-nginx-controller is what you’ll use to access the different applications we’ll be deploying. This is the public IP address used by the OCI load balancer (LB).
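A quick way to confirm that traffic is flowing through the OCI LB to the controller: with no Ingress resources defined yet, a request to the public IP should come back from nginx's default backend with an HTTP 404.
# Replace <public_ip> with the EXTERNAL-IP value from the service listing above
curl -i http://<public_ip>/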
Install OCI Container Storage Interface (CSI) (optional)
This is an optional step. If you'd like to be able to provision OCI Block Volumes and attach them to pods, this section is for you.
The following steps are based on the CSI directions:
kubectl create secret generic oci-volume-provisioner -n kube-system --from-file=config.yaml=/etc/oci/cloud-provider.yaml
export RELEASE=$(curl -s "https://api.github.com/repos/oracle/oci-cloud-controller-manager/releases" | jq .[0].tag_name | tr -d "\"")
kubectl apply -f https://github.com/oracle/oci-cloud-controller-manager/releases/download/${RELEASE}/oci-csi-node-rbac.yaml
curl -L https://github.com/oracle/oci-cloud-controller-manager/releases/download/${RELEASE}/oci-csi-controller-driver.yaml | sed 's/ node-role.kubernetes.io\/master: ""/ node-role.kubernetes.io\/master: "true"/g' > oci-csi-controller-driver.yaml
kubectl apply -f oci-csi-controller-driver.yaml
kubectl apply -f https://github.com/oracle/oci-cloud-controller-manager/releases/download/${RELEASE}/oci-csi-node-driver.yaml
kubectl apply -f https://raw.githubusercontent.com/oracle/oci-cloud-controller-manager/master/manifests/container-storage-interface/storage-class.yaml
After applying the above manifests, check to see if the pods are running (it might take a minute or two to complete):
kubectl -n kube-system get po | grep csi-oci-controller
kubectl -n kube-system get po | grep csi-oci-node
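Once the CSI pods are up, you can exercise the driver with a throwaway PersistentVolumeClaim. A minimal sketch, assuming the storage class created by the manifest above is named oci-bv (verify with kubectl get storageclass) and keeping in mind that OCI block volumes have a 50 GB minimum size:
# Throwaway claim to confirm the CSI driver can provision an OCI block volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi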
Wrapping It Up
Wow, we've accomplished a lot in this part of the series! We've set up the OCI Cloud Controller Manager, as well as an nginx ingress controller. The next step is to deploy a couple of apps to see this all in action. Let's do that in the next part…
Meanwhile, chat with us on the Oracle Developer Slack channel!