Rock-Solid K3s on OCI — Part 4

Tim Clegg
Published in Oracle Developers · Dec 13, 2022
Photo by eberhard grossgasteiger: https://www.pexels.com/photo/low-angle-shot-of-rock-formation-1366909/

This is the fourth and final (at least for now) article in this series. It’s been a fun journey, with the first part focusing on setting up the OCI resources needed. Part two found us going through the K3s installation, then in part three we installed the OCI Cloud Controller Manager and nginx ingress controller.

Let’s put this newly-minted environment to use by deploying a couple of applications in our K3s cluster. We’ll deploy two applications: Jupyter and Node-RED.

Set up TLS

Start by creating a self-signed certificate that we’ll use when accessing the applications. The next few commands are taken directly from the OCI documentation (see the section called Creating the TLS Secret):

# Run in: K3s Server SSH session

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt

Even though this is not something you’d want to do in a “real” (production) environment, this will allow us to at least encrypt the traffic to/from our applications.
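You can verify that the secret landed in the cluster using nothing beyond standard kubectl:

# Run in: K3s Server SSH session

$ kubectl get secret tls-secret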

Jupyter

Jupyter is a data science platform that’s often used for AI/ML work. I’ve already written an article that describes how to launch Jupyter on OCI using your web browser. Follow the steps in that article to create the Jupyter container and push it to OCI Container Registry (OCIR). Make sure to get the full path to the container in OCIR… you’ll need this in a minute! To get the full path, click on the repository (it’ll expand to show the tags). Click on your tag (if you’ve followed the directions I gave, it’ll be v1.0.0-ol8). The details will be shown on the right side of the screen, with the Full path being one of the attributes.

From a system that has access to the K3s API (I’ve opted to SSH into the K3s server), copy and paste the following into a file (I’ve opted to call it jupyter.yml):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
  namespace: default
  labels:
    app: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
        ver: v1.0.0
    spec:
      containers:
        - name: jupyter
          image: <region>.ocir.io/<namespace>/jupyter:v1.0.0-ol8
          imagePullPolicy: Always
          ports:
            - containerPort: 8888
              protocol: TCP
          command:
            - jupyter
            - lab
            - --allow-root
            - --notebook-dir
            - /jupyter
            - --no-browser
            - --autoreload
            - --ip=0.0.0.0
            - --NotebookApp.base_url=/jupyter
      imagePullSecrets:
        - name: ocirsecret
      nodeSelector:
        kubernetes.io/arch: arm64
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-svc
spec:
  selector:
    app: jupyter
  ports:
    - port: 8888
      targetPort: 8888
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/app-root: /jupyter
spec:
  tls:
    - secretName: tls-secret
  rules:
    - http:
        paths:
          - path: /jupyter
            pathType: Prefix
            backend:
              service:
                name: jupyter-svc
                port:
                  number: 8888

NOTE: Notice how we used a Service name of jupyter-svc rather than jupyter (the name used by the Deployment and many other app references)? Kubernetes injects several environment variables into the Jupyter pod based on the Service name, some of which Jupyter might pick up as configuration (a Service named jupyter would cause JUPYTER_PORT to be set, which Jupyter reads). By using jupyter-svc, K8s instead sets variables such as JUPYTER_SVC_PORT, which Jupyter does not look at (by default).
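The Deployment also references an image pull secret named ocirsecret so that the kubelet can pull the image from OCIR. If you haven’t already created it (for example, while following the earlier Jupyter article), here’s a sketch of what that looks like, with placeholders for your own region, tenancy namespace, username, auth token and email:

# Run in: K3s Server SSH session
# Placeholder values shown -- substitute your own.

$ kubectl create secret docker-registry ocirsecret \
    --docker-server=<region>.ocir.io \
    --docker-username='<namespace>/<username>' \
    --docker-password='<auth_token>' \
    --docker-email='<email>'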

Go ahead and apply it:

# Run in: K3s Server SSH session

$ kubectl apply -f jupyter.yml

If you look at the sample manifest in the previous article I wrote on running Jupyter, there are a few differences. Both that manifest and this one declare a Deployment and a Service, however the Service type is different… instead of a type of LoadBalancer, we’re using a type of ClusterIP. Because we’re using an ingress, we also define an Ingress for Jupyter. The Ingress uses TLS (with the self-signed cert we previously configured) and associates the /jupyter path with the Service we’ve defined.
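If you want to confirm that the Service really is of type ClusterIP (and therefore only reachable through the ingress), standard kubectl shows it:

# Run in: K3s Server SSH session

$ kubectl get service jupyter-svc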

Make sure that the Jupyter pod is running by executing `kubectl get pods`; the pod should show a STATUS of Running.

Once the Jupyter pod is running, there are a few more commands worth looking at (along with their output):

# Run in: K3s Server SSH session

$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jupyter   1/1     1            1           77d

$ kubectl get ingress
NAME          CLASS    HOSTS   ADDRESS    PORTS     AGE
jupyter-ing   <none>   *       <pub_ip>   80, 443   74d

The deployment is running and accessible via the ingress (the public IP address shown in the command output).

The Ingress will show the OCI Load Balancer (LB) IP address. There’s another way to see the LB IP address:

# Run in: K3s Server SSH session

$ kubectl get services --no-headers -l app.kubernetes.io/component=controller -n ingress-nginx -o custom-columns=:status.loadBalancer.ingress[0].ip | grep -v '<none>'
<pub_ip>
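Before opening a browser, you can optionally verify that the ingress is answering, using curl (the -k flag accepts our self-signed certificate):

# Run in: K3s Server SSH session (or anywhere that can reach the public IP)

$ curl -k -I https://<pub_ip>/jupyter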

Open a web browser to: https://<pub_ip>/jupyter

You’ll be taken to the Jupyter login screen. Your browser (if it’s doing its job) will alert you about the untrusted (self-signed) certificate being used. After telling your browser to proceed anyway, you’ll see the familiar Jupyter login screen. Go ahead and enter your password. Success! Let’s add Node-RED to the mix.

Node-RED

It’s now time for us to set up our second application: Node-RED. Node-RED allows for “low-code, event-driven programming” (copied almost verbatim from their website). In essence, you can configure actions that will run when a given trigger is activated. It’s particularly handy for stitching together different APIs (systems), and it’s a great platform for experimenting with IoT systems. Let’s get right into setting it up.

Because we’re only using A1 (Arm) compute instances for the K3s agent nodes (the nodes that run pods), we don’t need to build amd64 (aka x86_64) variants of our containers; we only need arm64 (aka aarch64) versions of the Node-RED container. To expedite things, I’d suggest that you SSH to your bastion, then SSH to one of the agents and build the container there:

$ ssh -A <bastion_pub_ip>
$ ssh agent1.k3s.k3s.oraclevcn.com
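Since the image will be built natively on the agent, it’s worth a quick sanity check that you really are on an arm64 machine:

# Run in: K3s Agent1 SSH session

$ uname -m
aarch64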

Create a new directory for us to work in (try to keep things tidy) and enter it:

# Run in: K3s Agent1 SSH session

$ mkdir node-red
$ cd node-red

Place the following contents into a new file called Dockerfile:

# Base image: Oracle Linux 8 (slim)
FROM container-registry.oracle.com/os/oraclelinux:8-slim
# Password passed in at build time (podman build --build-arg NODEREDPASSWORD)
ARG NODEREDPASSWORD

# OS packages, Python data-science libraries and Node.js 18 (arm64 build)
RUN microdnf install oraclelinux-developer-release-el8 dnf && dnf groupinstall "Development Tools" -y && dnf install -y nano python39-pip python39-numpy python39-devel wget && dnf clean all -y
RUN pip3 install wheel cython pandas scikit-learn bcrypt && pip3 install tensorflow
RUN wget https://nodejs.org/dist/v18.10.0/node-v18.10.0-linux-arm64.tar.xz

RUN mkdir -p /usr/local/lib/nodejs
RUN tar -xJvf node-v18.10.0-linux-arm64.tar.xz -C /usr/local/lib/nodejs
ENV PATH="/usr/local/lib/nodejs/node-v18.10.0-linux-arm64/bin:$PATH"
RUN npm install -g --unsafe-perm node-red
RUN mkdir -p ~/.node-red

# Render settings_template.js into /root/.node-red/settings.js (substituting the
# hashed password), then remove the temporary files
COPY settings_template.js /.
RUN echo $NODEREDPASSWORD >> /provided_password.txt && \
    settings_file=$(< /settings_template.js) && \
    source /dev/stdin <<<"$(echo 'cat <<EOF'; echo "$settings_file"; printf '\nEOF';)" > /root/.node-red/settings.js && \
    rm /provided_password.txt && \
    rm /settings_template.js

EXPOSE 1880/tcp
CMD node-red

Create another new file in the same directory called settings_template.js and place the following in it (this is copied and slightly modified from the default settings file that is generated by Node-RED if there’s no settings.js file found):

module.exports = {
  flowFile: 'flows.json',
  credentialSecret: "$(yes `cat /provided_password.txt` | node-red admin hash-pw | awk '{print $2}')",
  flowFilePretty: true,

  adminAuth: {
    type: "credentials",
    users: [{
      username: "admin",
      password: "$(yes `cat /provided_password.txt` | node-red admin hash-pw | awk '{print $2}')",
      permissions: "*"
    }]
  },
  uiPort: process.env.PORT || 1880,
  apiMaxLength: '5mb',
  httpNodeRoot: '/nodered/',
  httpAdminRoot: '/nodered/',
  diagnostics: {
    enabled: true,
    ui: true,
  },
  runtimeState: {
    enabled: false,
    ui: false,
  },
  logging: {
    console: {
      level: "info",
      metrics: false,
      audit: false
    }
  },
  editorTheme: {
    projects: {
      enabled: true,
      workflow: {
        mode: "manual"
      }
    },

    codeEditor: {
      lib: "monaco",
      options: {
      }
    }
  },
  functionExternalModules: true,
  debugMaxLength: 1000,
  mqttReconnectTime: 15000,
  serialReconnectTime: 15000,
}
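The `$(...)` snippets in this template aren’t valid JavaScript on their own; they get expanded by the shell during the image build, via the `cat <<EOF` heredoc trick in the Dockerfile’s final RUN step. Here’s a minimal, standalone illustration of that technique (hypothetical file names, assuming a bash shell):

# Run in: any bash shell (standalone demo)
# Render a template: every $(...) inside it is evaluated by the shell.

$ echo 'built_by: "$(whoami)"' > template.txt
$ tpl=$(< template.txt)
$ source /dev/stdin <<<"$(echo 'cat <<EOF'; echo "$tpl"; printf '\nEOF';)" > rendered.txt
$ cat rendered.txt
# rendered.txt now contains built_by: "<your username>"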

If you don’t have podman installed on agent1, go ahead and install it now (sudo dnf install podman). Then build the container image (make sure to use a proper password) on agent1:

# Run in: K3s Agent1 SSH session

$ NODEREDPASSWORD=<your_password_here> podman build --pull --build-arg NODEREDPASSWORD -t nodered:ol8-arm64-v8 .
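Before pushing it anywhere, you can optionally smoke-test the image right on agent1 (the container name nodered-test is just an example; give Node-RED a few seconds to start before curling):

# Run in: K3s Agent1 SSH session

$ podman run --rm -d --name nodered-test -p 1880:1880 nodered:ol8-arm64-v8
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:1880/nodered/
$ podman stop nodered-test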

Now that the container image is built, it’s time to push it to the OCI Container Registry (OCIR):

# Run in: K3s Agent1 SSH session

$ podman login <region>.ocir.io
$ podman image tag nodered:ol8-arm64-v8 <region>.ocir.io/<namespace>/nodered:v1.0.0-ol8-arm64-v8
$ podman push <region>.ocir.io/<namespace>/nodered:v1.0.0-ol8-arm64-v8

The OCIR documentation has different region URLs that are available. Make sure that you use the proper URL for the region you’re working in.

One other word of caution: make sure to use the correct username format (<namespace>/<username> or if you’re using IDCS, <namespace>/oracleidentitycloudservice/<username>). The password will be an Auth Token associated with your account. Take a look at the OCI documentation for more info.
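For example, you could pass the username explicitly (placeholders shown; the password prompt expects your auth token):

# Run in: K3s Agent1 SSH session

$ podman login <region>.ocir.io -u '<namespace>/<username>'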

With our container successfully built and pushed to OCIR, it’s time to create a deployment and run Node-RED on our K3s cluster! SSH back to the K3s server (or whatever machine can run kubectl and access the K3s API). From there, create a new file called node-red.yml, placing the following in it (make sure to update the placeholders for the values in your environment):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered
  namespace: default
  labels:
    app: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
        ver: v1.0.0
    spec:
      containers:
        - name: nodered
          image: <region>.ocir.io/<namespace>/nodered:v1.0.0-ol8-arm64-v8
          imagePullPolicy: Always
          ports:
            - containerPort: 1880
              protocol: TCP
      imagePullSecrets:
        - name: ocirsecret
      nodeSelector:
        kubernetes.io/arch: arm64
---
apiVersion: v1
kind: Service
metadata:
  name: nodered-svc
spec:
  selector:
    app: nodered
  ports:
    - port: 1880
      targetPort: 1880
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodered-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - secretName: tls-secret
  rules:
    - http:
        paths:
          - path: /nodered
            pathType: Prefix
            backend:
              service:
                name: nodered-svc
                port:
                  number: 1880

After making sure to use the correct region, namespace and anything else that might differ for your environment, apply the Node-RED manifest:

# Run in: K3s Server SSH session

$ kubectl apply -f node-red.yml
deployment.apps/nodered created
service/nodered-svc created
ingress.networking.k8s.io/nodered-ing created

Let’s check to make sure that the Deployment and Ingress are ready:

# Run in: K3s Server SSH session

$ kubectl get deployment nodered
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
nodered   1/1     1            1           119s

$ kubectl get ingress nodered-ing
NAME          CLASS    HOSTS   ADDRESS    PORTS     AGE
nodered-ing   <none>   *       <pub_ip>   80, 443   2m25s

Let’s go to https://<pub_ip>/nodered in our web browser and check it out! After calming your browser down (due to the use of self-signed certs) you should see the login screen, then the main Node-RED screen after you log in.
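If the login screen doesn’t appear, the pod logs and the ingress are the first places to look. A quick troubleshooting sketch using standard kubectl commands:

# Run in: K3s Server SSH session

$ kubectl get pods -l app=nodered
$ kubectl logs deployment/nodered
$ kubectl describe ingress nodered-ing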

Conclusion

This has been a fun journey for me. I hope it’s been helpful and interesting for you! Although this is not a solution I’d consider for production environments, I think it’s a terrific playground for development and testing purposes.

Because K3s offers a rich K8s API and experience, migrating from a lightweight K3s implementation like this to a full-blown, feature-rich and enterprise-grade K8s implementation such as OCI Container Engine for Kubernetes (OKE) is a breeze. This “path forward” is one of the primary advantages I see with using K3s over other lightweight container orchestration engines (especially those that are not based on K8s).

The tight integration and management of OCI Load Balancers and (optionally) OCI Block Volumes is a big plus in my mind. This is made possible by the OCI Cloud Controller Manager (and optional CSI if you opted to install it).

Getting to this point, we’ve put a fair amount of time into building the K3s environment. On the flip side, it’s super easy (and fast) to deploy a new OKE cluster, which makes OKE my preferred K8s choice. Still, for a very lightweight implementation, this solution serves the purpose.

Until next time, may all your manifests be successfully applied!

Join our Oracle Developers channel on Slack to discuss OKE and other topics!

Build, test, and deploy your applications on Oracle Cloud — for free! Get access to OCI Cloud Free Tier!
