Set up OpenFaaS on k3s with a private Docker registry
Purpose of this document
OpenFaaS offers Functions as a Service hosted on Kubernetes or Docker Swarm. Code is stored in plain files and everything is managed through the CLI, which makes the product interesting for companies that do not want to run or store code and binaries in a public cloud.
With this article I want to show how easy it is to set up a running environment on your local machine or on your own Kubernetes cluster.
OpenFaaS — How it works:
With the CLI you create a new project based on a template for one of the supported languages. The build command produces a Docker image, but you do not have to care about the build process itself. The image is then pushed to a registry and deployed as a function, again using the CLI. Everything is done with the CLI.
Advantages:
- as code is stored in files, you can reuse your existing source code management software
- templates can wrap nearly any legacy code, which is then executed in a container
- auto-scaling available, scaling to zero is possible
- no vendor lock-in, as it can be deployed anywhere
Let’s get it running:
You can use your existing Kubernetes cluster, or you might want to set up a new one with k3s:
curl -sfL https://get.k3s.io | sh -
The master node should be available; please check with:
kubectl get nodes
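k3s brings its own kubectl and stores the kubeconfig under /etc/rancher/k3s/k3s.yaml. If kubectl cannot find the cluster, either use the bundled binary or point your local kubectl at that file (reading it may require root):

sudo k3s kubectl get nodes

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes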
Check out OpenFaaS:
git clone https://github.com/openfaas/faas-netes
cd faas-netes
Create Namespaces:
kubectl apply -f ./namespaces.yml
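The manifest creates two namespaces: openfaas for the core services and openfaas-fn for the functions. You can verify them with:

kubectl get namespaces | grep openfaas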
Create a secret storing the user and a new random password (required for the UI):
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"
echo $PASSWORD
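If the shell variable gets lost, the password can be read back from the secret at any time:

kubectl -n openfaas get secret basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode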
Deploy OpenFaaS:
kubectl apply -f ./yaml
Confirm that everything is deployed correctly:
kubectl get deployments --namespace openfaas
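You can also wait until the gateway has fully rolled out before continuing:

kubectl rollout status -n openfaas deploy/gateway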
Normally OpenFaaS uses Docker Hub, but we want to use our own private registry, so we have to deploy one and expose its port:
kubectl run registry --image=registry:latest --port=5000 \
--namespace openfaas
kubectl expose deployment registry --namespace openfaas \
--type=LoadBalancer --port=5000 --target-port=5000
[EDIT] I am not sure whether this simple registry setup still works; you might have to follow a newer tutorial to get a private registry running. This might also bring the requirement to use credentials and an imagePullSecret in the deployment file.
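With recent kubectl versions the run command only creates a Pod, not a Deployment, so the expose command above may fail. A minimal sketch of an equivalent manifest (a plain registry without credentials or persistent storage, using the openfaas namespace as above) could look like this:

# minimal in-cluster registry: one replica, no auth, no persistence
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  namespace: openfaas
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: openfaas
spec:
  type: LoadBalancer
  selector:
    app: registry
  ports:
  - port: 5000
    targetPort: 5000

Save it as registry.yaml and apply it with kubectl apply -f registry.yaml.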
Done! The Web UI should be available at:
http://localhost:31112
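The UI is served by the gateway's NodePort service (called gateway-external in the faas-netes manifests). If port 31112 is not reachable, check which port was actually assigned:

kubectl get svc -n openfaas gateway-external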
Let’s deploy a function:
CLI Login:
faas-cli login --gateway http://localhost:31112 --password <pwd>
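If the $PASSWORD variable from above is still set in your shell, you can pipe it in instead of typing it; faas-cli can read the password from stdin:

echo -n $PASSWORD | faas-cli login --gateway http://localhost:31112 \
  --username admin --password-stdin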
Create a new function from a template — choose a language (more available):
faas-cli new --lang python hello-python
faas-cli new --lang java8 hello-java8
faas-cli new --lang node hello-node
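Each command creates a <name>.yml stack file plus a handler folder containing the function code. More templates than the three shown above are available; you can fetch and list them with the CLI:

faas-cli template pull
faas-cli new --list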
Open each function's yml file and change the gateway and the image, as we want to use our own private registry:
gateway: http://localhost:31112
image: localhost:5000/hello-python:latest
image: localhost:5000/hello-java8:latest
image: localhost:5000/hello-node:latest
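For reference, the hello-python.yml stack file with both changes applied might look roughly like this (the generated file may differ slightly, e.g. in the provider name or an extra version field; only gateway and image need to be adjusted):

provider:
  name: openfaas
  gateway: http://localhost:31112

functions:
  hello-python:
    lang: python
    handler: ./hello-python
    image: localhost:5000/hello-python:latest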
Now we have to build, push and deploy the functions; this can be done with a single command:
faas-cli up -f ./hello-python.yml
faas-cli up -f ./hello-java8.yml
faas-cli up -f ./hello-node.yml
Congratulations! Now check out and test your functions in the UI :-)
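You can also invoke a function from the command line, either through the gateway's REST route or with the CLI:

curl http://localhost:31112/function/hello-python -d "it works"
echo "it works" | faas-cli invoke hello-python --gateway http://localhost:31112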
Summary:
As you can see, it is very easy to set up a serverless environment on your local computer or inside your own data center. You can keep all of your data private and safe while still using all the benefits of serverless technology.
Additional Information:
Instead of using the up command you can do it step by step:
faas-cli build -f ./hello-python.yml
faas-cli build -f ./hello-java8.yml
faas-cli build -f ./hello-node.yml

faas-cli push -f ./hello-python.yml
faas-cli push -f ./hello-java8.yml
faas-cli push -f ./hello-node.yml

faas-cli deploy -f ./hello-python.yml
faas-cli deploy -f ./hello-java8.yml
faas-cli deploy -f ./hello-node.yml
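To verify which functions are deployed and how often they have been invoked, use:

faas-cli list --gateway http://localhost:31112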