Deploying OpenFaaS on Kubernetes — AWS

Today we’ll be deploying OpenFaaS using Kubernetes as the container orchestrator. OpenFaaS is “Serverless Functions Made Simple”, and is currently the easiest open-source serverless platform to deploy.


Typically Docker Swarm is used as the orchestrator, but demand for OpenFaaS on Kubernetes was so high that Kubernetes is now officially supported.

In short, the steps are:

  1. Launch one or more instance(s) on AWS to host the Kubernetes cluster. We’ll leverage the spot market for cost savings.
  2. Deploy Kubernetes on the instance(s)
  3. Deploy OpenFaaS on Kubernetes.
  4. Deploy and test some functions on the OpenFaaS Store

Keep in mind that this guide is specifically for AWS. If you’d like to deploy on your local machine, check out Getting Started With OpenFaaS on MiniKube.

Let’s get started.

1. Get the instance from AWS

Go to the EC2 section of AWS, and click the “Spot Requests” section in the side panel.

Next, click the big blue “Request Spot Instances” button in the top-left area.

On the Request Spot Instances page, leave Request Type as “Request” and Target Capacity as 1. In the Launch Template section, for AMI choose Ubuntu 16.04, and for instance type we’ll use m3.medium.

Choose m3.medium for the Kubernetes master node.

8GB is fine for the root volume. Create a security group allowing inbound TCP on ports 22 (SSH), 31112 (the OpenFaaS gateway NodePort), and 6443 (the Kubernetes API server). Also create a key pair, or specify an existing one, so that we can SSH in to the instance.
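If you prefer the command line, the same security group can be set up with the AWS CLI. This is a sketch; the group name is a placeholder, and the wide-open CIDR should be narrowed to your own IP where possible:

```shell
# Create the security group (name and description are placeholders)
aws ec2 create-security-group \
  --group-name k8s-openfaas \
  --description "Kubernetes master + OpenFaaS gateway"

# Open SSH (22), the OpenFaaS gateway NodePort (31112), and the
# Kubernetes API server (6443). 0.0.0.0/0 allows all sources;
# restrict the CIDR to your own IP for anything long-lived.
for port in 22 31112 6443; do
  aws ec2 authorize-security-group-ingress \
    --group-name k8s-openfaas \
    --protocol tcp \
    --port "$port" \
    --cidr 0.0.0.0/0
done
```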

Your security group should look like this.

Optionally, add a tag with key of “Name” and value of “k8smaster”. Leave the rest as the default, and optionally download the JSON file of your configuration. You can later launch instances using the AWS CLI tool with the following command:

$ aws ec2 request-spot-fleet --spot-fleet-request-config file://<path_to_config.json>

Click the “Instances” section in the left-hand pane, and you can watch your instance come online. When it’s ready, copy the Public IP address and ssh into it using the key-pair file you specified at the configuration stage.

ssh -i <key-pair.pem> ubuntu@<Public-IP>

Once you’re connected to your instance we can get started with the next section.

2. Setting up Kubernetes on the AWS instance

First, let’s prep the machine by installing some necessary components. Run the following commands to enter superuser mode, run the install script from this gist, then exit back into the ubuntu user:

sudo su
curl -sSL <raw-gist-url> | sh   # substitute the raw link to the gist's install script
exit

Now that you’re back in the ubuntu user, we’re ready to deploy Kubernetes:

sudo kubeadm init --kubernetes-version stable-1.8

This command will take a minute to run. When it’s done, you should see an output that includes the following info:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 413f8f.2b28c17de6424ce9 --discovery-token-ca-cert-hash sha256:39c2d10bebbce0cbbd803bcdc8f4dfc45ac430644403946c522d22e011147300

Let’s follow their instructions, and copy and paste those three commands (starting with mkdir -p) into your terminal. This will set up the necessary permissions for our ubuntu user to access the cluster. You can also follow the rest of the instructions to connect additional EC2 instances to this Kubernetes cluster.

Finally, we’ll need a networking layer for the cluster, to allow inter-pod communication. We’ll leverage Weave Net for this:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

By default, Kubernetes doesn’t schedule workloads on the master node. Since this is a single-node cluster, we’ll need to run this next command to allow container placement on the master node. If you are going to attach worker nodes to the cluster, this is not necessary.

kubectl taint nodes --all node-role.kubernetes.io/master-

(The trailing - is not a typo — it tells kubectl to remove the taint.)

Let’s confirm that the cluster is running. Run the following command:

kubectl get all -n kube-system

And you should get an output of all the resources deployed on the Kubernetes cluster, which (after a few minutes of loading) will look something like this:

NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds/kube-proxy   1         1         1       1            1           <none>          18m
ds/weave-net    1         1         1       1            1           <none>          2m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns   1         1         1            1           18m

NAME                     DESIRED   CURRENT   READY     AGE
rs/kube-dns-545bc4bfd4   1         1         1         18m

NAME                                          READY     STATUS    RESTARTS   AGE
po/etcd-ip-172-31-25-181                      1/1       Running   0          18m
po/kube-apiserver-ip-172-31-25-181            1/1       Running   0          18m
po/kube-controller-manager-ip-172-31-25-181   1/1       Running   0          18m
po/kube-dns-545bc4bfd4-zgxpl                  3/3       Running   0          18m
po/kube-proxy-j4vcg                           1/1       Running   0          18m
po/kube-scheduler-ip-172-31-25-181            1/1       Running   0          17m
po/weave-net-j5wbf                            2/2       Running   0          2m

NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   18m

We’re ready to deploy OpenFaaS on the cluster.

3. Deploying OpenFaaS on Kubernetes using faas-netes

The most important step here is to get the faas-netes repo:

git clone https://github.com/openfaas/faas-netes

cd into the faas-netes directory. Deploying OpenFaaS is as simple as running the following:

kubectl apply -f faas.yml,monitoring.yml,rbac.yml
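Rather than polling by hand, you can optionally block until each deployment finishes rolling out. The deployment names below are assumed from the faas-netes manifests (they match the pod names shown further down):

```shell
# Wait for each OpenFaaS deployment to become ready
for d in gateway faas-netesd prometheus alertmanager; do
  kubectl rollout status deployment/"$d"
done
```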

Run kubectl get pods, wait a few minutes, and you should see something like the following:

NAME                           READY     STATUS    RESTARTS   AGE
alertmanager-77b4b476b-bh8wn   1/1       Running   0          1h
faas-netesd-64fb9b4dfb-rqlx4   1/1       Running   0          6m
gateway-6b57cfbd46-gsxt9       1/1       Running   0          6m
prometheus-7fbfd8bfb8-pfgnr    1/1       Running   0          1h

The pods will report a status of ContainerCreating until the Docker images for these containers are done downloading.

Test that this worked by pointing your web browser to http://<YOUR_EC2_PUBLIC_IP>:31112. You should get a page that looks like this:
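You can also check the gateway from the command line — OpenFaaS exposes a REST API on the same port (the IP below is a placeholder for your instance's public IP):

```shell
# List deployed functions via the gateway's REST API.
# On a fresh install this returns an empty JSON array: []
curl http://<YOUR_EC2_PUBLIC_IP>:31112/system/functions
```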

If you see this UI page, congratulations! We’re ready to have some fun with serverless functions.

4. Deploying serverless functions using the OpenFaaS store.

The moment we’ve all been waiting for. Click ‘Deploy New Function’ to get started.

The OpenFaaS Store makes deploying serverless functions trivial.

As you can see, we have a few options for what functions to deploy to test out our cluster. Let’s try out the ‘Figlet’ function. Click “Figlet” to highlight the function in the store, then click DEPLOY in the lower-right corner.

You will have to wait a minute or two for the docker image for this function to be pulled to your cluster (which will happen automatically). You may also have to refresh the page of your OpenFaaS portal to see the function appear:

Ready to use Figlet.
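If you prefer the terminal, the same function can be deployed with the faas-cli tool instead of the store UI (this assumes faas-cli is installed locally; functions/figlet is the public image the store's Figlet function uses):

```shell
# Deploy the figlet function from its public Docker image
faas-cli deploy \
  --image functions/figlet \
  --name figlet \
  --gateway http://<YOUR_EC2_PUBLIC_IP>:31112
```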

Test out Figlet by clicking on it, and putting some text into the “Request body” field, then click INVOKE:

Figlet deployment on Kubernetes, fully functional.
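The same invocation works over plain HTTP — each function is exposed under the gateway's /function/ route, with the request body passed to the function as input (the IP is a placeholder):

```shell
# Invoke the figlet function via the gateway; the request body is
# the function's input, and the ASCII art comes back in the response.
curl -d "OpenFaaS" http://<YOUR_EC2_PUBLIC_IP>:31112/function/figlet
```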

Congratulations, you’re now up and running with OpenFaaS on Kubernetes on AWS. Try out some of the other functions in the store (or deploy your own images manually) to find out why OpenFaaS is Serverless Functions Made Simple.
