Fn — Serverless Architecture on Amazon EKS

hateiskawaii
7 min read · Mar 17, 2020


Intro

One of the things I don't like about a lot of dev-ops frameworks is that their documentation assumes you already know a lot about devops and have a decent amount of experience in general, and so glosses over important details. Serverless applications are all the rage (see here: https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=serverless and https://trends.google.com/trends/explore?q=%2Fg%2F11c0q_754d&date=today%205-y&geo=US ) with the rise of AWS Lambda and its competitors on Azure and Google Cloud Platform.

One problem: the market for FaaS (Functions as a Service) providers hasn't matured, so there are a lot of inconsistencies in how you write, package, and deploy code between the various providers. Having to refactor your code to move between Google Cloud Platform and AWS Lambda? Seems suspect.

Enter Fn.

Fn is a FaaS framework designed for use with regular ol' Kubernetes (for the uninitiated, Kubernetes is a wildly popular container orchestration framework for deploying applications). Meaning: if you know how to use Kubernetes and how to deploy clusters to the cloud, you are already most of the way to a seamless FaaS development experience across Google, AWS, and Azure.

If you don't know how to use Kubernetes (which I didn't, and to some degree still don't), it's still not impossible to get set up, but there are some sharp edges to smooth out before you can get your stuff deployed. Today I will show you in detail how to deploy Fn to a Kubernetes cluster running on Amazon EKS, but the basic steps are as follows.

  1. Get a cluster
  2. Get some nodes
  3. Get kubectl access
  4. Deploy nginx-ingress to your cluster.
  5. Deploy the Fn helm chart.
  6. Set your variables for development.
  7. Write code!

Step 1: Get a cluster.

So the first thing we have to do is get a cluster, and to do that I used the AWS Console. First things first, you need to set up a VPC, or Virtual Private Cloud, with three public subnets, one for each AZ. The standard 10.0.0.0/16 main address space is Good Enough here, as is the standard 10.0.0.0/24 for the first public subnet; I created the other two public subnets at 10.0.1.0/24 and 10.0.2.0/24, one in each of the other two AZs of the region you're deploying in.

(What's a region? Good question. AWS partitions its services into regions, so you have things like us-east-2, us-east-1, us-west-2, eu-west-1, etc. Some things transfer cross-region, but most things are compartmentalized to that region alone. The other concept here is Availability Zones, which are independently-failing groups of AWS services in the same region.)

The reason for multiple public subnets is that EKS policy requires subnets in at least two Availability Zones when you deploy your cluster, and three provides additional redundancy.
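
If you'd rather script it than click through the console, here's a minimal sketch with the AWS CLI (the VPC ID and AZ names are placeholders, and note that making the subnets truly public also takes an internet gateway and route table, which the console wizard can handle for you):

aws ec2 create-vpc --cidr-block 10.0.0.0/16

# One public subnet per Availability Zone; substitute your region's AZs
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.0.0/24 --availability-zone us-east-2a
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.1.0/24 --availability-zone us-east-2b
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.2.0/24 --availability-zone us-east-2c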

There is also the issue of tagging: when you build your subnets (I'm not sure whether EKS will do this for you on creation), they need to be tagged with the following:

kubernetes.io/cluster/<intended-cluster-name>: shared
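
If you're doing it from the command line, the tagging looks something like this (the subnet IDs and cluster name are placeholders):

aws ec2 create-tags \
  --resources subnet-XXXX subnet-YYYY subnet-ZZZZ \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared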

In addition to VPCs and subnets, there is also the issue of Security Groups: create one in your VPC now, because you'll be asked to select it during cluster creation below.

With that out of the way, let's get on with cluster creation. Cluster creation is really two steps: you create the cluster, and then you create nodes to join it (which is what we'll deploy to later).

In the console, go to EKS (Elastic Kubernetes Service; it's a managed cluster, no server builds here!). Click "create cluster" and give it a name. (Pay attention to the tag in the block quote a few paragraphs up: the cluster name needs to match the one in that tag, or things just won't work and you won't know why.) Select the subnets and security groups you made upthread, and make an IAM role for your cluster if you need one.

NOTE: You *will* need to be able to assume this IAM role in order to get kubectl access, so pay attention to which one it is.

(If you need to figure out how to specify an IAM role for this purpose, use this doc: https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html.)

I also recommend not using your root account for this step, because the user you sign in with to assume the role needs programmatic access to it, and if you're adhering to AWS best practices, your root account doesn't have programmatic access to AWS resources. You should see the cluster go to "creating"; this actually takes a minute or two, so be patient.
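
For the CLI-inclined, a rough equivalent of the console flow looks like this (the name, ARN, and IDs are placeholders):

aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::123456789012:role/my-eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-XXXX,subnet-YYYY,subnet-ZZZZ,securityGroupIds=sg-XXXX

# Poll until the status flips from CREATING to ACTIVE
aws eks describe-cluster --name my-cluster --query cluster.status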

Step 2: Get some nodes

After that, we need to add some nodes. The first step to adding a managed node group is (you guessed it!) IAM again! This role is less important; as long as it has the necessary policies, you should be fine. The doc is https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html.

NOTE: The only sharp edge I've encountered here is that AWS recommends using a NEW role for each node group.

Once you're done creating the IAM role, select the cluster you created, click "Add managed node group", use the role you just created, and again select the subnets you created in the beginning. There are two important things here: your nodes need to be the right size (I use t3.small for mine), and there need to be enough of them (I use ten). I'm not sure what AWS can do to autoscale Fn, because once Fn is deployed you have "broken out" of the AWS box, so I didn't configure any autoscaling and just set the number of nodes to 10. The last important thing here is the tag setup, which needs to be as follows:

kubernetes.io/cluster/<cluster-name>: owned

Click "Create Node Group" and it should start creating (it's spinning up ten EC2 instances on your behalf, though, so it might take a minute).
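
The CLI equivalent of the node group setup looks roughly like this (the names, ARN, and subnet IDs are placeholders):

aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodes \
  --node-role arn:aws:iam::123456789012:role/my-eks-node-role \
  --subnets subnet-XXXX subnet-YYYY subnet-ZZZZ \
  --instance-types t3.small \
  --scaling-config minSize=10,maxSize=10,desiredSize=10 \
  --tags kubernetes.io/cluster/my-cluster=owned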

Step 3: Get kubectl access

Once your managed node group is in status Active, we are ready for the hard part: actually deploying Fn and getting your machine to talk to it. First, some dependencies (I used AWS Cloud9, so some of these were handled for me):

  • aws cli
  • helm
  • aws-iam-authenticator
  • node.js (latest) (comes with Cloud9)
  • kubectl
  • Fn framework binary
  • configuration of your AWS cli to use the user that you created your cluster with
  • the language that you're developing functions with

Once you have all those installed for your architecture and platform of choice, you need to set up your cluster to accept kubectl commands from you. We do that by using the terminal command:

aws eks --region <your region here> update-kubeconfig --name <your-cluster-name> --role-arn <arn of the role you created the cluster with>

To check your kubeconfig (which is the file that gets written by that command to allow cluster access) type

cat ~/.kube/config

NOTE: If you're storing your kube configuration files somewhere else, this file path may not be accurate, but I'm assuming that you, like me, accept the default options whenever possible, in which case that is where things will go.

Then you try to connect to your cluster like this:

kubectl get svc

Which should show only one service running (kubernetes).
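
On a fresh cluster the output should look something like this (your cluster IP and age will differ):

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   5m
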
If you run into trouble here (which I did), there are a few things you can try; start by running through https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html

What worked for me was changing my kubeconfig to look like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SNIPPY SNIP
    server: SNIP
  name: SNIPPY SNIPPED
contexts:
- context:
    cluster: SNIPPY SNIPPED
    user: SNIPPY SNIPPED
  name: SNIPPY SNIPPED
kind: Config
preferences: {}
users:
- name: SNIPPY SNIPPED
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - sarcina-testing
      - EKS ROLE NAME
      command: aws-iam-authenticator

I didn’t need to change anything to do with the parts that were snipped, so they were edited out for security + brevity.

Step 4: Deploy nginx-ingress to your cluster

nginx-ingress is what Kubernetes calls an ingress controller, which acts as a “frontend” for all of the services sitting behind it in the cluster. You need to deploy it to your cluster, and for that I will reference this doc here: https://github.com/helm/charts/tree/master/stable/nginx-ingress

I wasn’t able to deploy with the helm repository, so I cloned the entire repository and did the following:

cd reporoot/charts/stable
helm dep build nginx-ingress
helm install somename nginx-ingress
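
For reference, the repo-based install that didn't work in my environment looks roughly like this (it may well work in yours):

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install somename stable/nginx-ingress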

Once that's done, you have an ingress controller, which is good! It means the next step will actually work (the Fn helm chart uses nginx-ingress as its default ingress controller, and won't set up correctly if it doesn't have one).

Step 5: Deploy Fn to your cluster

Follow the instructions here, starting at "Installing the chart":
https://github.com/fnproject/fn-helm
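
From memory, the chart's install steps boil down to roughly this (the release name is a placeholder):

git clone https://github.com/fnproject/fn-helm.git && cd fn-helm
helm dep build fn
helm install my-release fn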

NOTE: pay attention to this bit too:

LoadBalancer

In order to natively expose the Fn services, you’ll need to modify the Fn API, Runner, and UI service definitions:

  • at fn_api node values, modify fn_api.service.type from ClusterIP to LoadBalancer
  • at fn_lb_runner node values, modify fn_lb_runner.service.type from ClusterIP to LoadBalancer
  • at ui node values, modify ui.service.type from ClusterIP to LoadBalancer
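
One way to apply those changes is a small values override file passed to helm with -f. A sketch, assuming the chart keys listed above (the filename is arbitrary):

# lb-values.yaml: expose the Fn API, runner load balancer, and UI
# via cloud load balancers instead of cluster-internal addresses
fn_api:
  service:
    type: LoadBalancer
fn_lb_runner:
  service:
    type: LoadBalancer
ui:
  service:
    type: LoadBalancer

Then install (or upgrade) the release with it:

helm upgrade my-release fn -f lb-values.yaml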

You should get an output that looks like:
https://github.com/fnproject/fn-helm/blob/master/fn/templates/NOTES.txt

Step 6: Configure environment variables for use with Fn

That guide works with one exception: the FN_API_URL environment variable. Don't use HTTP_PROXY as the guide suggests; it will break your build chain and general network connectivity unless you've set the cluster up with an internet gateway. Instead, Fn uses a variable called FN_API_URL to communicate with its services. To set it, run the following:

export FN_API_URL=<your fn api server external IP address here>
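
If you're not sure what that address is, you can pull it off the service object once the load balancer is provisioned. A sketch; the service name below is a guess based on my release name, so check kubectl get svc for yours (on AWS, the load balancer usually surfaces as a hostname rather than an IP):

kubectl get svc

export FN_API_URL=http://$(kubectl get svc my-release-fn-api \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')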

After that, the rest of the guide in the NOTES.txt (don't use the one on GitHub; use the one printed to your console, which fills in all those values for you instead of making you figure them out) should get your first Fn function deployed!

Step 7+8: Write code and Deploy!
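
To round things out, here's roughly what a first function looks like with the Fn CLI (the names are placeholders, and FN_REGISTRY needs to point at a Docker registry your cluster can pull from):

# Scaffold a Node.js function
fn init --runtime node hello
cd hello

# Create an app, then build, push, and deploy the function into it
fn create app demo-app
fn deploy --app demo-app

# Call the deployed function
fn invoke demo-app hello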

Thanks for reading, and I hope this makes your journey much easier. I'll write another post when I figure out how to get the Fn service endpoint exposed on the internet (and if you already know how, please drop me a line).
