Knative with Terraform — Serverless Era

Ken Lu
5 min read · Nov 18, 2018


A few months ago, Google released its new serverless solution, Knative, at Google Cloud Next 2018. As a modern framework built on Kubernetes and Istio, Knative simplifies the management of cloud deployments. Notably, it can autoscale resources/pods down to zero.

Figure1: Knative Ecosystem (Source: Knative GitHub repo)

Terraform is an open-source tool that lets developers plan and build infrastructure quickly with code. It defines its own configuration language, which allows users to call the APIs of multiple cloud platforms, including Google Cloud, AWS, Microsoft Azure, and Kubernetes.

Recently, I started working on building a cloud environment with Knative. I tried to combine Knative with Terraform to create an organised deployment pipeline. Looking through the documentation, I realised that Terraform didn't yet support Knative, and I couldn't find a well-structured method on the internet either. So I decided to build it myself for now, using Terraform's self-defined resource functionality.

Prerequisite

Project Structure

Let’s say we want to build a Node.js server. The whole structure looks as follows:

- deployment/
  - gke/
    - knative/
      - service.yml
      - config-domain.yml
    - gcp.tf
    - cluster.tf
    - service.tf
    - variables.tf
  - main.tf
  - backend.tf
  - .tf-env
- svc/
  - src/
    - YOUR_CODE
  - index.js
  - package.json
  - Dockerfile
  - .dockerignore

Images

First, we need to write a Dockerfile for building an image:

Dockerfile for the app
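The embedded file isn't shown, so here is a minimal sketch of what a Dockerfile for the Node.js app could look like; the base image, port, and entry point are illustrative assumptions, not the article's exact file:

```dockerfile
# Sketch: containerise the Node.js app from ./svc
# (assumes index.js and package.json as in the project structure).
FROM node:10-alpine
WORKDIR /app

# Install dependencies first to take advantage of layer caching.
COPY package.json ./
RUN npm install --production

# Copy the application source.
COPY . .

EXPOSE 8080
CMD ["node", "index.js"]
```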

If you don’t know much about Docker, I suggest having a look at A Docker Tutorial for Beginners by Prakhar Srivastav. It will give you a general idea of how to build a Docker image. Don’t worry too much about Docker here, though. The only thing we need is a Dockerfile; this will describe the service we deploy on Knative.

For simplicity, we will use the hello world example provided by Knative and rebuild an image from it, since we want to do everything from scratch. This Dockerfile will be the only file in your ./svc directory in this example.

Dockerfile for the helloworld-go
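Since ./svc contains only the Dockerfile, one simple sketch (an assumption, not the article's exact file) is to rebuild from the published Knative sample image rather than compiling the Go source ourselves:

```dockerfile
# Sketch: re-tag the published helloworld-go sample image.
# TARGET controls the greeting printed by the sample server.
FROM gcr.io/knative-samples/helloworld-go
ENV TARGET "Go Sample v1"
```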

With a script for our image in place, we can now create a cloudbuild.yml and use the Google Cloud Build service to build the image in the cloud. This accelerates the image-building process and avoids environment-specific issues.

cloudbuild.yml
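A minimal sketch of the build config; the image name is a placeholder, and $PROJECT_ID is substituted by Cloud Build automatically:

```yaml
# Sketch of cloudbuild.yml: build the image from ./svc and push it
# to the project's container registry.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/helloworld-go', '.']
images:
  - 'gcr.io/$PROJECT_ID/helloworld-go'
```

Submit it with something like `$ gcloud builds submit --config cloudbuild.yml ./svc`.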

Terraform

Next, we’ll dive into Terraform and start planning our infrastructure. Thanks to the article Deploy Kubernetes Apps with Terraform, we have a general idea of how to combine Terraform with Kubernetes. As mentioned, the Kubernetes provider for Terraform doesn’t support Knative yet. However, we can still use the built-in resources of the Google provider in Terraform to create the cluster and its nodes/instances/machines.

Terraform State in The Cloud

By default, Terraform stores the state of the infrastructure in local files. We want to manage the state in the cloud instead, so that we can keep it synchronised. In this example, we set up Google Cloud Storage as the remote backend.

backend.tf
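A sketch of the backend configuration; the bucket name is a placeholder, and the bucket must exist before running `terraform init`:

```hcl
# Sketch of backend.tf: store Terraform state in a GCS bucket.
terraform {
  backend "gcs" {
    bucket      = "my-terraform-state-bucket"
    prefix      = "knative"
    credentials = "./credentials.json"
  }
}
```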

Modules

Here, we specify only one module, called gke (Google Kubernetes Engine), because we’re using only the Google provider.

main.tf
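A sketch of the root module, in the Terraform 0.11-era syntax current when this was written; the defaults are illustrative:

```hcl
# Sketch of main.tf: root-level input variables (filled from the
# TF_VAR_* environment variables) passed into the gke module.
variable "project" {}
variable "credentials" {}

variable "region" {
  default = "us-west1"
}

variable "zone" {
  default = "us-west1-a"
}

# Initial node count; autoscaling will later grow this from 1 to 10.
variable "gcp_cluster_count" {
  default = 1
}

module "gke" {
  source            = "./gke"
  project           = "${var.project}"
  region            = "${var.region}"
  zone              = "${var.zone}"
  credentials       = "${var.credentials}"
  gcp_cluster_count = "${var.gcp_cluster_count}"
}
```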

There are several input variables we want to pass into our module: project, region, credentials, etc. credentials is the path to our Google Cloud credentials file, which can be downloaded from the Google Cloud Dashboard. We also default gcp_cluster_count to 1, since we will autoscale the node count from 1 to 10 later. Other variables can be set up depending on your preference.

GKE

As mentioned, the only provider we are using is google.

gcp.tf
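A sketch of the provider configuration inside the gke module, built from the variables passed in:

```hcl
# Sketch of gke/gcp.tf: configure the Google provider from module inputs.
provider "google" {
  credentials = "${file(var.credentials)}"
  project     = "${var.project}"
  region      = "${var.region}"
}
```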

The variables will be passed from the main.tf into the gke module. We generate a variables.tf under ./gke directory to organise them.

variables.tf
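A sketch of the module's variable declarations, mirroring what main.tf passes in:

```hcl
# Sketch of gke/variables.tf: declare the inputs the module expects.
variable "project" {}
variable "region" {}
variable "zone" {}
variable "credentials" {}
variable "gcp_cluster_count" {}
```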

Cluster

So far, we have our provider and variables set up, so we can start constructing the cluster. We need two resources here: google_container_cluster and google_container_node_pool. The former defines the overall shape of our cluster, and the latter specifies the details of the nodes in the cluster, including autoscaling. Following Knative’s official documentation, we set the machine type to the minimum requirement, n1-standard-4, and follow the recommendations for the other settings.

cluster.tf
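A sketch of the two resources, again in 0.11-era syntax; the resource and cluster names (knative_cluster, knative_nodes, knative-cluster) and OAuth scopes are illustrative assumptions:

```hcl
# Sketch of gke/cluster.tf: cluster shell plus an autoscaling node pool.
resource "google_container_cluster" "knative_cluster" {
  name = "knative-cluster"
  zone = "${var.zone}"

  # Manage nodes through the separate node pool resource below.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "knative_nodes" {
  name               = "knative-node-pool"
  zone               = "${var.zone}"
  cluster            = "${google_container_cluster.knative_cluster.name}"
  initial_node_count = "${var.gcp_cluster_count}"

  # Autoscale the node count from 1 to 10.
  autoscaling {
    min_node_count = 1
    max_node_count = 10
  }

  node_config {
    # Knative's minimum recommended machine type.
    machine_type = "n1-standard-4"

    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
```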

Knative

Now, we finally enter the main part — Knative. Yeahhhhh!🎉🎉🎉

The service.yml is used to deploy our app onto the nodes/machines. image: is where we specify the URL of the image to be deployed. We’re also able to set up environment variables for our container. If you’re familiar with Docker Compose, the idea is similar. You can check out the Knative Serving API spec for the complete configuration.

service.yml
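A sketch of the Knative Service manifest, assuming the v1alpha1 Serving API that was current at the time; replace MY_PROJECT_ID with your own project:

```yaml
# Sketch of knative/service.yml: deploy the helloworld-go image.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/MY_PROJECT_ID/helloworld-go
            env:
              - name: TARGET
                value: "Go Sample v1"
```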

Knative also lets us configure the domain name through its API. Just replace helloworld-go.com with whatever you like.

config-domain.yml
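A sketch of the domain ConfigMap, assuming the config-domain map in the knative-serving namespace:

```yaml
# Sketch of knative/config-domain.yml: routes use this domain suffix.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  helloworld-go.com: ""
```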

Currently, the Knative deployment can only be done through the command line. As a result, I hardcoded all the commands into Terraform’s null_resource, which lets us run self-defined commands (a bit dodgy, but it works 😛). depends_on lets us declare the dependencies between resources; we want this resource to wait for google_container_cluster and google_container_node_pool to be created. triggers gives us the ability to specify the conditions that re-run the resource.

The list of provisioner blocks contains all the commands for setting up Knative and deploying the service. They are executed in order from top to bottom. Terraform will execute null_resource.init only once, as long as we don’t change the cluster, and null_resource.knative every time we run $ terraform apply. The service is deleted and redeployed in this step.

If you’re interested in the official process, check out Knative Install on Google Kubernetes Engine and Getting Started with Knative App Deployment.

service.tf
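A sketch of the null_resource approach; the Knative release version, cluster name, and resource names are illustrative assumptions, not the article's exact file:

```hcl
# Sketch of gke/service.tf: install Knative once, redeploy on every apply.
resource "null_resource" "init" {
  depends_on = [
    "google_container_cluster.knative_cluster",
    "google_container_node_pool.knative_nodes",
  ]

  # Fetch kubectl credentials for the new cluster.
  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials knative-cluster --zone ${var.zone} --project ${var.project}"
  }

  # Install Istio and Knative Serving (version is an assumption).
  provisioner "local-exec" {
    command = "kubectl apply -f https://github.com/knative/serving/releases/download/v0.2.2/istio.yaml"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f https://github.com/knative/serving/releases/download/v0.2.2/release.yaml"
  }
}

resource "null_resource" "knative" {
  depends_on = ["null_resource.init"]

  # timestamp() changes on every run, so this re-runs on each apply.
  triggers {
    build_number = "${timestamp()}"
  }

  # Delete and redeploy the service, then apply the domain config.
  provisioner "local-exec" {
    command = "kubectl delete -f ${path.module}/knative/service.yml --ignore-not-found"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/knative/service.yml"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/knative/config-domain.yml"
  }
}
```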

Deployment

That’s it! All we need to do now is apply the settings. For convenience, we can put the input variables into a file called .tf-env and export them as environment variables.

.tf-env
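A sketch of the env file; Terraform picks up any variable prefixed with TF_VAR_, and the project ID and key path here are placeholders:

```shell
# Sketch of .tf-env: substitute your own project ID and key path.
export TF_VAR_project="my-gcp-project"
export TF_VAR_region="us-west1"
export TF_VAR_zone="us-west1-a"
export TF_VAR_credentials="./credentials.json"
```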

Before we start deploying, just execute:

$ source .tf-env

Now, if we want to see what will be changed, we can simply run:

$ terraform plan

To apply the changes (and any time you want to redeploy your service), just run:

$ terraform apply

Interact with your app:

$ export IP_ADDRESS=$(kubectl get svc knative-ingressgateway --namespace istio-system --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
$ export HOST_URL=$(kubectl get ksvc helloworld-go --output jsonpath='{.status.domain}')

Then:

$ curl -H "Host: ${HOST_URL}" http://${IP_ADDRESS}
Hello World: Go Sample v1!

Destroy The Infrastructure

$ terraform destroy

Set Up The Outbound Network

Knative intercepts all outbound network connections by default. If you’d like to allow outbound connections, simply run:

$ gcloud container clusters describe scraper-cluster --zone=us-west1-a | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
$ kubectl edit configmap config-network --namespace knative-serving

Then edit the config map and set the IP ranges that you want to intercept:

istio.sidecar.includeOutboundIPRanges: <from>,<to>

Replace <from> with the value of clusterIpv4Cidr, and replace <to> with the value of servicesIpv4Cidr.

Check out the complete documentation — Configuring outbound network access.

Conclusion

This is just a temporary way of integrating Knative with Terraform. I believe Terraform will eventually support the Knative API through its kubernetes provider.

P.S. Don’t run too many servers on your cloud, or you’ll need to pay a lot 🙃
