Kubernetes @ FC

EKS, Prometheus, Istio and much more — our use of Kubernetes @ Funding Circle

Mark J Hollands
Funding Circle
Jan 12, 2022


Photo by Lars Kienle on Unsplash

Last month I published the first blog in this series, entitled “A Brief History of Containers @ FC”. That post traced our container journey at Funding Circle: the platforms we’ve used and how we got to where we are today with Kubernetes.

This entry introduces our Kubernetes setup, the tools we use and how we want to build on them in the future. Later posts will deep-dive into some of the tech; for now, we’ll aim for a high-level overview.

Photo by Rúben dos Santos on Unsplash

The platform for our platform…

At Funding Circle the majority of our core infrastructure runs on AWS and is provisioned and managed using Terraform, and the same is true for Kubernetes. We use AWS Elastic Kubernetes Service (EKS) to manage the control plane of our clusters, and Managed Node Groups to provision and manage worker nodes. Managed Node Groups abstract away much of the maintenance of the underlying EC2 resources and provide some useful Kubernetes features, such as configuring node labels and taints, so they don’t have to be specified in user data.
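
To make that concrete, here is a minimal sketch of a Managed Node Group in Terraform with labels and a taint set on the resource itself. The cluster, IAM role and subnet references (aws_eks_cluster.main, aws_iam_role.node, var.private_subnet_ids) are placeholders for illustration, not our actual configuration:

```hcl
# Sketch: an EKS Managed Node Group with Kubernetes-specific settings
# declared on the resource. All references below are placeholders.
resource "aws_eks_node_group" "apps" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "apps"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    min_size     = 2
    desired_size = 3
    max_size     = 10
  }

  # Labels and taints are configured on the node group directly,
  # so nothing needs to be wired into EC2 user data.
  labels = {
    "workload-type" = "apps"
  }

  taint {
    key    = "dedicated"
    value  = "apps"
    effect = "NO_SCHEDULE"
  }
}
```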

Along with the Kubernetes resources, we utilise a number of other AWS services, for example EBS and EFS for cluster provisioned storage and S3 for some application storage.
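
As an illustration of EBS-backed cluster storage, the sketch below defines a StorageClass for the EBS CSI driver using Terraform’s kubernetes provider. The class name and parameters are assumptions for the example, not our exact setup:

```hcl
# Illustrative StorageClass for dynamic EBS volume provisioning.
resource "kubernetes_storage_class" "ebs_gp3" {
  metadata {
    name = "ebs-gp3"
  }

  storage_provisioner = "ebs.csi.aws.com"
  reclaim_policy      = "Delete"
  # Delay volume creation until a pod is scheduled, so the volume
  # lands in the same availability zone as the node running the pod.
  volume_binding_mode = "WaitForFirstConsumer"

  parameters = {
    type = "gp3"
  }
}
```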

How many clusters?

There are many approaches to architecting Kubernetes clusters, many of which depend on multi-tenancy considerations, application requirements, environment isolation and so on.

At Funding Circle we decided to run separate clusters for the different stages of the application lifecycle:

  • Test cluster — for testing features during development. Test environments can either be provisioned long term or be time-limited by setting a time-to-live (TTL) value, for example for the duration of a CI build.
  • Staging — for testing once a feature has been merged to the main branch, but before it goes live in production.
  • Production — our live production environment.

These clusters are separated by AWS accounts within our AWS organization.

In addition to these, we run two clusters for operations, hosting our infrastructure workloads: a live operations cluster, and a test environment we use to trial infrastructure changes.

In-cluster services

There are a number of in-cluster services that we run in most of our clusters. These include:

  • Istio
  • CoreDNS
  • VPC CNI
  • Cluster Autoscaler and Cluster Overprovisioning (see the sketch after this list)
  • Prometheus and Grafana
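
Cluster overprovisioning is commonly implemented with placeholder pods running at a negative priority: they reserve headroom on the nodes and are evicted the moment real workloads need the space, prompting the Cluster Autoscaler to add capacity before it is actually exhausted. A minimal sketch of the PriorityClass this relies on (the name and value are illustrative, not our exact configuration):

```hcl
# Illustrative low-priority class for overprovisioning placeholder pods.
# Pods using this class are evicted first whenever capacity is needed.
resource "kubernetes_priority_class" "overprovisioning" {
  metadata {
    name = "overprovisioning"
  }

  value       = -10
  description = "Placeholder pods that reserve headroom for real workloads"
}
```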

The core cluster services like CoreDNS and VPC CNI are provisioned within the cluster using Terraform.
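
One way to manage such core services from Terraform is via EKS add-ons, as in the sketch below. This shows the general approach rather than our exact configuration, and the cluster reference is a placeholder:

```hcl
# Sketch: managing core cluster services as EKS add-ons.
resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"
}

resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "vpc-cni"
}
```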

All of the remaining services are managed using ArgoCD, with their configuration reconciled from our Git repositories. We also use ArgoCD for our cluster-hosted applications.
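
An ArgoCD Application ties a path in a Git repository to a destination cluster and namespace, and keeps the two reconciled. Below is a minimal sketch, expressed as a Terraform kubernetes_manifest resource to match the rest of this post; the repository URL, paths and names are made up for illustration:

```hcl
# Hypothetical ArgoCD Application: syncs manifests from Git into the cluster.
resource "kubernetes_manifest" "example_service" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "example-service"
      namespace = "argocd"
    }
    spec = {
      project = "default"
      source = {
        repoURL        = "https://github.com/example-org/k8s-config.git" # placeholder
        targetRevision = "HEAD"
        path           = "apps/example-service"
      }
      destination = {
        server    = "https://kubernetes.default.svc"
        namespace = "example-service"
      }
      syncPolicy = {
        automated = {
          # ArgoCD reverts manual drift and prunes resources deleted from Git.
          prune    = true
          selfHeal = true
        }
      }
    }
  }
}
```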

Photo by Johannes Plenio on Unsplash

Continuously evolving

We have been on our Kubernetes journey for a while now, and our platform continues to evolve alongside advances from Kubernetes and AWS, industry best practices and the continuous flow of new technologies.

One example of how we have evolved the platform is migrating from worker nodes managed manually with EC2 Auto Scaling Groups to Managed Node Groups. Managed Node Groups still use EC2 ASGs under the covers, but management is simplified and Kubernetes-specific configuration (for example node labels and taints) can be specified directly on the resource.

We are constantly looking for ways to improve our platform, including making broader use of Istio and simplifying our monitoring stack.
