Deploy Cortex on EKS Cluster

Priyankar Prasad
Nov 5, 2021 · 3 min read


Horizontally scalable, highly available, multi-tenant, long term storage for Prometheus.

Cortex is mainly used to store Prometheus metrics for longer time periods. By utilizing the multi-tenancy feature of Cortex, multiple Prometheus instances can be configured to send metrics to Cortex with unique tenant names, so that Cortex stores each tenant’s metrics separately in the main storage. Grafana can be used to visualize the metrics by querying the data directly from the Cortex API endpoint; since Cortex supports PromQL queries, it can be added to Grafana as a Prometheus-type data source.
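For example, each Prometheus instance can ship its metrics through remote_write and identify itself with its tenant name in the X-Scope-OrgID header. A minimal sketch, assuming a remote_write-capable Prometheus version; the endpoint URL and tenant name below are placeholders:

# prometheus.yml (fragment): remote_write to Cortex with a unique tenant name
remote_write:
  - url: http://cortex-nginx.cortex.svc.cluster.local/api/v1/push  # placeholder Cortex push endpoint
    headers:
      X-Scope-OrgID: team-a  # tenant name for this Prometheus instance

On the Grafana side, the same header can be set on a Prometheus-type data source that points at the Cortex query endpoint, so dashboards only see that tenant’s data.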

Block Storage Architecture

More detail on the Cortex architecture is explained in the Cortex documentation. Refer to the services section of the documentation to get a better understanding of the components/services of Cortex. If you are planning to deploy Cortex in production, please also read the tips from the Cortex developers.

In this article, I’m going to explain how to deploy Cortex v1.9 using the Helm chart with the blocks storage engine. I’m creating an S3 bucket as the block storage using Terraform. As a prerequisite, Cortex requires a key-value store to hold the hash ring; Consul will be used for this requirement.

1st Step: Install Consul

$ kubectl create ns consul-demo
namespace/consul-demo created
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install consul hashicorp/consul -f ~/Documents/consul-values.yaml -n consul-demo
Output of Helm Install

The consul-values.yaml can be found in the GitHub repository. The replica count and resource limits can be changed as per your requirements, and the persistent volume capacity should be increased if the Cortex rings hold a considerable amount of data, as sketched below.
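As a rough illustration (not the exact file from the repository), the values mainly tune the server replicas, resources, and storage of the HashiCorp Consul chart; the numbers below are placeholders:

# consul-values.yaml (illustrative sketch, not the exact file from the repository)
global:
  name: consul
server:
  replicas: 3        # adjust per requirement
  storage: 10Gi      # increase if the Cortex rings hold a lot of data
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
ui:
  enabled: false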

2nd Step: Create the main block storage

I’m using an AWS S3 bucket for this because I’m deploying Cortex in an EKS cluster. I’m going to create the S3 bucket and give the Cortex service account read-write access to it using an IAM role, and I’m going to use Terraform modules to create these components. I have already authenticated to the AWS account, so credentials are not mentioned in the Terraform provider.

$ terraform init
$ terraform plan
$ terraform apply
Output of Terraform apply

If there are no errors, Terraform should create the five components shown above. Most importantly, the S3 bucket and the IAM role should be created and configured properly. The Terraform state file is stored locally here, but it is recommended to store it in a centralized location, such as an S3 bucket.
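On EKS, the usual way to hand that IAM role to the Cortex pods is IAM Roles for Service Accounts (IRSA): the role ARN is annotated on the Cortex service account in cortex-values.yaml. A minimal sketch, assuming the chart’s serviceAccount values; the ARN is a placeholder:

# cortex-values.yaml (fragment): attach the IAM role created by Terraform via IRSA
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cortex-s3-access  # placeholder role ARN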

3rd Step: Deploy Cortex

$ kubectl create ns cortex
namespace/cortex created
$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
"cortex-helm" has been added to your repositories
$ helm install cortex -f ~/Documents/cortex-values.yaml cortex-helm/cortex --version 0.6.0 -n cortex
Output of Helm Install

Cortex v1.9 is installed using v0.6.0 of the Cortex Helm chart, which is why the version is pinned in the helm install command. I’ll write about how to upgrade to/install the latest version of Cortex in a future article. This cortex-values.yaml can be found in the GitHub repository; I have explained most of the important/custom configurations in the values file to give you a clear understanding. There are more parameters in the default values file, which you can check in the cortex-helm-chart GitHub repository.
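The heart of cortex-values.yaml is the Cortex configuration itself: the blocks storage pointing at the S3 bucket from step 2 and the rings pointing at the Consul installed in step 1. A simplified sketch, assuming the chart’s config section; the bucket name, endpoint, and Consul address are placeholders, and the real file carries many more settings:

# cortex-values.yaml (fragment): blocks storage on S3 and the hash ring in Consul
config:
  storage:
    engine: blocks
  blocks_storage:
    backend: s3
    s3:
      bucket_name: my-cortex-blocks            # placeholder: bucket created by Terraform
      endpoint: s3.us-east-1.amazonaws.com     # placeholder: region endpoint
  ingester:
    lifecycler:
      ring:
        kvstore:
          store: consul
          consul:
            host: consul-server.consul-demo.svc:8500  # Consul server service from step 1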

Deployed Cortex Components

I published an article earlier about FluxCD. If you are using that kind of continuous delivery tool in Kubernetes, I have created HelmRelease files for Consul and Cortex, which can be found in the same GitHub repository.
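For reference, a Flux v2 HelmRelease for Cortex would look roughly like this. This is a sketch and not the exact file from the repository; the HelmRepository source and values ConfigMap names are assumptions:

# cortex-helmrelease.yaml (illustrative sketch)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cortex
  namespace: cortex
spec:
  interval: 5m
  chart:
    spec:
      chart: cortex
      version: 0.6.0
      sourceRef:
        kind: HelmRepository
        name: cortex-helm       # assumed HelmRepository pointing to the cortex-helm-chart repo
        namespace: flux-system
  valuesFrom:
    - kind: ConfigMap
      name: cortex-values       # assumed ConfigMap holding cortex-values.yaml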

