YAHT: How to enable LDAP authentication in ECK with groups and spaces

Alexei Ivanov
6 min read · Jun 21, 2024


Image: static-www.elastic.co

Welcome to the YAHT club — a series where I write simple, doable how-to’s. The series is dedicated to cloud-native operations and development, but other topics might be covered as well. YAHT, as you might have guessed, stands for Yet Another How-To (with reference to YAML, which you will see a lot here). So, let us dive into it.

Background

Recently, while working on a project, I was asked to enable LDAP authentication on a production ECK cluster. This appeared to be a pretty straightforward task; however, little did I know there were some major challenges to overcome. The use case was simple: a decent number of users (15–20 developers) wished to authenticate to the ECK cluster using their LDAP credentials, retaining the permissions and access rights of their existing user accounts (registered via Kibana). The only complication was that cluster authentication could not be set up from scratch, because a number of service accounts (service users) and roles were already present and needed to be preserved. In other words, LDAP authentication in this case had to land on top of the security settings already in place.

Prerequisites

First off, LDAP authentication is a paid feature, so if you have not already acquired a valid license for your cluster, you will need to do so before you can proceed. A “valid” license in this case means the “Enterprise” license. Yes, the good old “Platinum” or “Gold” licenses cannot be used. You will not find this information in the current version of the Elastic documentation, as (apparently) this rule has been in effect since forever and at some point Elastic simply stopped “advertising” it (https://www.elastic.co/guide/en/cloud-on-k8s/1.0/k8s-licensing.html).

Some things you learn the hard way…

Next, you will need a running ECK cluster. Even though this may seem obvious, let us not forget that ECK is operator-controlled, so you will need the ECK operator installed for this. If you do not have one, you are in luck: you will have a lot of fun installing it and setting things up for the first time. Since the procedure for installing and setting up the ECK operator has been thoroughly covered by other sources, I will not include it in this guide; it should not be hard to find.

Finally, you will need an LDAP server with groups and users. Since this article caught your attention I will assume you already have one.

Procedure

Now, on to the good part. Since I was doing this on an OpenShift cluster, I will be using the oc CLI throughout this guide. However, since the “K” in “ECK” stands for “Kubernetes”, the same procedure applies to any Kubernetes installation as well.

I will try to cover this in as much detail as possible, but some parts of the configuration will be redacted for obvious (and maybe not so obvious) reasons.

The namespace where the ECK cluster resides in this guide is called “elastic-system”, but yours can be different.

Step 1

Let us apply the license first. If you have already done so, you can skip this step. You can use the following command, assuming the license is contained within the file called “license.json”:

$ oc -n elastic-system create secret generic eck-license --from-file=license=license.json --dry-run=client -o json | oc apply -f - && oc -n elastic-system label secret eck-license "license.k8s.elastic.co/scope"=operator

Check if the license has been applied correctly:

$ oc -n elastic-system get cm elastic-licensing -o json | jq .data

You should see output similar to this:

...
"eck_license_expiry_date": "2025-05-30T23:59:59Z",
"eck_license_level": "enterprise",
...

Step 2

Create a secret containing your LDAP bind password:

$ oc -n elastic-system create secret generic elasticsearch-ldap-credentials --from-literal=xpack.security.authc.realms.ldap.ldap1.secure_bind_password=$(cat ldap_bind_pass)
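For the bind password to actually be picked up, this secret must be wired into the Elasticsearch keystore via secureSettings in your Elasticsearch manifest. A minimal sketch (your metadata and the rest of the spec will differ):

```yaml
# Excerpt from the Elasticsearch manifest. The key inside the secret
# (xpack.security.authc.realms.ldap.ldap1.secure_bind_password) is loaded
# into the Elasticsearch keystore by the operator.
spec:
  secureSettings:
    - secretName: elasticsearch-ldap-credentials
```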

Step 3

Create a configmap containing the role mapping file:

$ oc -n elastic-system create cm role-mapping --from-file=role_mapping.yml
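For reference, the role_mapping.yml could look along these lines; the role names and group DNs below are purely hypothetical and must match your own roles and LDAP tree:

```yaml
# role_mapping.yml -- maps Elasticsearch role names to LDAP group DNs.
# All DNs below are examples; replace them with groups from your LDAP server.
team_awesome_role:
  - "cn=team-awesome,ou=groups,dc=example,dc=com"
superuser:
  - "cn=elastic-admins,ou=groups,dc=example,dc=com"
```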

Later, once the custom roles from the next step are in place, you can verify that they were actually registered in your ECK cluster using this command:

$ oc -n elastic-system get secret elasticsearch-es-xpack-file-realm -o json | jq -r '.data."roles.yml"' | base64 -d | less +G

Step 4

Create a secret containing the custom roles that will be used to grant different permissions to your users. Here, I am using a very basic set of permissions, but you can of course customise this to suit your needs. Make sure the file in the stringData field is named roles.yml and that every role is named exactly as in the role_mapping.yml used earlier!

$ oc -n elastic-system create -f custom-roles-secret.yaml

Take note of the values in the resources fields: these will be your Kibana Spaces!
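As an illustration, a custom-roles-secret.yaml could look roughly like this. The role name, index pattern, and space id are assumptions for the example; Kibana privileges are granted through the reserved "kibana-.kibana" application:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-custom-roles
  namespace: elastic-system
stringData:
  roles.yml: |
    # Example role -- the name must match an entry in role_mapping.yml.
    team_awesome_role:
      cluster: ["monitor"]
      indices:
        - names: ["team-awesome-*"]            # example index pattern
          privileges: ["read", "view_index_metadata"]
      applications:
        - application: "kibana-.kibana"        # reserved Kibana application
          privileges: ["read"]
          resources: ["space:team-awesome"]    # the Kibana Space id
```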

Step 5

Add Kibana Spaces. If you have Spaces from earlier, you can reuse them — simply put them in your “elasticsearch-custom-roles” secret resource (see previous step).

Kibana Spaces are a neat way of organising your user groups and controlling what they can access in the Kibana UI. For separating the many different development teams we were working with, this was absolutely brilliant.

To add a Kibana Space using the API, you can use this command:

$ oc -n elastic-system exec -it $(oc -n elastic-system get po -l app=kibana -o name) -- curl -X POST "localhost:5601/api/spaces/space" -H "Authorization: ApiKey $(cat kibana-api-key)" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d'{"id":"team-awesome","name":"Team Awesome","initials":"TA","disabledFeatures":[],"imageUrl":""}'

Otherwise, use the Kibana UI.

Step 6

Introduce your newly created resources to your ECK instance. The amendments you need to make are quite simple, and are illustrated in the following manifest:

ECK with LDAP integration

Now, let us walk through the changes.

To begin with, we need to add our “role_mapping.yml” configmap resource. You do so by referencing the file under this setting:

xpack.security.authc.realms.ldap.ldap1.files.role_mapping

and by adding the configmap to

podTemplate.spec.containers[0].volumeMounts

as well as to

podTemplate.spec.volumes

for every node set that you have.
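Putting the pieces together, the relevant parts of the Elasticsearch manifest might look roughly like this. This is a sketch only: the version, node set name, mount paths, and all LDAP URLs and DNs are assumptions you will need to adapt to your environment.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-system
spec:
  version: 8.14.0                                  # example version
  auth:
    roles:
      - secretName: elasticsearch-custom-roles     # custom roles from Step 4
  secureSettings:
    - secretName: elasticsearch-ldap-credentials   # bind password from Step 2
  nodeSets:
    - name: default
      count: 3
      config:
        xpack.security.authc.realms.ldap.ldap1:
          order: 0
          url: "ldaps://ldap.example.com:636"      # example LDAP server
          bind_dn: "cn=elastic,ou=services,dc=example,dc=com"
          user_search:
            base_dn: "ou=users,dc=example,dc=com"
          group_search:
            base_dn: "ou=groups,dc=example,dc=com"
          files:
            role_mapping: "/usr/share/elasticsearch/config/role-mapping/role_mapping.yml"
          ssl:
            certificate_authorities: ["/usr/share/elasticsearch/config/ldap-certs/cacert.pem"]
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              volumeMounts:
                - name: role-mapping               # role_mapping.yml from Step 3
                  mountPath: /usr/share/elasticsearch/config/role-mapping
                - name: ldap-certs                 # LDAP CA certificate
                  mountPath: /usr/share/elasticsearch/config/ldap-certs
          volumes:
            - name: role-mapping
              configMap:
                name: role-mapping
            - name: ldap-certs
              configMap:
                name: ca-config-map
```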

After this, you will need to add a reference to the “cacert.pem” file from the “ca-config-map” configmap resource in the same way. You can fetch the “cacert.pem” file from any other resource in one of your namespaces that uses the same LDAP server. For instance, I used this configmap resource from another namespace (in my case called “ldap-sync”) and simply copied it to the “elastic-system” namespace:

$ CERT=$(oc -n ldap-sync get cm ca-config-map -o json | jq '.data | with_entries(if .key == "ca.crt" then .key = "cacert.pem" else . end)'); oc -n ldap-sync get cm ca-config-map -o json | jq ".data = $CERT" | jq 'del(.metadata.annotations,.metadata.labels,.metadata.creationTimestamp,.metadata.resourceVersion,.metadata.uid)' | jq '.metadata.namespace = "elastic-system"' | oc apply -f -; CERT=""

After those amendments are made, the Elasticsearch operator should begin restarting your Elasticsearch cluster. If this does not happen, delete the pods one after another to trigger it manually. You will have to do so whenever you change the configmap resource containing the “role_mapping.yml” file, or update the “ca-config-map” configmap resource: changes to configmap resources will not trigger an automatic restart of your Elasticsearch cluster, and they will not be applied until you have restarted.

That’s it! You have now integrated LDAP with your Elasticsearch cluster and can use your LDAP credentials to authenticate and authorise against it. Hopefully I have covered all the steps in enough detail for you to follow them. If not, please let me know in the comments, or reach out to me on LinkedIn. Happy DevOpsing!


Alexei Ivanov is a senior fullstack developer and cloud engineer with Sopra Steria based in Oslo, Norway.