How to install Keycloak IAM on your Kubernetes cluster, backed by Postgres

In this article I look at installing Keycloak and integrating with a Kong API Gateway inside a Kubernetes cluster to provide an OAuth and OIDC solution for your services.

Martin Hodges
15 min read · Feb 17, 2024

In this article I make several references to other articles I have written that set up the basic infrastructure to which I will add Keycloak. If you wish to follow along and set up the same cluster as me, you can find details of the full set of articles here. You can also read more about OAuth and OIDC if you need a refresher on what these protocols provide.

This was intended to be a two-part article. In this first part, we install Keycloak and gain access to it from the Internet through the Kong API Gateway.

The second article was to show how to connect Kong to Keycloak, but this requires the OpenID Connect plugin, which is only available in the Enterprise edition of Kong, a paid-for version. As I try to minimise costs in the solutions I demonstrate, I will leave that integration to others.

Keycloak in Kubernetes

Identity and Access Management (IAM)

In any Kubernetes cluster running one or more services, those services need to be able to trust the clients connecting to them. Trust means:

  • Identifying who is trying to use the service (Authentication)
  • Knowing whether they are allowed to access the service (Authorisation)

There are a number of protocols for determining authentication and authorisation but in this article we will be using OIDC and OAuth respectively. We will be using the Keycloak IAM solution along with the Kong API Gateway to implement these protocols.

Keycloak

Keycloak is an open-source Identity and Access Management (IAM) system that provides authentication and authorisation information about a user. It has additional features to allow social network login, federated identity management, two-factor authentication (2FA) and more.

Kong

Kong is an open-source API Gateway that can enforce the use of authentication on protected endpoints and can use OAuth 2.0 to manage authentication and authorisation. I am assuming that you are following my articles and that you have a Kong API Gateway already installed in your cluster.

Setup

In this article we will be installing Keycloak as a cluster and using it to manage access to a test service via the Kong API Gateway.

Keycloak will keep its data in a Postgres database.

Kong is assumed to already be installed into the cluster. It is also assumed that Postgres is being managed through CloudNativePG, as described in one of my other articles.

Securing connections with TLS

Normally when I introduce a new technology into my cluster, I like to disable the use of Transport Layer Security (TLS) connections in order to remove an obstacle to getting the technology up and running. I then add TLS later.

There are a number of connections we need to secure:

TLS connections
  • Client to gateway
  • Gateway to Kong
  • Gateway to Keycloak
  • Kong to Keycloak
  • Keycloak to Postgres
  • Kong to the microservice
  • Microservice to Postgres

Any of these connections may need a private key, a server certificate and/or a Certification Authority (CA) certificate (that was used to sign the server certificate). In other articles, I explain how to obtain a wildcard certificate from Let’s Encrypt and then how to use it to secure services within your Kubernetes cluster.
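If you just want to experiment before obtaining a real wildcard certificate, a self-signed one can stand in. This is a sketch using openssl with a hypothetical example.com domain; browsers will not trust it, so swap in your Let's Encrypt certificate for anything real:

```shell
# Generate a self-signed wildcard certificate and key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=*.example.com"

# A self-signed certificate is its own CA, so it doubles as ca.crt
cp server.crt ca.crt

# Confirm the wildcard common name
openssl x509 -noout -subject -in server.crt
```

This produces the same three files (server.key, server.crt and ca.crt) used in the next step.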

Creating a Kubernetes Secret

First we need to create a couple of secrets that the database can use. These secrets will hold the private keys, server certificates and CA certificates.

Once you have obtained or generated your certificates and keys, you should have three files:

  1. server.key — the private key for the server
  2. server.crt — the certificate with the public key for the server
  3. ca.crt — the certificate of the CA that signed the server.crt certificate
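Before creating the secrets, it is worth sanity-checking that the three files belong together. openssl can verify the signing chain and confirm the key matches the certificate (the two modulus hashes should be identical):

```shell
# Check that server.crt was signed by the CA in ca.crt
openssl verify -CAfile ca.crt server.crt

# Check that server.key is the private key for server.crt
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa  -noout -modulus -in server.key | openssl md5
```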

From the first two files we need to create a Kubernetes TLS secret. We then create a secret to hold the CA certificate. Execute the following from wherever you use kubectl (in my case I use the k8s-master node).

kubectl create secret tls my-postgres-tls --cert=./server.crt --key=./server.key -n kc
kubectl create secret generic my-postgres-ca --from-file=ca.crt=./ca.crt -n kc

We will use these in the next step.

Creating the Keycloak Postgres database

In a microservice architecture, it is recommended that a separate database is created for each microservice rather than a separate schema per microservice. In this case, Keycloak is our microservice and so we will create a separate database for it. I assume that you have previously installed the CloudNativePG (cnpg) Kubernetes Postgres database operator, which will create the cluster from a configuration file.

To create the database, create the following file (change the secrets to something more secure; you need to base64-encode any secret before including it here):

kc-db-config.yml

apiVersion: v1
kind: Secret
type: kubernetes.io/basic-auth
metadata:
  name: pg-keycloak-user
  namespace: kc
data:
  password: c2VjcmV0X3Bhc3N3b3Jk # secret_password
  username: a2V5Y2xvYWs= # keycloak
---
apiVersion: v1
kind: Secret
type: kubernetes.io/basic-auth
metadata:
  name: pg-superuser
  namespace: kc
data:
  password: c2VjcmV0X3Bhc3N3b3Jk # secret_password
  username: cG9zdGdyZXM= # postgres
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: kc-db-cluster
  namespace: kc
  labels:
    cnpg.io/reload: ""
spec:
  description: "Keycloak database cluster"
  imageName: ghcr.io/cloudnative-pg/postgresql:15.1
  instances: 1

  certificates:
    serverCASecret: my-postgres-ca
    serverTLSSecret: my-postgres-tls

  superuserSecret:
    name: pg-superuser

  managed:
    roles:
      - name: keycloak
        ensure: present
        comment: user for Keycloak application
        login: true
        superuser: false
        passwordSecret:
          name: pg-keycloak-user

  enableSuperuserAccess: true

  startDelay: 30
  stopDelay: 100
  primaryUpdateStrategy: unsupervised

  postgresql:
    parameters:
      max_connections: '200'
      shared_buffers: '256MB'
      effective_cache_size: '768MB'
      maintenance_work_mem: '64MB'
      checkpoint_completion_target: '0.9'
      wal_buffers: '7864kB'
      default_statistics_target: '100'
      random_page_cost: '1.1'
      effective_io_concurrency: '200'
      work_mem: '655kB'
      huge_pages: 'off'
      min_wal_size: '1GB'
      max_wal_size: '4GB'
    pg_hba:
      - hostssl all all 10.0.0.0/8 scram-sha-256
      - hostssl all all 192.168.0.0/16 scram-sha-256
      - hostssl all all 127.0.0.1/32 scram-sha-256
      - host all all all reject

  bootstrap:
    initdb:
      database: keycloak
      owner: keycloak
      secret:
        name: pg-keycloak-user
      postInitApplicationSQL:
        - create schema keycloak

  storage:
    size: 10Gi
    storageClass: nfs-client

The pg_hba configuration controls access to the database. Note that this assumes your Service network is on 10.0.0.0/8. You can find your range with:

kubectl cluster-info dump | grep -m 1 service-cluster-ip-range

It also assumes that your Pod network is on 192.168.0.0/16. You can find your range with:

kubectl cluster-info dump | grep -m 1 cluster-cidr

This creates:

  • A normal user called keycloak
  • A superuser called postgres
  • A Postgres database cluster called kc-db-cluster
  • A namespace for Keycloak called kc
  • A database called keycloak
  • A schema called keycloak within this database

Now you can create the database cluster with:

kubectl create namespace kc
kubectl apply -f kc-db-config.yml

As a test, we will add a port forward and try to access the database from a client, such as DBeaver. As we are using a wildcard certificate for a custom domain, it is worth adding the following entry to your development machine’s /etc/hosts file (Mac/Linux), replacing the k8s-master IP address and custom domain as required:

<K8S MASTER IP ADDRESS> pgsql.<CUSTOM DOMAIN>

Now start a port forward (remember to include the IP address for your k8s-master IP address):

kubectl port-forward svc/kc-db-cluster-rw -n kc 5432:5432 --address <K8S MASTER IP ADDRESS>

Now attach to the database using your client. You can attach to the host you added earlier over TLS/SSL, eg pgsql.requillion-solutions.com.au:5432, as the postgres user with secret_password (or whatever you chose to use). You should also check the keycloak user too.

Note that if you have any problems authenticating, you should look at the logs of the Postgres database pod and the CNPG Postgres operator. I generally find problems of this nature are related to the base64 encoding, which sometimes encodes a trailing newline into the passwords, and similar issues.
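As a concrete example of that pitfall, echo appends a newline that gets encoded into the secret, whereas printf (or echo -n) does not:

```shell
# echo adds a trailing newline, which ends up inside the decoded password
echo 'secret_password' | base64        # c2VjcmV0X3Bhc3N3b3JkCg==

# printf encodes exactly the bytes given - this matches kc-db-config.yml
printf '%s' 'secret_password' | base64 # c2VjcmV0X3Bhc3N3b3Jk
```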

Once we have a Postgres database setup, we can continue with Keycloak.

Installing Keycloak as a cluster

As with Postgres, we will use an operator to install Keycloak. Kubernetes operators are applications that manage another application. Actions the operator may take include installation, configuration, fail-over, synchronisation, backups etc.

Install the operator

Before we can load the operator, we need to install the required Custom Resource Definitions (CRDs).

From the place where you run kubectl, run the following:

kubectl apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/23.0.6/kubernetes/keycloaks.k8s.keycloak.org-v1.yml
kubectl apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/23.0.6/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml

These will install the k8s.keycloak.org/v2alpha1 APIs for Keycloak.

We created the kc namespace earlier. Now install the Keycloak operator with:

kubectl apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/23.0.6/kubernetes/kubernetes.yml -n kc

This will install the operator into the kc namespace. You can check its progress with:

kubectl get all -n kc

Once everything has started, this should give you something similar to:

NAME                                     READY   STATUS    RESTARTS         AGE
pod/kc-db-cluster-1                      1/1     Running   0                10h
pod/keycloak-operator-74c65b884d-l4ps9   1/1     Running   2558 (16h ago)   11d
pod/my-kc-0                              1/1     Running   0                60m

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
service/kc-db-cluster-r     ClusterIP   10.106.130.111   <none>        5432/TCP                        10h
service/kc-db-cluster-ro    ClusterIP   10.97.174.128    <none>        5432/TCP                        10h
service/kc-db-cluster-rw    ClusterIP   10.108.60.40     <none>        5432/TCP                        10h
service/keycloak            NodePort    10.101.157.174   10.240.0.19   8383:30181/TCP,8443:30182/TCP   24h
service/keycloak-operator   ClusterIP   10.107.216.165   <none>        80/TCP                          11d
service/my-kc-discovery     ClusterIP   None             <none>        7800/TCP                        61m
service/my-kc-service       ClusterIP   10.99.237.183    <none>        8080/TCP,8443/TCP               61m

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/keycloak-operator   1/1     1            1           11d

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/keycloak-operator-74c65b884d   1         1         1       11d

NAME                     READY   AGE
statefulset.apps/my-kc   1/1     61m

Once the operator has been installed and is running, it is now ready to install the application itself.

Install Keycloak

To install Keycloak we need to create a resource file that uses the k8s.keycloak.org/v2alpha1 API we installed earlier. This will allow us to configure certain aspects of the installation.

Create the following file (replace the < > fields with your own values):

kc-config.yml

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: my-kc
  namespace: kc
spec:
  instances: 1
  db:
    vendor: postgres
    host: kc-db-cluster-rw-ns-kc.<CUSTOM DOMAIN>
    port: 5432
    database: keycloak
    schema: keycloak
    usernameSecret:
      name: pg-keycloak-user
      key: username
    passwordSecret:
      name: pg-keycloak-user
      key: password
  http:
    httpEnabled: true
    tlsSecret: kc-tls
  hostname:
    hostname: kc.<CUSTOM DOMAIN>:30182

Check that the service is up and running with each of these commands:

kubectl get services -n kc
kubectl get pods -n kc
kubectl logs my-kc-0 -n kc -f

Keycloak can take a few minutes to start up so the last line above is worth using. It follows the logs (-f) and you can wait until it says Profile prod activated, without any errors.

Accessing the Keycloak portal

Create a Service to provide access

Later we will see how to create an external path to Keycloak via Kong and the gateway server. For now we will connect directly to the Keycloak instance.

The Keycloak operator does not support changing the type of Service it creates. This means we have to create our own Service. Create the following file (remember to replace < > fields with your own values):

kc-svc-config.yml

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: kc
  labels:
    name: keycloak
spec:
  ports:
    - name: http
      port: 8383
      targetPort: 8080
      nodePort: 30181
    - name: https
      port: 8443
      targetPort: 8443
      nodePort: 30182
  externalIPs:
    - <K8S MASTER IP ADDRESS>
  selector:
    app: keycloak
  type: NodePort

The selector ensures that this Service will be connected to your Keycloak instance.

Note that we are exposing both HTTP and HTTPS ports for now. I would recommend removing the HTTP (port 30181) if it is not needed.

Also note that we are using the same port number (30182) that we used in the hostname in our kc-config.yml file above.
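If you do drop the HTTP port, the ports section of kc-svc-config.yml reduces to just the HTTPS entry:

```yaml
# HTTPS-only variant of the Service ports
ports:
  - name: https
    port: 8443
    targetPort: 8443
    nodePort: 30182
```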

Connecting to Keycloak

Now that we have a NodePort Service exposed on the k8s-master node, we need to connect to it. To do this, you will need to add an entry to your /etc/hosts file so that the browser request matches the custom host in your wildcard certificate.

<K8S MASTER IP ADDRESS> kc.<CUSTOM DOMAIN>

In my case, I add:

10.240.0.19 kc.requillion-solutions.com.au

You should now be able to access the admin User Interface (UI) at (in my case):

https://kc.requillion-solutions.com.au:30182

Log in to the admin User Interface (UI)

To log in as the administrator (admin), you will need the password that the operator created for you. You can find this with:

kubectl get secret my-kc-initial-admin -n kc -o jsonpath={.data.password} | base64 -d && echo

Use admin as the username and this password to log in to the administrator console.

You should see a tab showing you are in the Master Realm.

If you do, congratulations are in order as your Keycloak instance is up and running.

We are not finished yet though. We must now make this accessible from outside the cluster through our ingress NGINX gateway and through Kong.

Adding in Kong

Accessing the Keycloak portal over the Virtual Private Network (VPN) does not allow your users to log in; they need to be able to access it from the Internet. For that we need to configure our NGINX and Kong to route requests to Keycloak.

For those of you who have been following me, you may remember that I am using an Australian bare-bones cloud provider, called Binary Lane, which does not provide a cloud-native load balancer. Instead, I have deployed an NGINX reverse proxy onto an edge server that provides an ingress point into my Virtual Private Cloud (VPC) from the Internet.

NGINX ingress gateway

The article on installing Kong also includes the configuration of the NGINX ingress point. In that article we used a fake domain name but we can now replace this with our custom domain name that matches our wildcard certificate (in my case this is requillion-solutions.com.au which we can use as iam.requillion-solutions.com.au).

We want our NGINX ingress gateway to direct all traffic to our Kong API Gateway. Log in to the gw server as root and then create the following file (remember to replace the fields in < > with your own values):

/etc/nginx/sites-available/<custom domain>.conf

# Note these are only required if they are not already included elsewhere

upstream k8s_cluster {
    server <k8s-master>:32001;
    server <k8s-node1>:32001;
    server <k8s-node2>:32001;
}

server {
    listen 80;
    listen [::]:80;

    server_name *.<CUSTOM DOMAIN>;

    location / {
        proxy_pass http://k8s_cluster;
        include proxy_params;
    }
}

If you have not already created the proxy_params file, do so now:

/etc/nginx/proxy_params

proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

This passes through the Host header and other details, which we need for the routes to be effective.

Now enable the site, test the config and then restart NGINX:

ln -s /etc/nginx/sites-available/<CUSTOM DOMAIN>.conf /etc/nginx/sites-enabled/
nginx -t
systemctl restart nginx

You can now test this with the following from your development machine:

curl -H "Host: iam.<CUSTOM DOMAIN>" <GATEWAY PUBLIC IP ADDRESS>

It should give the following as we have no route set up:

{
  "message": "no Route matched with those values",
  "request_id": "f2a67d36437acf1e34486c0e05b4032b"
}

Configure Kong route

Now that our requests are reaching Kong, we must configure an appropriate route in the API Gateway. If you have been following my article on Kong, you will have a test service accessible at hello-world-1-svc in the default namespace.

First we need to tell the Kubernetes Gateway resource about our custom domain. Whilst we are here, we may as well match against our wildcard custom domain. Edit this file from my previous article and add this section at the bottom as a new virtual host to listen on (replace < > fields with your values):

kong-gw-gateway.yml

  - name: <CUSTOM DOMAIN>-selector
    hostname: "*.<CUSTOM DOMAIN>"
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All

This should give:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong-gateway
  namespace: kong
spec:
  gatewayClassName: kong-class
  listeners:
    - name: world-selector
      hostname: worlds.com
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All
    - name: <CUSTOM DOMAIN>-selector
      hostname: "*.<CUSTOM DOMAIN>"
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All

Now we need to update our Kubernetes Gateway:

kubectl apply -f kong-gw-gateway.yml

You can see your new domain is configured with:

kubectl describe gateway kong-gateway -n kong

Now we can configure a route for this host. Create the following file (replace < > fields with your own values):

kc-hw-1-route.yml

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-1
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong-gateway
      namespace: kong
  hostnames:
    - iam.<CUSTOM DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /world1
      backendRefs:
        - name: hello-world-1-svc
          port: 80
          kind: Service

Now deploy it with:

kubectl apply -f kc-hw-1-route.yml

With this in place, from your development machine, you should be able to enter:

curl -H "Host: iam.<CUSTOM DOMAIN>" <GATEWAY PUBLIC IP ADDRESS>/world1

The result should now be a valid response.

<html>
<h2>Hello world 1!!</h2>
</html>

Now you can either add iam.<CUSTOM DOMAIN> to your Domain Name host’s DNS for public access or you can add it to your local hosts file on your development machine. Either way, once done, you can enter this into your browser:

http://iam.<CUSTOM DOMAIN>/world1

Note: ensure that your address is HTTP and not HTTPS. The latter will give you a 'site not found' error as we have not yet enabled TLS.

Establishing an HTTPS connection

In this step, we will ensure that the NGINX ingress service will accept HTTPS connections over TLS using our wildcard certificate.

Adding your certificate and key to NGINX

The first thing to do is to copy your server.crt and server.key from earlier to the /etc/ssl folder on your NGINX server. It is best to rename them to <CUSTOM DOMAIN>.crt and <CUSTOM DOMAIN>.key so it is clear what they are for, especially if you want to use other domains with your NGINX server.

As these files are text files, the easiest way to transfer them is actually to cut and paste them from one server to the other as text.

Add your HTTPS route

In this solution we are going to terminate our TLS connection at NGINX.

Find the NGINX site configuration file we had earlier and replace it with (replacing the < > fields with your own values):

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name *.<CUSTOM DOMAIN>;

    location / {
        proxy_pass http://k8s_cluster;
        include proxy_params;
    }

    ssl_certificate /etc/ssl/<CUSTOM DOMAIN>.crt;
    ssl_certificate_key /etc/ssl/<CUSTOM DOMAIN>.key;
}

server {
    listen 80;

    server_name *.<CUSTOM DOMAIN>;

    return 302 https://$host$request_uri;
}

The first server stanza provides our HTTPS termination point and otherwise acts just as our earlier HTTP configuration did. Note that we tell NGINX where to find our certificates.

The second server stanza is our HTTP connection and tells the browser to use an HTTPS connection instead. This would normally be a 301 (permanent redirect) but I have chosen 302 here so that we can be flexible whilst we experiment. Browsers can remember 301 responses and go there straight away, making it hard to go back to an HTTP service if required.
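Once you are happy that everything works over HTTPS, the redirect can be made permanent by changing one line in the second server stanza:

```nginx
# Permanent redirect - browsers cache this, so only switch once stable
return 301 https://$host$request_uri;
```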

Now test your configuration and then restart with it:

nginx -t
systemctl restart nginx

Now try your browser again with an HTTP request. You should see a Hello World response but this time you should notice that the browser has changed to an HTTPS address!

Ok, we are on our way to securing our connections.

Connecting Kong to Keycloak

Now that we have a valid HTTPS route to a test service, we want to access our Keycloak instance.

This does not require any further changes to NGINX. We have to point a route, let's say /, at our Keycloak service using a Kubernetes HTTPRoute. Create the following file (you should know the drill with < > by now!):

kc-auth-1-route.yml

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: kc-auth-1
  namespace: kc
  annotations:
    konghq.com/strip-path: 'true'
spec:
  parentRefs:
    - name: kong-gateway
      namespace: kong
  hostnames:
    - iam.<CUSTOM DOMAIN>
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-kc-service
          port: 8080
          kind: Service

It is worth noting here that the route is created in the kc namespace. This means we do not have to specify the backend service's namespace in the backendRef, as kc will be assumed, which is what we want.
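For comparison, if the HTTPRoute lived in a different namespace, the backendRef would need an explicit namespace field, and the Gateway API would also require a ReferenceGrant in kc to permit the cross-namespace reference. A sketch of the backendRef (not needed with the setup above):

```yaml
# Only required when the HTTPRoute is outside the kc namespace
backendRefs:
  - name: my-kc-service
    namespace: kc
    port: 8080
    kind: Service
```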

Now we have set up NGINX to Kong and Kong to Keycloak. There is one more thing we have to do: we have to tell Keycloak where its Admin UI exists in the real world.

At this point I should warn you that access to the Admin UI should be restricted (maybe even placed behind your OpenVPN connection). However, for this deployment I want to get you up and running in a way that helps you understand some of the configurations you need to make to create the architecture you decide on.

We now change the Keycloak configuration with a single change.

kc-config.yml

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: my-kc
  namespace: kc
spec:
  instances: 1
  db:
    vendor: postgres
    host: kc-db-cluster-rw-ns-kc.<CUSTOM DOMAIN>
    port: 5432
    database: keycloak
    schema: keycloak
    usernameSecret:
      name: pg-keycloak-user
      key: username
    passwordSecret:
      name: pg-keycloak-user
      key: password
  http:
    httpEnabled: true
    tlsSecret: kc-tls
  hostname:
    adminUrl: 'https://iam.<CUSTOM DOMAIN>'
    hostname: iam.<CUSTOM DOMAIN>
    strict: true

You will see that the hostname section has been changed to include the reference to our custom domain. This is necessary so that Keycloak knows how to construct URLs that work on the Internet.

An observation I have made is that Keycloak tends to remember these settings (probably in its database). I have found that I have had to delete and recreate the Postgres schema to get it to accept this change. This deletes all your data and so any change like this should be made before you begin the setup of Keycloak.
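If you do need to reset it, one destructive option is to drop and recreate the keycloak schema from a psql session (for example via kubectl exec into the primary database pod). This wipes all Keycloak data, so only do it before any real setup:

```sql
-- WARNING: deletes every Keycloak table and all its data
DROP SCHEMA keycloak CASCADE;
CREATE SCHEMA keycloak AUTHORIZATION keycloak;
```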

You should now be able to go to your browser and enter:

https://iam.<CUSTOM DOMAIN>

You should then be presented with the welcome screen from Keycloak and from there you should be able to access the Admin portal.

Congratulations, you now have a working Keycloak implementation on your Kubernetes cluster!

Summary

A quick recap on what we achieved in this article:

We…

  1. created a namespace
  2. installed a Postgres database cluster for Keycloak to use
  3. tested access to the database
  4. installed a Keycloak operator
  5. configured Keycloak and installed it
  6. proved it was working by accessing the UI directly
  7. configured our NGINX ingress gateway to access a test service
  8. created and installed a gateway resource
  9. configured our NGINX ingress gateway to use HTTPS with our custom domain
  10. modified the configurations to point to Keycloak
  11. tested to ensure access to the Keycloak admin UI

If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them in the comments section.
