Intro to Hashicorp Consul’s Kubernetes Authentication

Shy Wasserman
14 min read · Jul 29, 2019


Introduction

Huh? Say what?!

That’s right: with HashiCorp’s release of Consul version 1.5.0 at the beginning of May 2019, you can now natively authenticate applications and services living in Kubernetes to Consul.

In this guide, we will create a POC (proof of concept) step by step, demonstrating this new feature. This guide assumes basic knowledge of Kubernetes and HashiCorp’s Consul. Though you may use any cloud platform or on-premise environment, in this guide we will be using Google Cloud Platform.

Overview

If we navigate over to Consul’s documentation for its auth method, we get a concise overview of its purpose and use case, as well as some technical details and a high-level overview of the logic. I highly recommend reading it at least once before continuing, since I will be explaining and expanding upon it.

Diagram 1: Consul’s official auth-method overview

So there you are reading over the documentation, everything is going swell!

Now you want to actually use it!

So you head over to the documentation for the Kubernetes-specific auth method.

Granted, there is some very important and useful information there, but it lacks any sort of guide on how to actually implement it.

So like any sensible person, you scour the web for a guide.

And then…

Defeated.

Well, I guess not actually since here you are.
Ah well, it was just a personal experience then.

Before we jump into creating our POC, let’s revisit Consul’s auth-method overview (Diagram 1) and elaborate on it in the context of Kubernetes.

Architecture

In this guide, we will be creating a Consul server on a standalone machine which will communicate with a Kubernetes cluster with a Consul client installed. Then we will create our dummy app inside a pod and use our configured auth-method to read from our Consul key/value store.

The diagram below details the architecture we are creating in this guide as well as the auth-method logic which will be explained below.

Diagram 2: Kubernetes specific auth-method overview

Just a quick note: the Consul server doesn’t need to live outside of the Kubernetes cluster for this to work, but it can, and in this guide it does.

Alright so taking Consul’s overview diagram (Diagram 1) and applying Kubernetes to it, we get the above diagram (Diagram 2) and the logic is detailed as follows:

  1. Every pod will have a service account attached to it which contains a JWT token generated by and known to Kubernetes. This token is also inserted into the pod by default.
  2. Our app or service living inside the pod will initiate a Consul login request to our Consul client. The request supplies our token and the name of a specifically created auth method (of type Kubernetes). This step maps to step 1 of Consul’s diagram (Diagram 1).
  3. Our Consul client will then forward this request to our Consul server.
  4. MAGIC! This is where the Consul server verifies the authenticity of the request, gathers details about the request’s identity, and compares them with any associated predefined rules. There will be another diagram below to illustrate this. This step maps to steps 3, 4, and 5 of Consul’s overview diagram (Diagram 1).
  5. Our Consul server will generate a Consul token with permissions according to the specified auth-method’s rules (that we defined) against the identity of the requestor. It will then send this token back. This maps to step 6 of Consul’s diagram (Diagram 1).
  6. Our Consul client forwards the token back to the requesting app or service.

Our app or service can now use this Consul token to communicate with our Consul data as defined by the token’s permissions.
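For concreteness, steps 2 and 5 above boil down to a single HTTP exchange against the client agent’s ACL login endpoint. A rough sketch of the request and response bodies (the field values here are illustrative, and the response is truncated):

```
POST /v1/acl/login
{
  "AuthMethod": "<name of the Kubernetes auth-method instance>",
  "BearerToken": "<service account JWT injected into the pod>"
}

Response (truncated):
{
  "AccessorID": "...",
  "SecretID": "<the Consul token our app will use>"
}
```

We will perform this exact exchange by hand in the testing section below.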

Magic revealed!

For those of you that aren’t satisfied with just the rabbit out of the hat, and want to know how it works… let me “show you how deep the rabbit hole goes”.

As mentioned before, our “magic” step (Diagram 2: Step 4) is where the Consul server verifies the authenticity of the request, gathers details about the request’s identity, and compares them with any associated predefined rules. This step maps to steps 3, 4, and 5 of Consul’s overview diagram (Diagram 1).
Below is a diagram (Diagram 3) that aims to illustrate what is actually going on under the hood for the Kubernetes specific auth method.

Diagram 3: Magic revealed!
  1. As a reminder of our starting point: our Consul client forwards the login request to our Consul server with a Kubernetes service account token and the name of an auth-method instance that was created prior. This step maps to step 3 in the previous diagram explanation.
  2. Now the Consul server (or leader) needs to validate that the token it received is authentic. It consults the Kubernetes cluster (through the Consul client) and, given the proper permissions, finds out whether the token is authentic and to whom it belongs.
  3. The verified identity of the request is then returned to the Consul leader, which searches for an auth-method instance of type Kubernetes with the name given in the login request.
  4. The Consul leader locates the specified auth-method instance (if it exists), reads the set of binding rules attached to it, and compares them with the verified identity attributes.
  5. Tada! Proceed to step 5 in the previous diagram explanation.
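A note on the token validation in step 2: Consul performs it with the Kubernetes TokenReview API, which is also why the system:auth-delegator cluster role comes up when we configure RBAC permissions later. The review request Consul submits looks roughly like this (the exact apiVersion depends on your cluster version):

```
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {
    "token": "<the service account JWT from the login request>"
  }
}
```

Kubernetes answers with whether the token is authenticated and, if so, the identity (service account name, namespace, UID) it belongs to.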

And now without further ado!

Start a Consul server on a standard VM

From here on I will mainly be giving instructions to create this POC, often in bullet point steps, without elaborative full sentences. Also as noted previously, I will be using GCP to create all the infrastructure but you can create the identical infra anywhere else.

  • Start a Virtual Machine (instance / server).
  • Create a firewall rule (security group in AWS):
  • I like to give the rule and the network tag the same name as the machine, in this case “skywiz-consul-server-poc”.
  • Find your local machine’s IP and add it to the source IP list so we can access the UI.
  • Open port 8500 for the UI. Click Create. We will modify this firewall again soon [reference].
  • Attach the firewall rule to the instance. Go back to the consul server VM dashboard and add “skywiz-consul-server-poc” to the network tags field. Click Save.
  • Install Consul on the VM (see here).
    Remember you need Consul version ≥ 1.5 [reference]
  • We will create a single-node Consul server; the configuration is as follows.
groupadd --system consul
useradd -s /sbin/nologin --system -g consul consul
mkdir -p /var/lib/consul
chown -R consul:consul /var/lib/consul
chmod -R 775 /var/lib/consul
mkdir /etc/consul.d
chown -R consul:consul /etc/consul.d
  • For a more in-depth guide for installing Consul and bootstrapping a 3 node cluster, see here.
  • Create a file /etc/consul.d/agent.json as follows [reference]:
### /etc/consul.d/agent.json
{
  "acl": {
    "enabled": true,
    "default_policy": "deny",
    "enable_token_persistence": true
  }
}
  • Start our Consul server:
consul agent \
  -server \
  -ui \
  -client 0.0.0.0 \
  -data-dir=/var/lib/consul \
  -bootstrap-expect=1 \
  -config-dir=/etc/consul.d
  • You should see a bunch of output and eventually “… update blocked by ACLs”.
  • Locate the public IP of the Consul server and open a browser to that IP at port 8500. See that the UI opens.
  • Try adding a key/value pair. It should give you an error. This is because we bootstrapped the Consul server with ACLs enabled and a default policy of deny.
  • Go back to your shell on the Consul server, put the process in the background (or otherwise keep it running), and enter the following:
consul acl bootstrap
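The bootstrap command prints the initial management token; its output looks roughly like this (the values below are made up):

```
AccessorID:   b5b1a918-50bc-fc46-dded-a9c7c1f5a9e5
SecretID:     527347d3-9653-07dc-adc0-598b8f2b0f4d
Description:  Bootstrap Token (Global Management)
Local:        false
Create Time:  2019-07-29 10:00:00 +0000 UTC
Policies:
   00000000-0000-0000-0000-000000000001 - global-management
```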
  • Take the “SecretID” value, go back to the UI, and under the “ACL” tab enter the SecretID you just copied. Save the SecretID somewhere as well; we will need it again later.
  • Now add a key/value pair. We will add the following for this POC:
    key: “custom-ns/test_key”
    value: “I’m in the custom-ns folder!”

Start a Kubernetes cluster for our app with Consul client as a Daemonset

  • Create a K8s (Kubernetes) cluster. We will create it in the same zone as the server for faster connections and so we can use the same subnet to easily connect with the internal IP addresses. We will name it “skywiz-app-with-consul-client-poc”.
  • As a side note, here’s a nice guide that I came across for setting up a POC consul cluster with Consul Connect.
  • We will also use Hashicorp’s helm chart with a watered-down values file.
  • Install and configure Helm.
    Config steps:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
./helm init --service-account=tiller
./helm update
  • Write the following values file for the chart:
### poc-helm-consul-values.yaml
global:
  enabled: false
  image: "consul:latest"
# Expose the Consul UI through this LoadBalancer
ui:
  enabled: false
# Allow Consul to inject the Connect proxy into Kubernetes containers
connectInject:
  enabled: false
# Configure a Consul client on Kubernetes nodes. GRPC listener is required for Connect.
client:
  enabled: true
  join: ["<PRIVATE_IP_CONSUL_SERVER>"]
  extraConfig: |
    {
      "acl": {
        "enabled": true,
        "default_policy": "deny",
        "enable_token_persistence": true
      }
    }
# Minimal Consul configuration. Not suitable for production.
server:
  enabled: false
# Sync Kubernetes and Consul services
syncCatalog:
  enabled: false
  • Apply the helm chart:
./helm install -f poc-helm-consul-values.yaml ./consul-helm --name skywiz-app-with-consul-client-poc
  • While that’s deploying, the Consul client will need network access to the Consul server, so let’s add the necessary firewall permissions.
  • Take note of the “Pod address range” located on the cluster dashboard and return to our “skywiz-consul-server-poc” firewall rule.
  • Add the pod address range to the source IP ranges and open ports 8301 and 8300.
  • Navigate to the Consul UI and, after a few minutes, you should see our cluster appear under the Nodes tab.

Configure an auth-method using Consul’s integration with Kubernetes

  • Go back to the Consul server shell and export the token you saved earlier:
    export CONSUL_HTTP_TOKEN=<SecretID>
  • We will need a few pieces of information from our Kubernetes cluster to create an instance of the auth-method:
  • kubernetes-host
kubectl get endpoints | grep kubernetes
  • kubernetes-service-account-jwt
kubectl get sa <helm_deployment_name>-consul-client -o yaml | grep "\- name:"
kubectl get secret <secret_name_from_prev_command> -o yaml | grep token:
  • The token is base64 encoded so decode it with your favorite tool [reference]
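The decoding step is plain base64. A quick self-contained illustration, using a hypothetical JWT-shaped string in place of the real token from the secret:

```shell
# The secret stores the JWT base64-encoded; base64 --decode recovers it.
# 'sample' is a made-up stand-in for the real service account token.
sample='eyJhbGciOiJSUzI1NiJ9.payload.signature'
encoded=$(printf '%s' "$sample" | base64)
printf '%s' "$encoded" | base64 --decode
```

In practice you would pipe the token: field from the secret through base64 --decode the same way.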
  • kubernetes-ca-cert
kubectl get secret <secret_name_from_prev_command> -o yaml | grep ca.crt:
  • Take the “ca.crt” certificate (after base64 decoding) and write it to a file “ca.crt”.
  • Now create an auth-method instance, substituting the placeholders with the values you just retrieved.
consul acl auth-method create \
  -type "kubernetes" \
  -name "auth-method-skywiz-consul-poc" \
  -description "This is an auth method using kubernetes for the cluster skywiz-app-with-consul-client-poc" \
  -kubernetes-host "<k8s_endpoint_retrieved_earlier>" \
  -kubernetes-ca-cert=@ca.crt \
  -kubernetes-service-account-jwt="<decoded_token_retrieved_earlier>"
  • Next, we want to create a policy and attach it to a new role. For this part, you could use the Consul UI, but we will use the command line.
  • Write the policy
### kv-custom-ns-policy.hcl
key_prefix "custom-ns/" {
  policy = "write"
}
  • Apply the policy
consul acl policy create \
  -name kv-custom-ns-policy \
  -description "This is an example policy for kv at custom-ns/" \
  -rules @kv-custom-ns-policy.hcl
  • Find the id of the policy you just created from the response output.
  • Create a role with the new policy attached
consul acl role create \
  -name "custom-ns-role" \
  -description "This is an example role for custom-ns namespace" \
  -policy-id <policy_id>
  • Create a binding rule that ties logins matching the selector to the new role
consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='custom-ns-role' \
  -selector='serviceaccount.namespace=="custom-ns"'

Last-minute configurations

Permissions

  • We need to give Consul permission to verify and identify a K8s service account token.
  • Write the following to a file [reference]:
### skywiz-poc-consul-server_rbac.yaml
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: review-tokens
  namespace: default
subjects:
  - kind: ServiceAccount
    name: skywiz-app-with-consul-client-poc-consul-client
    namespace: default
roleRef:
  kind: ClusterRole
  name: system:auth-delegator
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-getter
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: get-service-accounts
  namespace: default
subjects:
  - kind: ServiceAccount
    name: skywiz-app-with-consul-client-poc-consul-client
    namespace: default
roleRef:
  kind: ClusterRole
  name: service-account-getter
  apiGroup: rbac.authorization.k8s.io
  • Create the permissions
kubectl create -f skywiz-poc-consul-server_rbac.yaml

Connecting to the Consul Client

  • There are a few options for connecting to a daemonset as noted here, but we will go with the following easy solution:
  • Apply the following service file [reference].
### poc-consul-client-ds-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-ds-client
spec:
  selector:
    app: consul
    chart: consul-helm
    component: client
    hasDNS: "true"
    release: skywiz-app-with-consul-client-poc
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8500
  • Then apply the following inline command to create a configmap [reference]. Notice that we refer to the name of our service; replace it if necessary.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"consul": ["$(kubectl get svc consul-ds-client -o jsonpath='{.spec.clusterIP}')"]}
EOF

And we’re DONE!

Testing the auth-method

Now let’s see the magic in action!

  • Create some more key folders with the same top-level structure (i.e. <new_folder>/sample_key) and values of your choosing. Create corresponding policies and roles for the new key paths. We will make the bindings later.

The custom namespace test:

  • Create our custom namespace:
kubectl create namespace custom-ns
  • Spin up a generic pod in our new namespace. Write the following pod configuration file.
### poc-ubuntu-custom-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poc-ubuntu-custom-ns
  namespace: custom-ns
spec:
  containers:
    - name: poc-ubuntu-custom-ns
      image: ubuntu
      command: ["/bin/bash", "-ec", "sleep infinity"]
  restartPolicy: Never
  • Create the pod:
kubectl create -f poc-ubuntu-custom-ns.yaml
  • Once the container is running, open a shell inside and install curl.
kubectl exec -it poc-ubuntu-custom-ns -n custom-ns -- /bin/bash
apt-get update && apt-get install curl -y
  • Now we will send a login request to Consul using the auth-method we created earlier [reference].
  • To view the injected token from your service account:
cat /run/secrets/kubernetes.io/serviceaccount/token
  • Write the following file inside the container:
### payload.json
{
  "AuthMethod": "auth-method-skywiz-consul-poc",
  "BearerToken": "<jwt_token>"
}
  • Login!
curl \
  --request POST \
  --data @payload.json \
  consul-ds-client.default.svc.cluster.local/v1/acl/login
  • To do the above steps in one line (since we will be doing multiple tests) you can do the following:
echo "{ \
\"AuthMethod\": \"auth-method-skywiz-consul-poc\", \
\"BearerToken\": \"$(cat /run/secrets/kubernetes.io/serviceaccount/token)\" \
}" \
| curl \
--request POST \
--data @- \
consul-ds-client.default.svc.cluster.local/v1/acl/login
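Since we will be logging in several times, it helps to pull the SecretID out of the response programmatically. A sketch using plain sed, with a made-up response standing in for the real curl output:

```shell
# Extract the SecretID field from a login response. The echoed JSON below is
# a hypothetical response; in practice, pipe the curl login output instead.
echo '{"AccessorID":"ed7f8b2d-0000-0000-0000-000000000000","SecretID":"527347d3-9653-07dc-adc0-598b8f2b0f4d"}' \
  | sed -n 's/.*"SecretID":"\([^"]*\)".*/\1/p'
```

Capture that into a variable and you can feed it straight into the X-Consul-Token header on subsequent requests.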
  • Works! It should at least. Now take the SecretID from the response and try to access a key/value we should now have permission to read.
curl \
  consul-ds-client.default.svc.cluster.local/v1/kv/custom-ns/test_key \
  --header "X-Consul-Token: <SecretID_from_prev_response>"
  • You can decode the base64 “Value” and see that it matches the value at custom-ns/test_key in the UI.
    If you used the same value as above in this guide, your encoded value will be IkknbSBpbiB0aGUgY3VzdG9tLW5zIGZvbGRlciEi.
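You can verify that claim locally by decoding the value the KV API returns:

```shell
# The KV API base64-encodes the "Value" field; decoding recovers the original.
echo 'IkknbSBpbiB0aGUgY3VzdG9tLW5zIGZvbGRlciEi' | base64 --decode
# prints "I'm in the custom-ns folder!" (quotes included)
```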

The custom service account test:

  • Create a custom serviceaccount with the following command [reference].
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-sa
EOF
  • Create a new generic pod configuration file. Notice I included the curl install to save some labor :)
### poc-ubuntu-custom-sa.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poc-ubuntu-custom-sa
  namespace: default
spec:
  serviceAccountName: custom-sa
  containers:
    - name: poc-ubuntu-custom-sa
      image: ubuntu
      command: ["/bin/bash", "-ec"]
      args: ["apt-get update && apt-get install curl -y; sleep infinity"]
  restartPolicy: Never
  • Once up, start a shell inside the container
kubectl exec -it poc-ubuntu-custom-sa -- /bin/bash
  • Login!
echo "{ \
\"AuthMethod\": \"auth-method-skywiz-consul-poc\", \
\"BearerToken\": \"$(cat /run/secrets/kubernetes.io/serviceaccount/token)\" \
}" \
| curl \
--request POST \
--data @- \
consul-ds-client.default.svc.cluster.local/v1/acl/login
  • Permission denied.
    Oh, we forgot to add a new binding rule with the proper permissions; let’s do that now.

Repeat previous steps waaayyy above:
- Create an identical Policy for “custom-sa/” key prefix.
- Create Role, call it “custom-sa-role”
- Attach the Policy to the Role.

  • Create the binding rule (only possible from the CLI/API). Notice the different selector flag value.
consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='custom-sa-role' \
  -selector='serviceaccount.name=="custom-sa"'
  • Retry the login from the “poc-ubuntu-custom-sa” container.
    Success!
  • Test our access to the “custom-sa/” key path.
curl \
  consul-ds-client.default.svc.cluster.local/v1/kv/custom-sa/test_key \
  --header "X-Consul-Token: <SecretID>"
  • You can also verify that this token does not give access to the kv under “custom-ns/”.
    Just repeat the above command after replacing the “custom-sa” key prefix with “custom-ns”.
    Permission denied.

Overlap example:

  • One thing worth noting is that every binding rule that matches will contribute its role to the generated token’s permissions.
  • Our “poc-ubuntu-custom-sa” container lives in the default namespace — so let's use that for another rule-binding.
  • Repeat previous steps:
    - Create an identical Policy for “default/” key prefix.
    - Create Role, call it “default-ns-role”
    - Attach Policy to Role.
  • Create the binding rule (only possible from the CLI/API)
consul acl binding-rule create \
  -method=auth-method-skywiz-consul-poc \
  -bind-type=role \
  -bind-name='default-ns-role' \
  -selector='serviceaccount.namespace=="default"'
  • Go back to our “poc-ubuntu-custom-sa” container and try to access the “default/” kv path.
  • Permission denied.
    You can view the credentials attached to each token in the UI under ACL > Tokens. As you can see, only the one “custom-sa-role” is attached to our current token.
    The token we are currently using was generated when we logged in, and only one binding rule matched at that time. We need to log in again and use the new token.
  • Verify that you can read from both the “custom-sa/” and “default/” kv paths.
    Success!
    This is because our “poc-ubuntu-custom-sa” matches both the “custom-sa” AND the “default-ns” rule bindings.

<Insert Celebratory Dance>

Conclusion

TTL token management?

At the time of this writing, there is no integrated way to define a TTL on the tokens generated by this auth method.

This would be a fantastic feature for really ensuring secure, automated Consul authentication.

There does exist an option to manually create a token with a TTL.

Hopefully one day soon we will be able to control how tokens are generated (per rule-binding or auth-method) and add a TTL.

Until then, it is suggested that you use the logout endpoint in your logic.

Last Remarks

I hope this article was helpful to you. If you have any questions, comments, or suggestions, please feel free to leave a comment and I’ll be happy to respond :)

Oh and since you’re still here, leave a clap or two ;)
