Vault Agent with GKE
Vault Agent is a client-side daemon that makes requests to Vault on behalf of the client application, including authenticating to Vault.
Vault clients (human users, applications, etc.) must authenticate with Vault and obtain a client token to make API requests. Because tokens have a time-to-live (TTL), clients must renew the token or re-authenticate to Vault before it expires. Vault Agent authenticates with Vault and manages the token's lifecycle so that the client application doesn't have to. This eliminates the need to change your application code to invoke the Vault API; your existing applications can remain Vault-unaware.
In addition, you can use Consul Template markup in the Vault Agent configuration so that secrets are rendered to files from which the client application loads its data.
Vault Agent Auto-Auth
Auto-Auth consists of two parts: a Method, which is the authentication method that should be used in the current environment; and any number of Sinks, which are locations where the agent should write a token any time the current token value has changed.
When the agent is started with Auto-Auth enabled, it will attempt to acquire a Vault token using the configured Method. On failure, it will exponentially back off and then retry. On success, unless the auth method is configured to wrap the tokens, it will keep the resulting token renewed until renewal is no longer allowed or fails, at which point it will attempt to reauthenticate.
Every time an authentication is successful, the token is written to the configured Sinks, subject to their configuration.
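The acquire/back-off/renew cycle described above can be sketched as a simplified control loop. This is an illustrative Python sketch, not the agent's real implementation; `authenticate`, `renew`, and the sink callables are hypothetical stand-ins for the configured Method and Sinks:

```python
import time

def auto_auth_once(authenticate, renew, sinks, max_backoff=60):
    """One cycle of a simplified Auto-Auth loop: retry the auth Method with
    exponential backoff until a token is acquired, deliver it to every sink,
    then keep renewing until renewal is no longer allowed or fails."""
    backoff = 1
    token = authenticate()
    while token is None:                     # failure: back off, then retry
        time.sleep(min(backoff, max_backoff))
        backoff *= 2
        token = authenticate()
    for sink in sinks:                       # write the token to each sink
        sink(token)
    while renew(token):                      # renew until it stops working
        pass
    return token                             # the agent would now reauthenticate
```

The real agent runs this cycle indefinitely; the sketch returns after one authentication so the flow is easy to follow.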
Sinks support some advanced features, including the ability for the written values to be encrypted or response-wrapped.
Both mechanisms can be used concurrently; in this case, the value will be response-wrapped, then encrypted.
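The ordering matters: when a sink enables both features, the value is response-wrapped first and the wrapped result is then encrypted. A minimal sketch of that ordering, with `wrap` and `encrypt` as hypothetical stand-ins for the real operations:

```python
def prepare_sink_value(token, wrap=None, encrypt=None):
    """Apply sink post-processing in the order described above:
    response-wrap first, then encrypt the wrapped value."""
    value = token
    if wrap is not None:
        value = wrap(value)      # hypothetical stand-in for response wrapping
    if encrypt is not None:
        value = encrypt(value)   # hypothetical stand-in for encryption
    return value
```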
Response-Wrapping Tokens
There are two ways that tokens can be response-wrapped by the agent:
- By the auth method. This allows the end client to introspect the creation_path of the token, helping prevent Man-In-The-Middle (MITM) attacks. However, because the agent cannot then unwrap the token and rewrap it without modifying the creation_path, the agent is not able to renew the token; it is up to the end client to renew the token. The agent stays daemonized in this mode since some auth methods allow for reauthentication on certain events.
- By any of the token sinks. Because more than one sink can be configured, the token must be wrapped after it is fetched, rather than wrapped by Vault as it's being returned. As a result, the creation_path will always be sys/wrapping/wrap, and validation of this field cannot be used as protection against MITM attacks. However, this mode allows the agent to keep the token renewed for the end client and automatically reauthenticate when it expires.
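The MITM check mentioned above reduces to inspecting the wrapped token's creation_path: a token wrapped by the auth method has a path under the auth mount, while a sink-wrapped token always shows sys/wrapping/wrap. An illustrative sketch of such a client-side check (the function name and expected prefix are assumptions for this example):

```python
def creation_path_trustworthy(wrap_info, expected_prefix="auth/"):
    """Return True only if the wrapping token was created by the auth method
    itself; sink-wrapped tokens always show sys/wrapping/wrap and give no
    MITM protection via this field."""
    path = wrap_info.get("creation_path", "")
    if path == "sys/wrapping/wrap":
        return False             # wrapped by a sink: field proves nothing
    return path.startswith(expected_prefix)
```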
Vault Agent with Kubernetes (GKE)
Nearly all requests to Vault must be accompanied by an authentication token. If you can securely deliver the first secret from an originator to a consumer, all subsequent secrets transmitted between them can be authenticated with the trust established by the successful distribution and use of that first secret. Applications running in a Kubernetes environment are no exception. Fortunately, Vault provides the Kubernetes auth method to authenticate clients using a Kubernetes service account token.
However, the client is still responsible for managing the lifecycle of its Vault tokens. Therefore, the next challenge becomes how to manage the lifecycle of tokens in a standard way without having to write custom logic.
Vault Agent provides a number of different helper features, specifically addressing the following challenges:
- Automatic authentication
- Secure delivery/storage of tokens
- Lifecycle management of these tokens (renewal & re-authentication)
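With the agent handling all three concerns, the only Vault-specific logic an application might still need is reading the token from the sink file. A minimal sketch (the sink path matches the file sink configured later in this guide; the function name is an assumption):

```python
from pathlib import Path

def vault_headers(sink_path="/home/vault/.vault-token"):
    """Read the token Vault Agent wrote to its file sink and build the
    request headers a Vault API call would need."""
    token = Path(sink_path).read_text().strip()
    return {"X-Vault-Token": token}
```

Applications rendering secrets via the agent's template feature can skip even this and stay fully Vault-unaware.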
1. Set your working directory to the learn-vault-agent/vault-agent-k8s-demo/terraform-gcp folder.
2. Modify terraform.tfvars.example to provide your GCP credentials (account_file_path and project), and save it as terraform.tfvars.
terraform.tfvars
account_file_path = "/usr/student/gcp/vault-test-project.json"
project = "vault-test-project"
3. Execute the Terraform commands to provision a new GKE cluster. Pull the necessary plugins:
$ terraform init
Now, execute the apply command to build a new GKE cluster:
$ terraform apply -auto-approve
4. Connect to the GKE cluster.
$ gcloud container clusters get-credentials $(terraform output gcp_cluster_name) \
    --zone $(terraform output gcp_zone) \
    --project $(terraform output gcp_project)
5. Get the cluster information.
$ kubectl cluster-info
Kubernetes master is running at https://198.51.100.24
GLBCDefaultBackend is running at https://198.51.100.24/api/v1/namespaces/...
...
6. Copy the Kubernetes master address.
7. In the /vault-agent-k8s-demo/setup-k8s-auth.sh file, replace Line 48 so that K8S_HOST points to the GKE cluster address instead of export K8S_HOST=$(minikube ip). Also, replace Line 54 to point to the correct host address.
Example:
...
# Set K8S_HOST to minikube IP address
# export K8S_HOST=$(minikube ip)
export K8S_HOST="https://198.51.100.24"
# Enable the Kubernetes auth method at the default path ("auth/kubernetes")
vault auth enable kubernetes
# Tell Vault how to communicate with the Kubernetes (Minikube) cluster
# vault write auth/kubernetes/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="https://$K8S_HOST:8443" kubernetes_ca_cert="$SA_CA_CRT"
vault write auth/kubernetes/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="$K8S_HOST" kubernetes_ca_cert="$SA_CA_CRT"
...
8. Set your working directory to where the scripts are located (learn-vault-agent/vault-agent-k8s-demo).
$ cd ..
9. Create a service account, vault-auth.
$ kubectl create serviceaccount vault-auth
10. Update the vault-auth service account with the definition provided in the vault-auth-service-account.yaml file.
$ kubectl apply --filename vault-auth-service-account.yaml
11. Set up the Kubernetes auth method on the Vault server.
$ ./setup-k8s-auth.sh
Determine the Vault address
Because you configured Vault to bind to all networks on the host, pods within Minikube's cluster can reach it by sending requests to the Kubernetes cluster's gateway address.
1. Start a minikube SSH session.
$ minikube ssh
## ... minikube ssh login
2. Within this SSH session, retrieve the value of the Minikube host.
$ dig +short host.docker.internal
192.168.65.2
3. Next, retrieve the status of the Vault server to verify network connectivity.
$ dig +short host.docker.internal | xargs -I{} curl -s http://{}:8200/v1/sys/seal-status
{
  "type": "shamir",
  "initialized": true,
  "sealed": false,
  "t": 1,
  "n": 1,
  "progress": 0,
  "nonce": "",
  "version": "1.4.1+ent",
  "migration": false,
  "cluster_name": "vault-cluster-3de6c2d3",
  "cluster_id": "10fd177e-d55a-d740-0c54-26268ed86e31",
  "recovery_seal": false,
  "storage_type": "inmem"
}
The output shows that Vault is initialized and unsealed. This confirms that pods within your cluster are able to reach Vault, given that each pod is configured to use the gateway address.
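The same connectivity check can be scripted; the fields that matter are initialized and sealed. A small illustrative parser for the seal-status response body (the function name is an assumption):

```python
import json

def vault_ready(seal_status_body):
    """Parse a /v1/sys/seal-status response body and report whether the
    server is usable: initialized and unsealed."""
    status = json.loads(seal_status_body)
    return status["initialized"] and not status["sealed"]
```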
4. Next, exit the Minikube SSH session.
$ exit
5. Finally, create a variable named EXTERNAL_VAULT_ADDR to capture the Minikube gateway address.
$ EXTERNAL_VAULT_ADDR=$(minikube ssh "dig +short host.docker.internal" | tr -d '\r')
6. Verify that the variable contains the IP address you saw when executing the command in the Minikube shell.
$ echo $EXTERNAL_VAULT_ADDR
Start Vault Agent with Auto-Auth
Now that you have verified that the Kubernetes auth method has been configured on the Vault server, it is time to spin up a client Pod which leverages Vault Agent to automatically authenticate with Vault and retrieve a client token.
1. First, open the provided configmap.yaml file in your preferred text editor and review its content.
configmap.yaml
apiVersion: v1
data:
  vault-agent-config.hcl: |
    # Comment this out if running as sidecar instead of initContainer
    exit_after_auth = true

    pid_file = "/home/vault/pidfile"

    auto_auth {
      method "kubernetes" {
        mount_path = "auth/kubernetes"
        config = {
          role = "example"
        }
      }

      sink "file" {
        config = {
          path = "/home/vault/.vault-token"
        }
      }
    }

    template {
      destination = "/etc/secrets/index.html"
      contents = <<EOT
      <html>
      <body>
      <p>Some secrets:</p>
      {{- with secret "secret/data/myapp/config" }}
      <ul>
      <li><pre>username: {{ .Data.data.username }}</pre></li>
      <li><pre>password: {{ .Data.data.password }}</pre></li>
      </ul>
      {{ end }}
      </body>
      </html>
      EOT
    }
kind: ConfigMap
metadata:
  name: example-vault-agent-config
  namespace: default

This creates a Vault Agent configuration file, vault-agent-config.hcl. Notice that the Vault Agent Auto-Auth (auto_auth block) is configured to use the kubernetes auth method enabled at the auth/kubernetes path on the Vault server. The Vault Agent will use the example role which you created in Step 2.
The sink block specifies the location on disk where the token is written. The sink block can be configured multiple times if you want Vault Agent to place the token in multiple locations.
Finally, the template block creates a templated file which retrieves username and password values at the secret/data/myapp/config path.
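A rough illustration of what the template block does at render time: fetch the secret's data and substitute its fields into the HTML skeleton. This Python sketch stands in for Consul Template's rendering (the function name is an assumption, and the real renderer writes the result to the destination file):

```python
def render_secret_page(secret_data):
    """Produce HTML like the template above, given the 'data' map of a
    KV v2 secret read from secret/data/myapp/config."""
    rows = "".join(
        f"<li><pre>{key}: {value}</pre></li>"
        for key, value in secret_data.items()
    )
    return f"<html><body><p>Some secrets:</p><ul>{rows}</ul></body></html>"
```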
2. Create a ConfigMap containing the Vault Agent configuration.
$ kubectl create --filename configmap.yaml
configmap/example-vault-agent-config created
3. View the created ConfigMap.
$ kubectl get configmap example-vault-agent-config --output yaml
4. Review the provided example Pod spec file, example-k8s-spec.yaml.
example-k8s-spec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vault-agent-example
  namespace: default
spec:
  serviceAccountName: vault-auth
  volumes:
    - configMap:
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
        name: example-vault-agent-config
      name: config
    - emptyDir: {}
      name: shared-data
  initContainers:
    - args:
        - agent
        - -config=/etc/vault/vault-agent-config.hcl
        - -log-level=debug
      env:
        - name: VAULT_ADDR
          value: http://EXTERNAL_VAULT_ADDR:8200
      image: vault
      name: vault-agent
      volumeMounts:
        - mountPath: /etc/vault
          name: config
        - mountPath: /etc/secrets
          name: shared-data
  containers:
    - image: nginx
      name: nginx-container
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: shared-data

The example Pod spec (example-k8s-spec.yaml) spins up two containers in the vault-agent-example pod: a vault container which runs Vault Agent as an init container, and an nginx container exposing port 80.
The Vault address, VAULT_ADDR, is set to a placeholder value EXTERNAL_VAULT_ADDR.
5. Generate the Pod spec with the EXTERNAL_VAULT_ADDR variable value in its place.
$ cat example-k8s-spec.yaml | \
    sed -e s/"EXTERNAL_VAULT_ADDR"/"$EXTERNAL_VAULT_ADDR"/ \
    > vault-agent-example.yaml
6. Create the vault-agent-example pod defined in vault-agent-example.yaml.
$ kubectl apply --filename vault-agent-example.yaml --record
It takes a minute or so for the pod to become fully up and running.
Verification
1. In another terminal, launch the Minikube dashboard.
$ minikube dashboard
2. Click Pods under Workloads to verify that the vault-agent-example pod has been created successfully.
3. Select vault-agent-example to see its details.
4. In another terminal, port forward all requests made to http://localhost:8080 to port 80 on the vault-agent-example pod.
$ kubectl port-forward pod/vault-agent-example 8080:80
5. In a web browser, go to localhost:8080.
Notice that the username and password values were successfully read from secret/myapp/config.
6. Optionally, you can view the HTML source.
$ kubectl exec -it vault-agent-example --container nginx-container -- cat /usr/share/nginx/html/index.html
<html>
<body>
<p>Some secrets:</p>
<ul>
<li><pre>username: appuser</pre></li>
<li><pre>password: suP3rsec(et!</pre></li>
</ul>
</body>
</html>
