Streamlining Multi-Cloud Kubernetes Access Using OIDC

Naveen M
May 28, 2024


As businesses increasingly adopt multi-cloud environments, the task of managing Kubernetes clusters across different cloud providers becomes crucial. In our setup, apart from handling on-premises clusters, we work extensively with Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Oracle Kubernetes Engine (OKE). Integrating such diverse platforms poses a complex challenge, especially when it comes to using a unified method for secure access. A particular aspect of this challenge is enabling consistent and secure access using kubectl, managed via OpenID Connect (OIDC).

Background: The OIDC Challenge in Kubernetes

Kubernetes natively supports user authentication through OIDC, which usually involves straightforward adjustments to the API server’s configuration. However, in a cloud environment, the API server settings are often managed and obscured by the service provider, complicating direct OIDC integration. While some providers like GKE and EKS offer built-in support for OIDC, they do so in different ways and to varying extents. Notably, during our last assessment, OKE did not have support for OIDC, illustrating the inconsistency across platforms.

Our search for a one-size-fits-all solution came up empty; no active project seemed to address this need adequately across all the cloud vendors we work with.

Creating a Unified OIDC Solution

To address these discrepancies, our goal was to develop a solution that was not only universally applicable across any cloud provider but also easy to manage. We opted for a reverse proxy approach that standardizes the authentication process using Kubernetes’ user impersonation feature.

Note that this setup can also be deployed at single-cluster scope, where an in-cluster service account token is used to interact with the API server. We opted for a centralized solution instead, driven primarily by our intricate compliance needs and the necessity for effortless installation across multiple clusters.

How Our Solution Works

The mechanism is straightforward yet effective:

  1. User Command Initiation: Users execute kubectl commands as usual.
  2. Command Processing via Reverse Proxy: These commands hit a reverse proxy, where user session tokens are verified.
  3. Impersonation and Request Forwarding: The proxy then injects impersonation details (based on the verified tokens) and forwards these modified requests to the target Kubernetes API server.

This method permits us to manage authentication uniformly whether the clusters are part of GKE, EKS, or even our on-premises setups, smoothing out integrations and operational workflows.

Core Components of Our Implementation

The architecture hinges on several key components:

High level architecture diagram

Reverse Proxy (Envoy):

This is the entry point for all kubectl commands, tasked with initial processing and routing of requests. Envoy is robust enough to handle various Kubernetes operations, including those that require special protocols like web-sockets for terminal access and port-forwarding.

Three critical configurations form the backbone of our reverse proxy setup with Envoy:

  1. Listener Configuration
  2. Cluster Configuration
  3. External Authorization Filter

1. Listener Configuration

The Listener configuration is crucial as it handles incoming requests. Below is an example of how we configure a listener for one of our clusters using Envoy:

- name: listener_2
  address:
    socket_address: { address: 0.0.0.0, port_value: 8443 }
  per_connection_buffer_limit_bytes: 2097152
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        codec_type: AUTO
        stat_prefix: ingress_http
        generate_request_id: true
        upgrade_configs:
        - upgrade_type: "websocket"
        route_config:
          name: local_route
          virtual_hosts:
          - name: local_service
            domains: ["*"]
            routes:
            - match:
                prefix: "/v1/clusters/cluster1"
                headers: [ { name: "upgrade" }, { name: "connection", exact_match: "upgrade" } ]
              route:
                upgrade_configs: [ { upgrade_type: "SPDY/3.1" }, { upgrade_type: "websocket" } ]
                cluster: service_k8s_upgrade_cluster1
                timeout: "0s"
                prefix_rewrite: "/"
            - match: { prefix: "/v1/clusters/cluster1" }
              route:
                cluster: service_k8s_cluster1
                timeout: "0s"
                prefix_rewrite: "/"

2. Cluster Configuration

The Cluster configuration deals with the settings for the backend cluster API servers. Here’s an example of a cluster configuration for one of our Kubernetes clusters:

clusters:
- name: service_k8s_upgrade_cluster1
  connect_timeout: 3s
  # http2_protocol_options: {}
  type: STRICT_DNS
  # Comment out the following line to test on v6 networks
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: service_k8s_upgrade_cluster1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: cluster1-api-server.foo.com
              port_value: 443
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      common_tls_context:
        alpn_protocols:
        - http/1.1
        validation_context:
          trusted_ca:
            filename: /etc/certs/cluster1
- name: service_k8s_cluster1
  connect_timeout: 3s
  type: STRICT_DNS
  # Comment out the following line to test on v6 networks
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: service_k8s_cluster1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: cluster1-api-server.foo.com
              port_value: 443
  http2_protocol_options: { hpack_table_size: 65536 }
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      common_tls_context:
        alpn_protocols:
        - h2
        - http/1.1
        validation_context:
          trusted_ca:
            filename: /etc/certs/cluster1

In this configuration, we define how Envoy should connect to the backend Kubernetes API server, specifying details such as connection timeout, load balancing policy, and TLS settings.

3. External Authorization Filter

Lastly, the External Authorization Filter (ext_authz) involves setting up an authorization pathway for the traffic, ensuring secure and validated access. Below is an example of how we configure the ext_authz in Envoy:

http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    http_service:
      server_uri:
        uri: http://kube-oidc-central-proxy-authz-server:8200/
        cluster: ext-authz-http
        timeout: 0s
      authorization_request:
        allowed_headers:
          patterns:
          - exact: "authorization"
          - exact: "content-type"
      authorization_response:
        allowed_upstream_headers:
          patterns:
          - prefix: "authorization"
            ignore_case: true
        allowed_upstream_headers_to_append:
          patterns:
          - prefix: "impersonate-"
            ignore_case: true

# The ext-authz-http cluster config could look like:
clusters:
- name: ext-authz-http
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: ext-authz-http
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: kube-oidc-central-proxy-authz-server
              port_value: 8200

This configuration directs Envoy to use an external HTTP service for authorization decisions, specifying both the service URI and the behavior for handling authorization headers.

External Authorization (Envoy ext_authz):

While there is substantial documentation on configuring ext_authz in Envoy, a critical aspect to remember is the inclusion of Impersonate headers, as mapped out in the Kubernetes Authentication documentation.

In scenarios where a user belongs to multiple groups, multiple Impersonate-Group headers need to be dynamically handled. This feature is crucial for catering to users who have varying access levels across different Kubernetes resources based on their group memberships. The example below illustrates how this might look:

Impersonate-User: jane.doe@example.com
Impersonate-Group: developers
Impersonate-Group: admins

The default behavior for the ext_authz gRPC service in Envoy is to overwrite and pass only one Impersonate-Group to the backend cluster. However, when leveraging the ext_authz HTTP service, we unlock the capability to append multiple Impersonate-Group headers. This behavior is essential for accurately representing the user's multiple group memberships in their requests to the Kubernetes API.

Below is an example implementation of the ext_authz HTTP service handler in Go, which shows how it can handle these scenarios:

func (s *Server) AuthZHttpServer(response http.ResponseWriter, request *http.Request) {
	ctx := request.Context()
	log := logger.NewEntry(ctx)

	var authToken string
	authHeader := request.Header.Get("authorization")
	var splitToken []string

	if authHeader != "" {
		splitToken = strings.Split(authHeader, "Bearer ")
	}

	if len(splitToken) == 2 {
		authToken = splitToken[1]
	}

	if len(authToken) == 0 {
		log.Errorln("Authorization Header malformed or not provided")
		response.WriteHeader(http.StatusUnauthorized)
		_, _ = response.Write([]byte("Authorization Header malformed or not provided"))
		return
	}

	// 1. Verify the token using go-oidc, and get the ID token claims
	idToken, err := s.IDTokenVerifier.Verify(ctx, authToken)
	if err != nil {
		log.Errorf("failed to verify idToken due to error: %v", err)
		response.WriteHeader(http.StatusUnauthorized)
		_, _ = response.Write([]byte("PERMISSION_DENIED"))
		return
	}
	claims := &TokenClaims{}
	if err := idToken.Claims(claims); err != nil {
		log.Errorf("failed to get token claims due to error: %v", err)
		response.WriteHeader(http.StatusUnauthorized)
		_, _ = response.Write([]byte("PERMISSION_DENIED"))
		return
	}

	// Add the impersonated user email
	response.Header().Add("impersonate-user", claims.Email)

	// Add the impersonated group headers
	for _, group := range claims.Groups {
		response.Header().Add("impersonate-group", group)
	}

	// Pick up the cluster service account token based on the cluster name
	pattern := regexp.MustCompile(`^/v1/clusters/([^/]+)`)
	matches := pattern.FindStringSubmatch(request.RequestURI)
	clusterName := ""
	clusterToken := ""
	if len(matches) == 2 {
		clusterName = matches[1]
	}

	ct, ok := config.ThreadSafeClusterTokenMap.Load(clusterName)
	if ok {
		clusterToken = ct.(string)
	}

	// Overwrite the user authorization header with the cluster service account token
	response.Header().Set("authorization", fmt.Sprintf("Bearer %s", clusterToken))
	response.WriteHeader(http.StatusOK)
}

Configured Service Accounts and Cluster Settings:

A careful examination of the Envoy cluster configuration above reveals that TLS settings must be provided for each cluster. Furthermore, the logic in the external authorization HTTP service requires that we supply each cluster's service account token. These tokens overwrite the user's session token, ensuring that each request to the Kubernetes API is authenticated and authorized under the correct service account.

When configuring your cluster settings, the key elements include:

  • Cluster Name
  • Bearer Token
  • CA Cert Config

The configuration might be done manually or through automated systems, depending on your organization’s operational and security needs.
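These per-cluster settings can be modeled as a small configuration record. The sketch below is illustrative; the struct and field names are assumptions, not our actual configuration schema:

```go
package main

import "fmt"

// ClusterSettings captures the per-cluster inputs the proxy needs:
// a name (used in the /v1/clusters/<name> path), the service account
// bearer token, and the CA certificate used to validate the API
// server's TLS endpoint. Field names are illustrative.
type ClusterSettings struct {
	Name        string
	BearerToken string
	CACertPath  string
}

func main() {
	c := ClusterSettings{
		Name:        "cluster1",
		BearerToken: "redacted",
		CACertPath:  "/etc/certs/cluster1",
	}
	// The cluster name maps directly onto the Envoy route prefix.
	fmt.Printf("/v1/clusters/%s -> CA %s\n", c.Name, c.CACertPath)
}
```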

Kubeconfig Generation:

When using Envoy as a proxy, it’s crucial to configure the kubeconfig properly so that users communicate through the Envoy service rather than directly with the Kubernetes API server. This requires including specific configurations such as the SSL CA certificate of the Envoy service and the precise Envoy endpoint which includes the path specific to the cluster.

Example of a Configured Kubeconfig

apiVersion: v1
kind: Config
clusters:
- name: cluster1
  cluster:
    certificate-authority-data: [Base64-encoded-CA-certificate of Envoy endpoint]
    server: https://envoy-service-endpoint/v1/clusters/cluster1
contexts:
- name: cluster1-context
  context:
    cluster: cluster1
    user: oidc
current-context: cluster1-context
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://oidc-endpoint
      - --oidc-client-id=kubernetes
      command: kubectl
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false

In this configuration:

  • The certificate-authority-data is the base64-encoded CA certificate of the Envoy proxy.
  • The server directive points to the Envoy service, appending the specific path to target the right cluster.

By setting up the kubeconfig in this manner, all kubectl commands issued by the user are securely routed through the Envoy proxy, maintaining consistent access controls and authentication managed via OIDC.

Additional Technical Enhancements

While the core of our solution is designed to be as simple and maintainable as possible, we’ve implemented several advanced features to address specific needs:

  • Dynamic Configuration Loading: Cluster configurations, including credentials and certificates, are automatically loaded into Envoy based on cluster lifecycle events using Envoy xDS.
  • Secrets Management via HashiCorp Vault: We frequently rotate Kubernetes service account tokens to enhance security and store them in Vault.
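A minimal sketch of how rotated tokens might be refreshed into the thread-safe map consulted by the authorization handler. The Vault client and refresh cadence are omitted, and the function name is an assumption; only the map name echoes the handler code above:

```go
package main

import (
	"fmt"
	"sync"
)

// ThreadSafeClusterTokenMap holds the current service account token per
// cluster; sync.Map lets the authz handler read concurrently while a
// background rotation loop writes.
var ThreadSafeClusterTokenMap sync.Map

// rotateToken is a stand-in for the Vault-backed rotation: fetch the
// freshly issued token and atomically replace the cached one.
func rotateToken(clusterName, freshToken string) {
	ThreadSafeClusterTokenMap.Store(clusterName, freshToken)
}

func main() {
	rotateToken("cluster1", "token-v1")
	rotateToken("cluster1", "token-v2") // a later rotation overwrites
	if tok, ok := ThreadSafeClusterTokenMap.Load("cluster1"); ok {
		fmt.Println(tok) // token-v2
	}
}
```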

Conclusion: Achieving Seamless Multi-Cloud Kubernetes Access

This solution represents our approach to overcoming the complexities of managing secure, consistent Kubernetes access across a multi-cloud environment. By centralizing OIDC authentication through a reverse proxy setup, we streamline both user experience and backend management, ensuring a scalable and secure infrastructure.

If any specific areas require clarification, or if you spot potential improvements, please don’t hesitate to reach out or comment. This discussion enriches our collective understanding and leads to more robust solutions.
