Running Cloud SQL Auth Proxy on GKE: Deployment vs Jobs

Pawan Phalak
Google Cloud - Community
3 min read · Jun 24, 2023

The Cloud SQL Auth Proxy is a Cloud SQL connector that can be used to connect securely to a Cloud SQL instance from a GKE cluster. More details about how the Cloud SQL Auth Proxy works can be found here.

A GKE cluster can run application workloads as both Kubernetes Deployments and Jobs, and either kind of workload may need to establish a secure connection to Cloud SQL using the Cloud SQL Auth Proxy.

It is recommended to run the Cloud SQL Auth Proxy in a sidecar pattern, primarily to avoid exposing the SQL traffic to other workloads in the cluster. More details on running the Cloud SQL Auth Proxy in the sidecar pattern can be found here.

The Kubernetes Jobs or Deployments should be configured with Workload Identity so that they can connect to Cloud SQL with an IAM user acting as the database user. More details on configuring Workload Identity for the service account attached to the application workload (Deployment/Job) can be found here; a minimal sketch follows.
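As a rough sketch (the service account names and namespace below are placeholders, not taken from this article), the Kubernetes service account used by the workload is annotated with the Google service account it should impersonate:

# Kubernetes service account used by the Deployment/Job.
# The annotation maps it to a Google service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@<PROJECT_ID>.iam.gserviceaccount.com

The Google service account must then allow the Kubernetes service account to impersonate it:

# Bind the workloadIdentityUser role to the Kubernetes service account.
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@<PROJECT_ID>.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<PROJECT_ID>.svc.id.goog[default/app-ksa]"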

Cloud SQL Auth proxy sidecar configurations for Kubernetes Deployment

Following is a sample sidecar container configuration to run the Cloud SQL Auth Proxy with a Kubernetes Deployment:

- name: cloud-sql-proxy
  # It is recommended to use the latest version of the Cloud SQL Auth Proxy
  # Make sure to update on a regular schedule!
  image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.1.0
  args:
    # If connecting from a VPC-native GKE cluster, you can use the
    # following flag to have the proxy connect over private IP
    # - "--private-ip"

    # Enable structured logging with LogEntry format:
    - "--structured-logs"

    # Replace DB_PORT with the port the proxy should listen on
    - "--port=<DB_PORT>"
    - "<INSTANCE_CONNECTION_NAME>"

  securityContext:
    # The default Cloud SQL Auth Proxy image runs as the
    # "nonroot" user and group (uid: 65532) by default.
    runAsNonRoot: true
  # You should use resource requests/limits as a best practice to prevent
  # pods from consuming too many resources and affecting the execution of
  # other pods. You should adjust the following values based on what your
  # application needs. For details, see
  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  resources:
    requests:
      # The proxy's memory use scales linearly with the number of active
      # connections. Fewer open connections will use less memory. Adjust
      # this value based on your application's requirements.
      memory: "2Gi"
      # The proxy's CPU use scales linearly with the amount of IO between
      # the database and the application. Adjust this value based on your
      # application's requirements.
      cpu: "1"

Cloud SQL Auth proxy sidecar configurations for Kubernetes Job

A Kubernetes Job requires additional configuration to work with the Cloud SQL Auth Proxy in the sidecar pattern, because the sidecar must also exit once the Job's main container completes. We can add a custom configuration to terminate the sidecar when the main container exits: an emptyDir volume is mounted into both containers, the main container creates a file in it when it finishes, and the sidecar watches for that file and exits once it appears. Add the following configuration to add the emptyDir volume to the Job:

volumes:
  - name: tmp-pod
    emptyDir: {}

The following configuration should be added to the main container so that it writes a file to the mounted volume when it completes:

command: ["/bin/sh", "-c"]
args:
  - |
    # Register the trap before running the main command so the file
    # is created on exit even if the command fails.
    trap "touch /tmp/pod/terminated" EXIT
    <MAIN_CONTAINER_ARGS>
volumeMounts:
  - mountPath: /tmp/pod
    name: tmp-pod

The following configuration is for the Cloud SQL Auth Proxy sidecar, which watches the mounted volume path for the file written when the main container exits. The sidecar container shuts down the proxy after detecting the file, and the Kubernetes Job completes successfully with both the main and sidecar containers exited:

- name: cloud-sql-proxy
  # This example uses the legacy (v1) Cloud SQL Auth Proxy image.
  image: "gcr.io/cloudsql-docker/gce-proxy:1.33.8-buster"
  command: ["/bin/sh", "-c"]
  args:
    - |
      # Start the proxy in the background and record its PID.
      ./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:5432 -structured_logs -enable_iam_login & CHILD_PID=$!
      # Watch for the file written by the main container and kill
      # the proxy once it appears.
      (while true; do if [ -f "/tmp/pod/terminated" ]; then kill $CHILD_PID; echo "Killed $CHILD_PID because the main container terminated."; fi; sleep 1; done) &
      wait $CHILD_PID
      # Exit successfully if the main container finished.
      if [ -f "/tmp/pod/terminated" ]; then echo "Job completed. Exiting..."; exit 0; fi
  volumeMounts:
    - mountPath: /tmp/pod
      name: tmp-pod
      readOnly: true

The above configuration allows connecting to Cloud SQL securely from both Kubernetes Jobs and Deployments using the Cloud SQL Auth Proxy in a sidecar pattern.
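For reference, here is how the pieces above fit together in a single Job manifest. This is a minimal sketch, not a complete production configuration; the Job name, service account, and main container image are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: app-job                       # placeholder name
spec:
  template:
    spec:
      serviceAccountName: app-ksa     # KSA configured for Workload Identity
      restartPolicy: Never
      containers:
        - name: main
          image: <MAIN_CONTAINER_IMAGE>
          command: ["/bin/sh", "-c"]
          args:
            - |
              trap "touch /tmp/pod/terminated" EXIT
              <MAIN_CONTAINER_ARGS>
          volumeMounts:
            - mountPath: /tmp/pod
              name: tmp-pod
        - name: cloud-sql-proxy
          image: "gcr.io/cloudsql-docker/gce-proxy:1.33.8-buster"
          command: ["/bin/sh", "-c"]
          args:
            - |
              ./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:5432 -structured_logs -enable_iam_login & CHILD_PID=$!
              (while true; do if [ -f "/tmp/pod/terminated" ]; then kill $CHILD_PID; fi; sleep 1; done) &
              wait $CHILD_PID
              if [ -f "/tmp/pod/terminated" ]; then echo "Job completed. Exiting..."; exit 0; fi
          volumeMounts:
            - mountPath: /tmp/pod
              name: tmp-pod
              readOnly: true
      volumes:
        - name: tmp-pod
          emptyDir: {}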
