Backstage on GKE, Cloud Run, and Cloud SQL
What is Backstage and why do you need an IDP?
Backstage is an Internal Developer Platform (IDP). It provides developers with a single interface for accessing and managing all of the self-service tools and technologies they need to build and deploy software. An IDP can help unify and integrate all your Golden Paths.
Backstage was created at Spotify 🎵, and the project has recently joined the CNCF Incubator.
Read how Spotify achieved a voluntary 99% internal platform adoption rate.
Backstage is typically managed by a platform team. This article aims to guide a platform engineer through installing Backstage on Google Cloud.
Step 1. Download Backstage and create your container image
A) On your local machine, install the prerequisites such as Node.js, yarn, docker, etc. See the full list here: https://backstage.io/docs/getting-started/
B) Then, you should be able to install and run the app with the following commands:
npx @backstage/create-app@latest
cd backstage
yarn dev
C) Before creating our container image, let’s make sure to edit the files app-config.yaml and app-config.production.yaml.
Use the following snippet for the database configuration (it goes under the top-level backend: key):
database:
  client: pg
  connection:
    host: localhost
    port: 5432
    user: ${POSTGRES_USER}
    password: ${POSTGRES_PASSWORD}
D) Build and push your container image. This will upload your image to Artifact Registry, so make sure that’s enabled in your project. Also, make sure to edit your ~/.docker/config.json as instructed here.
Replace <gcp-project-id> with your GCP Project ID in the following commands:
yarn build-image --tag backstage:1.0.0
docker tag backstage:1.0.0 gcr.io/<gcp-project-id>/backstage:1.0.0
docker push gcr.io/<gcp-project-id>/backstage:1.0.0
One problem I encountered here was that the Dockerfile had the wrong Node.js version. The Node.js version installed on my local machine was 18, so I edited the file backstage/packages/backend/Dockerfile and made sure it imports FROM node:18.
If you encounter problems, there might be updated instructions here: https://backstage.io/docs/deployment/docker
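A quick sketch of that version-alignment fix — the v18.17.1 value below is an example of `node --version` output; substitute your own:

```shell
# Derive the major version from `node --version` output and print the
# matching base-image line for packages/backend/Dockerfile.
# v18.17.1 is an example value -- substitute your own.
v=v18.17.1
major=${v#v}           # strip the leading "v" -> 18.17.1
major=${major%%.*}     # keep only the major   -> 18
echo "FROM node:${major}"
```

Whatever this prints is what the first FROM line in the Dockerfile should say.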
Our container image is now uploaded in Artifact Registry.
Step 2. Setting up Cloud SQL
Navigate to https://console.cloud.google.com and set up a new Cloud SQL instance. Backstage works with PostgreSQL, so choose that one. For my test, I used PostgreSQL 15, Enterprise edition, us-east1 as the region, single zone, 2 vCPU. But you are of course free to tweak these depending on your needs.
Create a password for your postgres user that you will remember. In the following steps we will use the Instance ID pg-backstage for this database.
Step 3. Creating a Service Account
Let’s now create a service account that will be used by our app to access this database and also download our container image from Artifact Registry.
Service account ID: sa-backstage@<gcp-project-id>.iam.gserviceaccount.com
Roles assigned:
- Artifact Registry Reader
- Cloud SQL Editor
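If you prefer the CLI over the console, the same service account can be created from Cloud Shell roughly like this (a sketch; $PROJECT_ID stands in for your GCP Project ID):

```shell
gcloud iam service-accounts create sa-backstage \
  --display-name "Backstage"
# Grant the two roles listed above
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:sa-backstage@$PROJECT_ID.iam.gserviceaccount.com" \
  --role roles/artifactregistry.reader
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:sa-backstage@$PROJECT_ID.iam.gserviceaccount.com" \
  --role roles/cloudsql.editor
```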
Step 4. Setting up GKE (Google Kubernetes Engine)
A) Navigate to https://console.cloud.google.com and create a GKE cluster. You can decide to use Autopilot or not; I didn’t. You can name your cluster cluster-backstage, a zonal cluster in us-east1-c, with 3 nodes of the e2-standard-2 machine type. You are of course free to tweak these depending on your needs, just don’t use the shared-core machine types or you will run into errors when deploying Backstage.
B) Create a namespace for our workload. From Cloud Shell, configure kubectl, then you can type:
kubectl create namespace backstage
C) We will configure Backstage to use Workload Identity. Let’s first create a Kubernetes service account:
kubectl create serviceaccount ksa-backstage --namespace backstage
Then, enable the IAM binding between the service account created in Step 3 above and your Kubernetes service account. Replace <gcp-project-id> with your GCP Project ID in the following command:
gcloud iam service-accounts add-iam-policy-binding \
sa-backstage@<gcp-project-id>.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:<gcp-project-id>.svc.id.goog[backstage/ksa-backstage]"
Finally, annotate the Kubernetes service account with the email address of the IAM service account. Replace <gcp-project-id> with your GCP Project ID in the following command:
kubectl annotate serviceaccount ksa-backstage \
--namespace backstage \
iam.gke.io/gcp-service-account=sa-backstage@<gcp-project-id>.iam.gserviceaccount.com
You can check this page in case some commands don’t work: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
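To sanity-check the Workload Identity setup, you can run a throwaway pod under ksa-backstage and ask gcloud who it is authenticated as (a sketch along the lines of the GKE docs; the google/cloud-sdk:slim image is an assumption):

```shell
# Should report sa-backstage@<gcp-project-id>.iam.gserviceaccount.com
# as the active account if the binding and annotation are correct
kubectl run wi-test --rm -it --restart=Never \
  --namespace backstage \
  --overrides='{"spec":{"serviceAccountName":"ksa-backstage"}}' \
  --image google/cloud-sdk:slim \
  -- gcloud auth list
```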
D) If we want Backstage to be able to connect to our database, we need to create a Kubernetes secret.
In Cloud Shell, create a new file postgres-secrets.yaml with the following content. Replace <user> and <password> with the base64-encoded values of the user and password used in Step 2 above.
# kubernetes/postgres-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: backstage
type: Opaque
data:
  POSTGRES_USER: <user>
  POSTGRES_PASSWORD: <password>
To get the base64-encoded value:
echo -n "postgres" | base64
cG9zdGdyZXM=
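It is worth double-checking that the value decodes back to exactly what you meant to store — a stray trailing newline from echo without -n is a classic failure mode:

```shell
# Decode the value you are about to paste into the secret;
# it should print exactly what you encoded, with nothing extra
printf '%s' 'cG9zdGdyZXM=' | base64 --decode
# prints: postgres
```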
Apply the yaml file to the Kubernetes cluster:
kubectl apply -f postgres-secrets.yaml --namespace backstage
Step 5. Deployment in GKE
Now that our GKE cluster is configured, we are ready to deploy the Backstage workload.
In Cloud Shell, create the file backstage-deployment.yaml and copy/paste the block of text below. Don’t forget to replace <gcp-project-id> and <cloud-sql-region> in the text. In Step 2 above, we created our Cloud SQL database in us-east1, so you can use that as <cloud-sql-region> if you picked the same.
# kubernetes/backstage-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      serviceAccountName: ksa-backstage
      containers:
        - name: backstage
          image: gcr.io/<gcp-project-id>/backstage:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 7007
          envFrom:
            - secretRef:
                name: postgres-secrets
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.6.1
          args:
            - "--structured-logs"
            - "--port=5432"
            - "<gcp-project-id>:<cloud-sql-region>:pg-backstage"
          securityContext:
            runAsNonRoot: true
          resources:
            requests:
              memory: "2Gi"
              cpu: "1"
As you can see from the deployment file above:
- we are running the Cloud SQL Auth Proxy in a sidecar pattern. You can read more about this here.
- we set the service account ksa-backstage that we created in Step 4 above.
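If you would rather not edit the placeholders by hand, a sed one-liner can fill them in. Here it is sketched on a single sample line (my-project and us-east1 are example values; substitute your own):

```shell
PROJECT_ID=my-project
REGION=us-east1
# Substitute both placeholders on one sample line from the manifest
echo '- "<gcp-project-id>:<cloud-sql-region>:pg-backstage"' \
  | sed -e "s/<gcp-project-id>/$PROJECT_ID/g" \
        -e "s/<cloud-sql-region>/$REGION/g"
# prints: - "my-project:us-east1:pg-backstage"
```

Running the same sed with -i over backstage-deployment.yaml replaces every occurrence in one go.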
Then, we are ready to apply the deployment:
kubectl apply -f backstage-deployment.yaml
Finally, you can create the Backstage Kubernetes service and even an Ingress resource (Load Balancer) if you want to provide internal and/or external access to this Backstage installation.
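For reference, a minimal sketch of such a Service plus Ingress — the names and the http port match the Deployment above, but TLS, static IPs, and GKE-specific annotations are left out:

```yaml
# kubernetes/backstage-service.yaml -- a minimal sketch
apiVersion: v1
kind: Service
metadata:
  name: backstage
  namespace: backstage
spec:
  selector:
    app: backstage
  ports:
    - port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backstage
  namespace: backstage
spec:
  defaultBackend:
    service:
      name: backstage
      port:
        number: 80
```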
Step 6. (optional) Using Cloud Run instead of GKE
GKE provides a good level of abstraction if you still want to configure and tweak your own Kubernetes cluster, for example by changing machine types. But if instead you are looking for a more serverless solution, you should try running Backstage on Cloud Run.
Before we get started with Cloud Run, let’s first save our Cloud SQL user and password created in Step 2 above in Secret Manager, another Google Cloud managed service. It has the advantage of being well integrated with Cloud Run, so we can easily inject them as environment variables later on.
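Creating those secrets can be done from Cloud Shell along these lines (a sketch; the secret names POSTGRES_USER and POSTGRES_PASSWORD are my choice, and sa-backstage needs read access to them):

```shell
# Store the database credentials in Secret Manager
printf '%s' 'postgres' | gcloud secrets create POSTGRES_USER --data-file=-
printf '%s' 'your-db-password' | gcloud secrets create POSTGRES_PASSWORD --data-file=-
# Let the Backstage service account read both secrets
for s in POSTGRES_USER POSTGRES_PASSWORD; do
  gcloud secrets add-iam-policy-binding "$s" \
    --member "serviceAccount:sa-backstage@$PROJECT_ID.iam.gserviceaccount.com" \
    --role roles/secretmanager.secretAccessor
done
```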
Let’s now navigate to Cloud Run in the Google Cloud console, and create a new backstage service. Point to the gcr.io container image uploaded in Step 1 above.
While creating the Cloud Run service, make sure to select the region (here we took us-east1) and change a few other configuration items:
- Container port: 7007
- Cloud SQL connection: select your PostgreSQL instance in the list
- Security & service account: select sa-backstage
- Inject POSTGRES_USER and POSTGRES_PASSWORD as environment variables from Secret Manager
Once the Cloud Run service is created, we need to do one last thing. Cloud Run now supports sidecar deployments, and because in Step 1 above we pointed our Backstage config app-config.yaml to localhost for the database, we will deploy the Cloud SQL Auth Proxy in a sidecar pattern. This is exactly the same as what we did on GKE in Step 5 above, but this time in Cloud Run.
In Cloud Shell, create a new file backstage-cloudrun.yaml and copy/paste the block of text below. Don’t forget to edit <gcp-project-id> and <cloud-sql-region> with your values.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    run.googleapis.com/launch-stage: BETA
  name: backstage
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/execution-environment: gen1 # or gen2
    spec:
      containers:
        - image: gcr.io/<gcp-project-id>/backstage:1.0.0
          ports:
            - containerPort: 7007
        - image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.6.1
          args:
            - "--structured-logs"
            - "--port=5432"
            - "<gcp-project-id>:<cloud-sql-region>:pg-backstage"
Still in Cloud Shell, update the Cloud Run service using this command:
gcloud run services replace backstage-cloudrun.yaml
And voila! Our Cloud Run service is running. You should now be able to access Backstage via the Cloud Run URL provided.
Conclusion
Installing and configuring Backstage in 2023 can be challenging. The project is still in its early days (incubating). There are many configuration options, and often changing one setting requires a full rebuild of the container image. The project is trying to address some of these challenges by making deployment easier with a new deploy command (in alpha), but it only supports AWS so far.
I decided to write this guide to help you install Backstage on Google Cloud! Using managed services such as Cloud Run and Cloud SQL will remove some of the tooling and infrastructure management burden. For example, Cloud Run automatically scales up or down depending on traffic.
If your organization is looking to embrace Platform Engineering, then Backstage is a must-have. It is a powerful tool that can help organizations improve developer productivity and collaboration.