A Simplified Guide to Using Backstage to Manage Your Existing Terraform

Paul Pogonoski
Jan 10, 2024

I’ve just spent the last week trying to understand the open-source Backstage IDP (Internal Developer Portal) and its support for managing IaC, as opposed to engineering components.
This article documents my findings and, hopefully, shows you a simpler way to use it than the one the documentation, or other articles, suggest.

What is an IDP?

First, I will spend a small amount of time distinguishing an Internal Developer Portal from an Internal Developer Platform.

Instead of using my own definition, and adding to the already-too-large pile of definitions, I like the ones quoted here: https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas#what-is-an-internal-developer-portal:

TLDR, an IDP is the platform layer for the enterprise that is built and shipped as a product by a dedicated platform engineering team, in order to remove complexity (without removing context) and enable developer self-service. An IDP effectively enables true DevOps, true “you build it, you run it”, at an enterprise scale and for complex cloud native setups.

General ID Platform Design

Three things happened over the last year that contributed to the general nomenclature confusion in the platform engineering space:

  1. A bunch of new portal vendors emerged hoping to capitalize on the success of Backstage, by offering a closed-source, easier-to-get-started version of the open source portal.
  2. Platform engineering and IDPs went mainstream, capturing a lot of headlines, KubeCon booth graphics, etc.
  3. Gartner published the following definition (ironically trying to clarify things): “Internal developer portals serve as the interface through which developers can discover and access internal developer platform capabilities.” Source: “A Software Engineering Leader’s Guide to Improving Developer Experience” by Manjunath Bhat, Research VP, Software Engineering Practice at Gartner. https://www.gartner.com/document/4017457

All this led to most new portal vendors seizing the opportunity (understandably) to both ride the IDP hype and leverage the Gartner definition. They started pushing the term IDP as an abbreviation for Internal Developer Portal instead of Internal Developer Platform. This is clearly far from ideal, especially considering portals are an important part of the overall platform engineering space, being one of the main interfaces INTO an IDP (see Gartner definition).

The reference architectures presented by McKinsey reconfirmed this, with service catalogs and portals sitting in the Developer Control Plane.

Typical features of portals are:

  • Service catalog functionality, including metadata on e.g. service ownership
  • Scaffolding and service templating functionality
  • Scorecards (app health, security status, etc.)

Main portal players: Backstage OSS but also commercial alternatives (Port, Cortex, Compass)

Again, it’s important to highlight that portals can be a useful tool in your platform team’s toolbox and provide a great interface for your developers into your IDP. But they are an interface, they are not your IDP.

Why did I want to use the Backstage ID Portal?

So what was my goal? It was to see how far I could use the Backstage ID Portal to give me an MVP ID Platform.

TL;DR: If all you want to do is integrate existing IaC that manages a Kubernetes cluster and the deployment of microservices into it, then Backstage comes comfortably close to doing so, because of the plugins already available, once you have read all of the documentation, assimilated it, and worked out what has been superseded. Which is not an easy task.
However, if you want to completely integrate, and support as a first-class citizen, IaC that covers your complete cloud environments, then what you get are portal screens that let you enter CI and CD pipeline parameters, initiate those pipelines, and see the results in a different portal screen.
In other words, not much more than you get with your existing CI/CD toolset.
There is a real need for Backstage to extend its data model with entities like Environments, IaC Modules, and IaC Solutions, where IaC Solutions use IaC Modules and are deployed into Environments, and IaC Solutions are linked to a deployed Service (as defined by Backstage).
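To make that last point concrete, here is a purely hypothetical sketch of what such entities might look like as catalog YAML. Neither the Environment nor the IaCSolution kind exists in Backstage’s current model; supporting them would mean extending the catalog with custom kinds and processors, and the names below are mine, not Backstage’s.

apiVersion: backstage.io/v1alpha1
kind: Environment            # hypothetical kind, not in Backstage today
metadata:
  name: test
spec:
  owner: infrastructure
---
apiVersion: backstage.io/v1alpha1
kind: IaCSolution            # hypothetical kind that composes IaC Modules
metadata:
  name: ecs-cluster
spec:
  owner: infrastructure
  modules:                   # the IaC Modules this solution uses
    - iac-module:vpc
    - iac-module:ecs-cluster
  deployedTo: environment:test
  provisions:                # the deployed Service this IaC backs
    - component:my-service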

What did I create and what did I learn?

First Steps

So, after seeing what Google U had for me, and finding articles like https://tekanaid.com/posts/backstage-software-templates and https://medium.com/@_gdantas/backstage-and-terraform-a-powerful-combination-for-ops-wonderful-for-devs-c04ebce849f0, I started to think:

  1. you could only define existing Terraform by creating a new, separate GitHub repo that took the original code and housed it with a Backstage catalog
  2. Scaffolder templates have to live in files called template.yaml, kept separate from the newly created GitHub repo, which makes managing multiple templates difficult
  3. you couldn’t re-use such templates

Second Steps

Starting from this disappointing knowledge, I began to look at the documentation, originally just reading the sections that I thought would describe more, and better, ways of integrating with IaC. While getting quite confused by the amount of assumed knowledge each section relied on (more about this very soon), I came to understand that:

  1. you didn’t have to create a new GitHub repo to integrate your IaC
  2. you could use the form at http://<backstage host>/catalog-import and input the existing GitHub repo URL
  3. this generates a branch called backstage-integration and a PR to merge the catalog-info.yaml into your existing repo
  4. once merged, the repo shows up in Backstage as below (if you chose “Infrastructure” as the owning group; note also that the CI/CD, GitHub Actions, API, and Dependencies menu items are not there)
The newly defined IaC component

As you can see, the IaC isn’t considered a first-class citizen via this method, and gets a type of “unknown”.

Resetting

What I learnt from my attempts at this point was that I really didn’t understand how Backstage was meant to work, and only had a piecemeal view of using the product. So I went back to the beginning and read the documentation from start to finish, at least up to the point where it turned to significantly customising Backstage.

Unfortunately, that is how the documentation is structured (remember me mentioning assumed knowledge?). The Spotify engineers require users of the product to go on a journey of understanding what Backstage is, its design, and its configuration points. There are no summaries, and the tutorials/examples are for advanced use, not standard use. Some of it is out of date :-(. Also, a lot of the useful information is easily missed unless you read carefully.

Anyway, what I learnt was that you can repeatably integrate your IaC repos without a great deal of effort by:

  1. re-using and customising a catalog-info.yaml file and adding it to your existing repo:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ecs-cluster-terraform
  annotations:
    github.com/project-slug: pogo61/ecs-cluster-terraform
spec:
  type: service
  lifecycle: development
  owner: infrastructure
Notice that, compared with the default file generated by http://<backstage host>/catalog-import, this has “type: service” and “lifecycle: development”. Also ensure that you change the “github.com/project-slug: <owner>/<repo name>” annotation to point at the GitHub repo you are integrating.

2. re-using and customising (to work with your existing CD pipeline) a template.yaml file and adding it to your existing repo:

# Define the API version and kind of resource
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
# Template metadata. Here's your intro to what this template does.
metadata:
  name: ecs-cluster
  title: Deploy an ECS Cluster
  description: Deploy an ECS Cluster using Terraform
  annotations:
    github.com/project-slug: pogo61/ecs-cluster-terraform
# The specification for how the template behaves
spec:
  # Who owns this template. Generally, it could be a team or individual
  owner: user:guest
  # The type of service this template deploys
  type: service
  # User-input parameters. Makes your templates dynamic!
  parameters:
    # Ask the user to input some basic app details
    - title: Fill in some steps
      required:
        - environment
      properties:
        environment:
          title: Environment to deploy into
          type: string
          description: either test, production, staging, management, or sandbox
          enum:
            - test
            - production
            - staging
            - management
            - sandbox
          ui:autofocus: true # This field gets auto-focused in UI
          ui:options:
            rows: 5 # Number of rows in the input area
    # Ask the user where they want to store the code
    - title: Choose a Repo location
      required:
        - repoUrl
      properties:
        repoUrl:
          title: Repository Location
          type: string
          ui:field: RepoUrlPicker # A special UI component for selecting repo URLs
          ui:options:
            allowedHosts:
              - github.com # Allowed hosts for repository
    # Parameters for setting up the ECS cluster
    - title: Basic ECS Cluster Configuration
      required:
        - region
        - action
      properties:
        region:
          title: AWS Region
          type: string
          description: The AWS region where the cluster will be deployed
          enum:
            - eu-west-2
            - eu-west-1
        action:
          title: Action
          type: string
          description: Action to perform (plan/apply/destroy)
          enum:
            - plan
            - apply
            - destroy
  # Steps that the template will execute in order
  steps:
    # Fetch the base template
    - id: fetch-base
      name: Fetch Base
      action: fetch:template
      input:
        url: ./content # Where the base content is stored
        values:
          name: ecs-cluster

    # Trigger a GitHub Action to set up the ECS cluster
    - id: github-action
      name: Trigger GitHub Action
      action: github:actions:dispatch
      input:
        workflowId: deploy.yml # GitHub Action workflow ID
        repoUrl: ${{ parameters.repoUrl }}
        branchOrTagName: 'main' # The branch to run this action on
        workflowInputs:
          environment: ${{ parameters.environment }}
          awsRegion: ${{ parameters.region }}
          action: ${{ parameters.action }}

3. Defining both of these files in the catalog.locations section of the <backstage home>/app-config.yaml file:

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location, Template, Domain]
  locations:
    # ecs terraform code catalog
    - type: url
      target: https://github.com/pogo61/ecs-cluster-terraform/blob/main/catalog-info.yaml
      rules:
        - allow: [User, Group, Component]

    # ecs terraform code template
    - type: url
      target: https://github.com/pogo61/ecs-cluster-terraform/blob/main/template.yaml
      rules:
        - allow: [Template]

4. The GitHub Actions workflow file (deploy.yml, referenced by the template above) is:

name: 'Deploy ECS Cluster'

on:
  workflow_dispatch:
    # Define inputs that are required for the manual trigger
    inputs:
      environment:
        description: 'Environment to deploy the ECS cluster into' # What's this input for?
        required: true # Is it optional or required?
      awsRegion:
        description: 'AWS Region for the cluster'
        required: true
      action:
        description: 'Action to perform (plan/apply/destroy)'
        required: true

jobs:
  plan:
    name: run Terraform Plan
    runs-on: ubuntu-latest

    # Only run this job if the action input is "plan"
    if: ${{ github.event.inputs.action == 'plan' }}

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials from Test account
        id: creds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-skip-session-tagging: true
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ inputs.awsRegion }}
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-external-id: ${{ secrets.AWS_ROLE_EXTERNAL_ID }}
          role-duration-seconds: 1200
          role-session-name: ecs_deploy

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform fmt -check

      - name: Plan
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform plan -var="environment=${{ inputs.environment }}"

  apply:
    name: run Terraform Apply
    runs-on: ubuntu-latest

    # Only run this job if the action input is "apply"
    if: ${{ github.event.inputs.action == 'apply' }}

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials from Test account
        id: creds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-skip-session-tagging: true
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ inputs.awsRegion }}
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-external-id: ${{ secrets.AWS_ROLE_EXTERNAL_ID }}
          role-duration-seconds: 1200
          role-session-name: ecs_deploy

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform fmt -check

      - name: Plan
        working-directory: ./deployment/${{ inputs.environment }}
        run: terraform plan -var="environment=${{ inputs.environment }}"

      - name: Apply
        working-directory: ./deployment/${{ inputs.environment }}
        # The workflow is triggered via workflow_dispatch, so only guard on the branch
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve -input=false

5. Restart Backstage so that it picks up the changes to app-config.yaml.

6. When defining authentication integration with GitHub, as per https://backstage.io/docs/auth/github/provider, please note that the first step, https://backstage.io/docs/auth/github/provider#create-an-oauth-app-on-github, is only applicable if your GitHub plan supports organisations. This is because no permissions are defined when you create a GitHub OAuth App; it relies on the permissions set in the organisation.
So, if you are using a personal GitHub account to test Backstage, make sure you create a GitHub App from your account’s https://github.com/settings/apps menu, not an “OAuth App”. Creating a GitHub App will allow you to define the requisite permissions for Backstage to work with GitHub.
Oh, and if you are wondering what the permissions need to be, they are defined at https://backstage.io/docs/integrations/github/locations#token-scopes (note that this is in the integrations section, under the definition of a PAT, which you also need to create).
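For orientation, the relevant pieces of app-config.yaml end up looking roughly like the sketch below. The environment variable names follow the convention used in the Backstage docs; check the linked pages for the exact shape your Backstage version expects (newer releases also require a sign-in resolver under the auth provider).

# GitHub integration using the PAT with the scopes linked above
integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

# GitHub authentication using the client ID/secret from your GitHub App
auth:
  environment: development
  providers:
    github:
      development:
        clientId: ${AUTH_GITHUB_CLIENT_ID}
        clientSecret: ${AUTH_GITHUB_CLIENT_SECRET}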

Summary

In summary:
1. Even though these steps give you a portal page that lets you enter the pipeline parameters and kick off the pipeline, they don’t give you much more than you already have.

2. The Component created is not understood as an IaC component, nor does it identify the modules that make it up.

3. This Component is not linked to any service that may be deployed into it.

4. The Backstage model needs to be extended to identify and integrate these concepts before it can be used as a useful Internal Developer Platform.
