Two years of Backstage — behind the scenes of our developer portal: The first collaborative plugin

Nicolas Thibaut
Peaksys Engineering

TL;DR version

In the first article of our behind-the-scenes series, we explained why and how we chose Backstage as our developer portal. In this second article, we look at how we implemented our first Backstage feature, designed to test its ability to integrate with our platform and to see how it coped with a complex implementation. This test feature set the stage for those that followed.

The challenge

Backstage was up and running in production, providing a clear, interactive and customisable view of our information system (IS), as well as the ability to create .NET applications in just three clicks. We then added Python and Java.

We then wanted to test Backstage’s ability to integrate with our platforms to provide a richer experience, and to check the feasibility of “golden paths”: giving developers self-service access to, and management of, our on-premise platforms from a single portal. We also needed to assess the benefit and relevance of the ready-to-use plugins developed by the Backstage community.

The feature

We chose to be innovative by adding a new feature to Backstage that would provide significant added value. The team responsible for our Kafka platform had just finished migrating it to Terraform. Previously, creating a Kafka topic meant opening a ticket and took an average of five days to complete. Around 100 such tickets were opened every six months (roughly 200 a year), which at five days each adds up to 1,000 days of waiting per year. (We’ll talk about how we prioritise, and about Jobs To Be Done, in the next article.) With Terraform and the team’s CI/CD pipeline, creating a Kafka topic is now as simple as a commit. We therefore decided that Kafka would be our next Backstage integration, and since commit actions on Azure DevOps were already available, the implementation cost would be low.

Backstage workflow on Kafka creation

The philosophy

We have adopted the following philosophy when integrating new features into Backstage:

  • Features must be made available as viewable resources.
  • We will always implement actions through the scaffolder, ensuring traceability and consistency across all component creation and handling features.

The visualisation

To integrate Kafka topics effectively, we first tested the Kafka plugin. However, we quickly realised its limitations: it only supported a single cluster, making it impossible to view our topics across all our clusters.

We have the following environments:

Development, Acceptance Test, Pre-production, Production DC1 and Production DC2, each with its own cluster.

We therefore developed a processor similar to the one used for applications. However, instead of querying La Carto (our technical repository of the components on our platform), it queries the development cluster to retrieve the list of topics and converts them into Resource entities in the Backstage catalogue. Displaying the topics was quick and efficient.
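The core transformation of such a processor can be sketched as a pure function. This is only a sketch: the annotation key is an invented placeholder, and the entity shape follows the standard Backstage catalog format, not our actual internal code.

```typescript
// Sketch: turn a Kafka topic name into a Backstage catalog Resource entity.
// The real processor first queries the development cluster for topic names;
// the annotation key below is an illustrative assumption.

interface Entity {
  apiVersion: string;
  kind: string;
  metadata: { name: string; annotations?: Record<string, string> };
  spec: { type: string; owner: string };
}

export function topicToResourceEntity(topic: string): Entity {
  return {
    apiVersion: 'backstage.io/v1alpha1',
    kind: 'Resource',
    metadata: {
      // Catalog entity names are restricted to [a-zA-Z0-9_.-], so sanitise.
      name: topic.toLowerCase().replace(/[^a-z0-9_.-]/g, '-'),
      annotations: { 'example.com/kafka-topic': topic },
    },
    // The owner is filled in later from La Carto (see below).
    spec: { type: 'kafka-topic', owner: 'unknown' },
  };
}
```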

On the topics page, we added a custom React component that queries the backend to list the status of each topic across all environments. Once we had configured the addresses of our Kafka clusters, we could display a table summarising the status of topics in each environment: present or absent, and their partition settings where applicable.

Multi-cluster sourcing

Our La Carto repository also holds information on Kafka topics and their owners. Our processor reads this information and attaches the owning team and its associated system to each topic, so each team can easily filter and display “their” topics.

Kafka plugin on Backstage

This integration once again demonstrates Backstage’s flexibility when it comes to adding technical resources, with rapid, seamless development, even without using pre-existing plugins.

The creation

We held a productive workshop with the Kafka team and came up with an efficient workflow:

  • Fill in a form in Backstage to initiate the request.
  • Automatically send an auto-merge pull request to their Terraform repository.
  • Check the integrity of the pull request using the CI pipeline.
  • Run the auto-merge and then automatically deploy via the CD pipeline.
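In Backstage terms, this workflow maps onto a scaffolder template. The following is only a sketch of what such a template could look like: the `peaksys:*` action names, parameter names and titles are illustrative assumptions, not our real ones.

```yaml
# Sketch of a scaffolder template for the workflow above.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: kafka-topic-request
  title: Create a Kafka topic
spec:
  parameters:
    - title: Topic details
      required: [name, partitions]
      properties:
        name:
          type: string
          title: Topic name
        partitions:
          type: integer
          title: Partitions
  steps:
    - id: clone
      name: Clone the Terraform repository
      action: peaksys:git:pull        # custom action (illustrative name)
    - id: add-topic
      name: Add the topic definition
      action: peaksys:kafka:add-topic # custom action (illustrative name)
      input:
        name: ${{ parameters.name }}
        partitions: ${{ parameters.partitions }}
    - id: push
      name: Commit and push a branch
      action: peaksys:git:commit-push # custom action (illustrative name)
    - id: pr
      name: Open an auto-merge pull request
      action: peaksys:azure:create-pr # custom action (illustrative name)
```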

Creating the form in Backstage went without a hitch, as all the parameters we needed were readily available. To create the pull request, we used two custom Backstage actions that were already available: the git pull and the git commit/push. We were already familiar with these actions as we’d used them to create applications.

The next step was to create two custom actions:

  • Modify the repository to include our new file.
  • Create a pull request from the new push branch.

The Kafka team had provided all the necessary information, so mapping the form fields to a Terraform file was straightforward. Handling the YAML with Node.js was easy, and creating the pull request was trouble-free, thanks to the comprehensive documentation for the Azure DevOps APIs. We also had all our previous configurations to draw on, which sped up development.

Kafka creation form with cost $

Who owns this topic?

After a few adjustments and a final synchronisation with the Kafka team, we were able to create the first topics. However, one challenge remained unresolved: assigning topics automatically to their owners. Previously, this was done manually via Jira and La Carto. To solve this problem, the La Carto team developed a Terraform scanner to list the topics, adding a field with the system identifier as metadata.

Backstage fills in this field when the topic is created, and the user selects the system from a drop-down list when prompted. As Backstage already stores these systems (see previous article), listing them on a form costs nothing.
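Listing the systems already known to Backstage is what the scaffolder's EntityPicker field is for: it reads straight from the catalogue, so the drop-down needs no extra backend work. A sketch of such a form parameter (the field name and description are assumptions):

```yaml
# Sketch: a form field letting the user pick the owning system from the
# System entities already in the Backstage catalogue.
system:
  type: string
  title: Owning system
  description: The system this topic belongs to
  ui:field: EntityPicker
  ui:options:
    catalogFilter:
      kind: System
```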

The final workflow looks like this:

  • Fill in the form in Backstage.
  • Send an auto-merge pull request to the Terraform repository.
  • Deploy via Terraform.
  • Create the topic as a temporary resource in Backstage.
  • Scan Terraform using the mapping engine, to map the topic to its system.
  • Scan the topics on the cluster, replacing the temporary resource and fetching the owning system from the mapping.

This automation has simplified the process of assigning topics to their owners, so managing Kafka resources is now more efficient and transparent for all the teams involved.

The circle is complete

Today, our developers create their Kafka topics in three clicks using Backstage, across all our environments. The form also rejects complex cases that genuinely require a ticket (e.g. high volume, long retention, or compaction).
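The guard rail on the form can be sketched as a validation function. All thresholds below are invented for illustration; the real limits belong to the Kafka team.

```typescript
// Sketch: decide whether a request must go through a ticket instead of
// the self-service path. Thresholds are illustrative assumptions.
interface TopicRequest {
  partitions: number;
  retentionDays: number;
  compaction: boolean;
}

const MAX_PARTITIONS = 12;     // assumption
const MAX_RETENTION_DAYS = 30; // assumption

export function needsTicket(req: TopicRequest): string[] {
  const reasons: string[] = [];
  if (req.partitions > MAX_PARTITIONS) reasons.push('high volume: too many partitions');
  if (req.retentionDays > MAX_RETENTION_DAYS) reasons.push('long retention');
  if (req.compaction) reasons.push('compaction requested');
  return reasons; // an empty array means self-service is allowed
}
```

Returning the list of reasons, rather than a boolean, lets the form tell the user exactly why their request needs a ticket.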

Conclusion

Creating a custom plugin for Backstage turned out to be a straightforward task, especially with the support of the platform teams, who made the process much smoother for the development team. The system works best when the platform team and the Backstage team work closely together.

The main difficulty is harmonising the terminology used by the various teams, both to improve mutual understanding and to keep everyone focused on the original customer requirement. It is crucial not to overload the form with complicated options, to keep looking for ways to simplify it, and not to shy away from handling special cases outside the established framework. If we successfully automate the 80% of simple cases, we can continue to manage the more complex 20% through tickets.

This approach concentrates efforts on automating the most frequent and simplest requests, while maintaining a structured method for out-of-the-ordinary requests. This ensures efficient management and a better user experience, while retaining the flexibility needed to handle exceptions.



Head of Software Factory & DevEx at Peaksys, the tech subsidiary of Cdiscount, a French e-commerce site.