How backend-channeling improved 24SevenOffice’s development and QA processes.

Anders Bærø
24SevenOffice Tech Blog
7 min read · Mar 29, 2023

by Anders Bærø, Team Rocket.

In the last year, 24SevenOffice has started moving towards AWS as our main development platform. During this period, Team Rocket started our “VoucherWorkFlow” project, and we naturally began planning how we could utilize AWS for it. Using the expertise of our AWS platform team, “Team AWSome”, we were able to build this project in AWS from scratch.

After some initial growing pains adjusting to the “AWS Way”, we experienced both the quality-of-life improvements that AWS affords us and some minor bumps in the road.

The Problem

With three backend developers on our team and only one environment in AWS dedicated as our “dev” environment, we constantly found ourselves playing “king of the hill”, with each of our commits/deployments fighting for precedence on the dev environment.

Obviously, this was quite an annoyance for developers working on new features or trying to test an entire flow on the deployed version, but it also negatively affected the QA team.

With limited knowledge of how to check deployment and commit history, they struggled to identify which features/bugs/changes were live in the dev environment. We all know the frustration of testing the same functionality twice, with different outcomes, seemingly without any changes in the code.

“Insanity is doing the same thing over and over and e̶x̶p̶e̶c̶t̶i̶n̶g̶ experiencing different results.” (Slightly altered) — Albert Einstein

Thus came the need for a way to ensure that both devs and QA could target different versions of the dev environment based on which feature they were working on or testing.

The Solution

What is backend-channeling?

Backend-channeling is the concept of allowing access to multiple versions of the same application within the same environment, simultaneously. Let’s say you have an API hosted at https://medium-project.dev.api.io. You have three backend developers all working on different features/stories within the same project. How can you allow QA/devs to test said functionality at the same time?

More traditional approaches would require quite technical solutions; in AWS, however, the solution was surprisingly simple. When developing with AWS, you deploy what’s called a “stack”:

… a single unit … that holds the AWS resources in the form of constructs and can be used for deployment …
-https://towardsthecloud.com/

This stack lives in your AWS account and is usually named after your project; in this case, our stack would normally be named “medium-project”.

When deploying with the same name, you just deploy the difference between what’s already deployed and what you are deploying. If you were to give the stack a different name, you would create a whole new stack altogether.

This means that if we could somehow dynamically allocate “unique” names to the stack, we would be able to access the resulting stacks separately, as they would be hosted/reachable at different URLs.
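As a minimal sketch of that idea (using the illustrative “medium-project” name from this post rather than our actual code), the CDK entry point is where the stack name is decided:

```csharp
using Amazon.CDK;

// Minimal CDK app: the stack id passed here determines which stack a deploy targets.
var app = new App();

// Re-deploying with the same id only applies the diff to the existing stack;
// a different id creates a completely separate stack with its own resources and URL.
new Stack(app, "medium-project", new StackProps());

app.Synth();
```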

Below, I will explain how we accomplished this. In this case, the examples will be given using AWS CDK and .NET/C#.

Implementation

So, how do we allocate unique names to the stack, when commits that require their own stack are pushed?

We thought that using the branch name would be a suitable solution for this. Let’s say you have a task/story in Jira named “ROC-123”. In our development flow, this means we would create a git branch named “feature/ROC-123”. If we could extract the branch name, and ascertain that the branch contains a reference to a Jira task/story, we would then be certain that this branch requires its own stack.

Luckily, GitHub Actions provides a set of default environment variables that can be used during your CI/CD jobs. One of these is “GITHUB_REF_NAME”, which contains the name of the branch.

We can then use said branch name to determine whether or not we should create a separate stack for this branch.

When we create our stack, we perform some regex matching on the branch name to determine whether this specific branch should have its own stack. In our case, the determining factor is as follows: if the branch name contains “ROC-xxx”, where xxx represents our Jira task numbering system, it will generate its own stack name. In other words, if the branch you are working on is connected to a Jira story/task, it will have its own stack.
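As a sketch of that logic (the helper class, method name, and “medium-project” base name are illustrative assumptions, not our actual code), the stack name could be derived like this:

```csharp
using System;
using System.Text.RegularExpressions;

public static class StackNaming
{
    // Derives the stack name from the branch that is being deployed.
    public static string ResolveStackName(string baseName)
    {
        // GITHUB_REF_NAME is set automatically by GitHub Actions,
        // e.g. "feature/ROC-123" for one of our feature branches.
        var branch = Environment.GetEnvironmentVariable("GITHUB_REF_NAME") ?? string.Empty;

        // Only branches tied to a Jira task ("ROC-<number>") get their own stack.
        var match = Regex.Match(branch, @"ROC-\d+", RegexOptions.IgnoreCase);

        return match.Success
            ? $"{baseName}-{match.Value.ToLowerInvariant()}" // e.g. "medium-project-roc-123"
            : baseName;                                      // other branches reuse the shared dev stack
    }
}
```

The resulting name is then passed as the stack id when the CDK app is created, as in the earlier sketch.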

Since our “normal” dev environment is reachable at medium-project.dev.api.io, our new “feature-stack” (or backend-channel) will be reachable at roc-123.medium-project.dev.api.io.

That's the basic gist of how we have implemented this functionality.

Now, what exactly are the pros and cons of this?

QA Processes

As previously mentioned, one of the biggest benefits this solution provides is that each developer can focus entirely on their own task, without the fear of another developer’s commit somehow altering the behavior of their environment. When you have your own stack, no external changes can disrupt your workday.

But if we look a bit further than what the developers gain from this, the first benefit we noticed was that the QA team could easily distinguish which environment (and by extension which feature/task) they were testing. Whether they test the API via Postman or from our GUI, they can choose which environment/channel they want to target.

Testing with Postman is relatively easy: all you have to do is prepend the environment defined as “roc-123” to any URL you test against. Testing using the GUI is a slightly more difficult task. To do that, we had to implement the concept of backend-channeling in the frontend. I will not go into the depth of the implementation itself, and will rather focus on how we use it.

To define which backend-channel you want to target, you simply append “env=roc-xxx” as a query param, like so:
medium-project.io/module?env=roc-123.

We have previously implemented frontend-channels, which work in a similar way, but I will not go into details now. All you need to know is that you can select frontend-channels and backend-channels in the address bar using query params. These values are then stored in cookies, which means that once you navigate around in the GUI, the query params disappear from the URL and you have no easy way of determining which channels you are using.

The top bar in the beta version of the 24SevenOffice application.

So, without a way of visually seeing which channels were in use, matters only got worse: you were in a constant state of confusion about which channel combinations were actually being used. This is why we implemented an indicator in the top bar of our beta version of the 24SevenOffice application.

This allowed the QA team to test functionality across environments, with combinations of both backend and frontend-channels.

Integration testing

Integration testing is another big benefit. With backend-channels implemented, the CI/CD pipeline can run integration tests against the specific environment of the branch you have pushed. This allows us to block any PR from being merged if the integration tests fail on that backend-channel, which in turn should ensure that the main branch/prod has always passed integration tests.

Integration test flow using Playwright and GitHub Actions
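Our actual tests are driven by Playwright, but as a simplified sketch of the idea (the /health endpoint, test name, and URL scheme are illustrative assumptions), an integration test can resolve its base URL from the same branch information the deploy step used:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Xunit;

public class BackendChannelTests
{
    // Resolve the backend-channel URL the same way the deploy step names the stack.
    private static string BaseUrl()
    {
        var branch = Environment.GetEnvironmentVariable("GITHUB_REF_NAME") ?? string.Empty;
        var match = Regex.Match(branch, @"ROC-\d+", RegexOptions.IgnoreCase);

        return match.Success
            ? $"https://{match.Value.ToLowerInvariant()}.medium-project.dev.api.io"
            : "https://medium-project.dev.api.io";
    }

    [Fact]
    public async Task Deployed_channel_responds()
    {
        using var client = new HttpClient { BaseAddress = new Uri(BaseUrl()) };

        // A hypothetical health endpoint; any smoke-test request works the same way.
        var response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```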

Well, this sounds perfect!

Yes, but also no.

We discovered a few minor irritations, none of which initially seemed bad enough to take anything away from the gains.

Running locally

One of the issues we needed to get used to is that, since you are deploying a new stack every time you create a new branch, every reference to AWS resources in your local environment has to be updated.

Let’s say you have a simple application running a single AWS Lambda function that accesses two different DynamoDB tables and an S3 bucket.

For each new feature you work on, you first need to deploy, then look up what the various resources you need access to are named in AWS.

This could be semi-automated by outputting the needed env variables during the deployment process. Since some IDEs store env variables in different formats, you would need to output them in the formats required by your IDE. Then it would just be a matter of copy-pasting the output into your env variable file.
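One way of doing that is to declare CloudFormation outputs in the stack; cdk deploy prints them when the deployment finishes. The sketch below uses illustrative resource and output names, assuming the simple Lambda/DynamoDB/S3 application described above:

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.DynamoDB;
using Amazon.CDK.AWS.S3;
using Constructs;

public class MediumProjectStack : Stack
{
    public MediumProjectStack(Construct scope, string id, IStackProps props = null)
        : base(scope, id, props)
    {
        var vouchersTable = new Table(this, "Vouchers", new TableProps
        {
            PartitionKey = new Amazon.CDK.AWS.DynamoDB.Attribute { Name = "id", Type = AttributeType.STRING }
        });

        var attachmentsBucket = new Bucket(this, "Attachments");

        // Printed at the end of `cdk deploy`, so the generated names can be
        // copy-pasted (or scripted) into the local env variable file.
        new CfnOutput(this, "VouchersTableName", new CfnOutputProps { Value = vouchersTable.TableName });
        new CfnOutput(this, "AttachmentsBucketName", new CfnOutputProps { Value = attachmentsBucket.BucketName });
    }
}
```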

Initial deploy

The first deployment of a stack always takes a bit longer than the following deployments, since it contains a bigger changeset. This means that if you have a huge application, it will take some time for the backend-channel to be up and running after the initial commit. This can be solved either by manually pushing an empty commit every time you create a new branch, or by somehow automating a deployment at branch creation.

Resource pileup

Since we are creating an unknown number of feature stacks, they will (no pun intended) stack up pretty fast. One way of handling this is to add functionality that self-destructs a stack after a set number of inactive days, or to include a “review” action in your GitHub Actions flow that deletes the stack when a PR is merged or closed.
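If you do tear feature stacks down automatically, one detail worth handling is that stateful resources such as S3 buckets are retained by default when a stack is destroyed. Here is a short sketch (the class and resource names are illustrative, not our setup) of opting short-lived backend-channel stacks out of that behaviour:

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.S3;
using Constructs;

public class FeatureStack : Stack
{
    public FeatureStack(Construct scope, string id, IStackProps props = null)
        : base(scope, id, props)
    {
        new Bucket(this, "Attachments", new BucketProps
        {
            // Allow the bucket to be deleted together with the stack...
            RemovalPolicy = RemovalPolicy.DESTROY,
            // ...and empty it first, so the stack delete does not fail on a non-empty bucket.
            AutoDeleteObjects = true
        });
    }
}
```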

Some might be concerned about cost, since we are creating an unknown number of stacks, but it’s important to remember that the real cost of AWS is runtime, not necessarily the number of resources created.

Conclusion

The quality-of-life improvements that backend-channeling provides to QA, devs, and integration testing far outweigh any concerns about resource pileup, initial deployment time, or having to update your environment variables more frequently.
