Handling Angular environments in continuous delivery

Combining the ideas from continuous delivery with Angular

Kevin Kreuzer
Mar 18, 2019 · 10 min read

In non-trivial business applications, we nowadays often encounter a continuous delivery setup with multiple stages.

Each stage has its own configuration to access environment-specific peripheral systems. To do so, we need to deal with stage-based configurations.

The Angular CLI already comes with some built-in concepts to handle different environments.

But, how reliable are they in a continuous delivery environment? 🤔

Continuous delivery — a short introduction

Continuous delivery grew out of the agile movement. Iterative feedback is at its core. Learning from hands-on experience and incorporating that feedback is essential.

Continuous delivery takes agile ideas and defines an approach to deliver automatically tested software in short cycles.

Stages in continuous delivery 🎦

A typical pattern in continuous delivery is staging: a setup with different stages that serve different purposes.

Each commit results in an execution of the pipeline on our CI server. A successful run on the master branch results in a deployment on one of our stages.

Source: https://www.mindtheproduct.com/

In the example above, the test stage contains a snapshot of the master branch. In certain intervals, for example after each sprint, we can deploy to the staging stage. This is the stage where the final acceptance tests are performed. Once accepted, the artifact is moved to production.

Of course, some companies have a faster time to production. Some even deploy each commit directly to production.

But in most business-related apps, it’s nice to have a stage before production. This allows our product owners, business representatives, and testers to have a final look and verify the business logic.

Build once — deploy everywhere 🏗️

When using stages, it is essential to build your artifact only once and move it from one stage to another.

This approach guarantees that the same artifact that was tested also enters production.

We execute our pipeline once and build our artifact once. The pipeline checks out the code, installs the dependencies, runs the tests, builds our artifact, and pushes it to a central repository.

A separate post-build pipeline then gets triggered and deploys the artifact to our development stage. This deployment is just a snapshot of the current state of our master branch.

Once we decide to publish the artifact to another stage, we do not build it again. We fetch it from the central repository and deploy it to the desired stage.

Angular CLI environments and continuous delivery, do they fit? 🤔

The Angular CLI comes with a built-in mechanism of environment files. Let’s have a quick look at how they work.

When generating a brand new CLI project, there is an environment.ts and an environment.prod.ts. In addition to those files, we can always add new environment files inside the environments folder and configure them in our angular.json.

During the build, the desired environment is passed as an argument to the build command. The CLI then picks the correct environment file and replaces the default one during the build. In our code, we can import the environment with the following line.

import {environment} from '../environments/environment';
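For illustration, a default environment file might look like this; the property names besides `production` are assumptions made for this article's example, not part of the CLI scaffold:

```typescript
// src/environments/environment.ts: the default file used by `ng serve`.
// Property names other than `production` are examples, not CLI defaults.
export const environment = {
  production: false,
  stage: 'local',
  resourceServerA: 'http://localhost:8081',
  resourceServerB: 'http://localhost:8082',
};

// src/environments/environment.prod.ts would export the same shape with
// production: true and the production URLs. During a production build the
// CLI swaps the files via the `fileReplacements` entry in angular.json.
```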

If you are curious about environment files in Angular, I recommend reading the excellent article “Becoming an Angular Environmentalist” by Todd Palmer.

Now that we know about the built-in concept of the Angular CLI, let’s have a look at how it matches the principles of continuous delivery.

The critical thing to notice here is that the Angular CLI sets the environment at build time. This means that we need to build our app before each deployment on a specific stage.

This approach harbors a few risks.

If you build your application each time before you deploy, there is a chance that the artifact you tested is not the same as the one that enters production.

The artifact doesn’t just depend on your code; it also depends on third-party libraries, operating system updates, or other environmental changes that happen over time.

Just think back to the times before we had a package-lock.json. Back then, builds were even less predictable. Third-party libraries often appeared with a ^ inside our package.json. Despite semver, new versions of third-party libraries sometimes introduced incompatibilities with other dependencies, which caused the application to break.

Continuous delivery and the environment files provided by the Angular CLI do not fit!

In continuous delivery, your artifact needs to get environment-specific configurations at startup or at runtime. The Angular CLI, on the other hand, bakes those configurations in at build time.

So how do we combine the ideas from continuous delivery with our Angular application?

There are different approaches to combine the ideas of continuous delivery with the Angular CLI. Each one comes with advantages and downsides.

Let’s have a look at them.

Provide environment configuration over a REST endpoint

The Angular application has no access to runtime environment variables because it runs in the browser.

But in most cases, our frontend does not come alone; it also needs backend services to fetch data from or push data to.

And guess what: the backend has access to environment variables. So let’s use that to our advantage and fetch our environment-specific configurations from the backend.

All we need on the backend side is a REST endpoint that delivers the configurations. Depending on your backend, the way of accessing environment variables differs, so let’s focus on the Angular part.

Let’s build ourselves a ConfigurationService which fetches the configurations.

The backend delivers us a configuration object with three properties, resourceServerA, resourceServerB, and stage, which we load via a standard HTTP request. Nothing fancy.

We use the RxJS shareReplay operator to add caching behavior for the configuration. With this approach, we prevent another XHR request when we call loadConfigurations again. Each new subscriber gets the cached configuration.
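A service along these lines might look roughly as follows; the endpoint path /configuration and the exact interface shape are assumptions based on the description above, not the article’s original embed:

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

// Shape of the configuration object described above.
export interface Configuration {
  resourceServerA: string;
  resourceServerB: string;
  stage: string;
}

@Injectable({ providedIn: 'root' })
export class ConfigurationService {
  // Assumed endpoint path; adjust it to your backend.
  private static readonly CONFIGURATION_URL = '/configuration';

  private configuration$: Observable<Configuration> | undefined;

  constructor(private http: HttpClient) {}

  loadConfigurations(): Observable<Configuration> {
    if (!this.configuration$) {
      this.configuration$ = this.http
        .get<Configuration>(ConfigurationService.CONFIGURATION_URL)
        // shareReplay(1) replays the latest emission to every new
        // subscriber, so only the first call triggers an XHR request.
        .pipe(shareReplay(1));
    }
    return this.configuration$;
  }
}
```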

A complex environment can require dynamic configuration: configuration values that may change at runtime, for example, feature toggles. In such scenarios, the caching strategy used above needs to be extended.

Nice! This approach reads the configurations from a backend. But what if we are not in control of the backend? Let’s say we are only responsible for the frontend and access some external backend services, so we cannot influence the backend.

In such cases, we would need to build ourselves a backend service that delivers our SPA and also provides the REST endpoint to read the configurations.

But, we want to keep our setup lightweight. We only want a simple web server that delivers our SPA.

Host configurations as assets — mount configuration files per environment

Instead of fetching the configuration over a REST endpoint, we directly fetch a JSON file with configurations that lies in our assets folder.

So let’s create a config folder inside assets and put a JSON with the local environment specific configurations in it.
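The file itself is plain JSON; a local example with the three properties from before might look like this (the values are illustrative):

```json
{
  "resourceServerA": "http://localhost:8081",
  "resourceServerB": "http://localhost:8082",
  "stage": "local"
}
```

This file would live at src/assets/config/configuration.json so that it is shipped with the bundle as a static asset.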

But how do we load the configuration.json? Almost in the same way as we did before. The only difference is that we do not fetch from the REST endpoint but from the assets folder.

Awesome! The ConfigurationAssetsLoaderService now loads our assets file, which contains all configurations. But how do we change those configurations depending on the environment we are in?

We simply host our configuration on each stage and then mount the configuration.json file into the assets folder. When the pod starts, the stage-specific configuration is mounted into the assets/config directory.

It is important to create a config folder inside the assets folder; we cannot use a flat hierarchy. When performing a mount, all existing files inside the mounted folder are deleted, so mounting directly over assets would wipe out the rest of our assets.

The concrete way to mount volumes depends on your deployment platform. We at Trasier use OpenShift for our deployments. OpenShift provides us with ConfigMaps, which can hold either a single property or an entire configuration file. On OpenShift, we have different stages; each stage hosts its specific configurations and mounts them into our assets/config folder on pod startup.
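As a hedged sketch, a stage-specific ConfigMap and the corresponding mount could look roughly like this; all names, URLs, and the web root path are assumptions that depend on your image and deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
data:
  configuration.json: |
    {
      "resourceServerA": "https://resource-a.staging.example.com",
      "resourceServerB": "https://resource-b.staging.example.com",
      "stage": "staging"
    }
# In the deployment, the ConfigMap is mounted over assets/config:
#   volumeMounts:
#     - name: frontend-config
#       mountPath: /usr/share/nginx/html/assets/config
#   volumes:
#     - name: frontend-config
#       configMap:
#         name: frontend-config
```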

Ok, great! We have now seen two approaches whose client-side implementation is very similar. In both, we created a service that fetches configurations, either from a REST endpoint or from the assets folder.

But when do we call those services?

Well, short answer, it’s up to you. There are different times where it makes sense to call them. Each one comes with pros and cons.

When to fetch configurations?

Call it as soon as you need it

In this approach, we call the loadConfigurations method of the ConfigurationService as soon as we need it, for example, on a click that triggers a request to resourceServerA.

Notice that the first time we do so, the HTTP request to resourceServerA waits until the request to our /configuration endpoint finishes. All subsequent requests then work as usual, as they get the cached configuration.
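Sketched with the ConfigurationService from the previous section (the component, endpoint, and field names are illustrative):

```typescript
import { Component } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { switchMap } from 'rxjs/operators';

import { ConfigurationService } from './configuration.service';

@Component({
  selector: 'app-items',
  template: '<button (click)="onLoadItems()">Load</button>',
})
export class ItemsComponent {
  items: unknown;

  constructor(
    private configurationService: ConfigurationService,
    private http: HttpClient
  ) {}

  onLoadItems(): void {
    // The first click waits for the /configuration request; every later
    // click gets the cached configuration from shareReplay immediately.
    this.configurationService
      .loadConfigurations()
      .pipe(switchMap((config) => this.http.get(`${config.resourceServerA}/items`)))
      .subscribe((items) => (this.items = items));
  }
}
```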

Call it in our App component

Similar to the approach above, we can fetch the configurations initially inside the constructor of our AppComponent. This approach is especially useful when we display an initial screen that doesn’t require any server data.

Again, the configurations are fetched once. All subsequent subscribers then get the cached configurations.
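As a minimal sketch, assuming the ConfigurationService from before:

```typescript
import { Component } from '@angular/core';

import { ConfigurationService } from './configuration.service';

@Component({ selector: 'app-root', templateUrl: './app.component.html' })
export class AppComponent {
  constructor(configurationService: ConfigurationService) {
    // Trigger the request eagerly; the result lands in the shareReplay
    // cache, so later subscribers do not cause another XHR request.
    configurationService.loadConfigurations().subscribe();
  }
}
```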

Call it during app initialization

Angular allows us to call functions during app initialization. To do so, we take advantage of the APP_INITIALIZER token.

We provide the APP_INITIALIZER token in combination with a factory method. The factory function that is called during app initialization must return a function which returns a promise.

The factory method returns a function that calls the loadConfigurations function, which fetches the configuration from the backend.
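The factory itself can be sketched without any Angular imports; the ConfigurationService is reduced here to a minimal interface, and the promise-returning method is an assumption (with an Observable-based service you would convert the result to a promise first):

```typescript
// Minimal stand-in for the ConfigurationService of the earlier sections;
// in the real app this is the injectable service using HttpClient.
export interface ConfigurationLoader {
  loadConfigurations(): Promise<unknown>;
}

// Factory for APP_INITIALIZER: it must return a function that returns a
// promise. Angular delays bootstrapping until that promise resolves.
export function configurationFactory(
  loader: ConfigurationLoader
): () => Promise<unknown> {
  return () => loader.loadConfigurations();
}

// Wiring in the AppModule (Angular-specific, shown for context):
// providers: [
//   ConfigurationService,
//   {
//     provide: APP_INITIALIZER,
//     useFactory: configurationFactory,
//     deps: [ConfigurationService],
//     multi: true,
//   },
// ]
```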

This approach comes with one downside: even though the initial request to fetch the configurations should be fast, it still blocks the startup of your application until the XHR request finishes.

So as you see, you have different ways to call the service. There’s still one more approach which doesn’t use a service at all.

Override configurations per environment

In this example, we use Angular’s environment files as they come: an environment.ts and an environment.prod.ts.

Even though we have more stages than just production and development, we only distinguish between those two. For local development, we use the environment.ts file. All the other stages are handled by environment.prod.ts.

But how?

Our environment.prod.ts does not contain the actual values; it contains placeholder values which are overwritten per stage by a script at startup.

An example environment.prod.ts file could look like this (the placeholder names are illustrative):

export const environment = {
  production: true,
  resourceServerA: 'REPLACE_RESOURCE_SERVER_A',
  resourceServerB: 'REPLACE_RESOURCE_SERVER_B',
  stage: 'REPLACE_STAGE'
};

When we start our web server, we can then use a custom start.sh script which replaces the placeholders.
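A hedged sketch of such a start.sh; the placeholder names, environment variable names, and the web root are all assumptions that must match your environment.prod.ts and server image:

```shell
#!/bin/sh
# Replace build-time placeholders in the compiled bundles with values
# from the container's environment variables, then start the web server.

replace_placeholders() {
  # $1: directory that contains the built Angular bundles
  for bundle in "$1"/main*.js; do
    [ -f "$bundle" ] || continue
    sed -i \
      -e "s|REPLACE_RESOURCE_SERVER_A|${RESOURCE_SERVER_A}|g" \
      -e "s|REPLACE_RESOURCE_SERVER_B|${RESOURCE_SERVER_B}|g" \
      -e "s|REPLACE_STAGE|${STAGE}|g" \
      "$bundle"
  done
}

replace_placeholders "${WEB_ROOT:-/usr/share/nginx/html}"

# Afterwards, the script would start the server, e.g.:
# exec nginx -g 'daemon off;'
```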

We then execute this script at startup, for example inside our Dockerfile.
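An illustrative Dockerfile for this setup; the base image, paths, and app name are assumptions:

```dockerfile
# Serve the pre-built bundle with nginx; start.sh rewrites the
# placeholders from environment variables before starting the server.
FROM nginx:alpine
COPY dist/my-app /usr/share/nginx/html
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
```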

This approach, at least to me, feels kind of “hacky”. Overwriting strings in a bundle is probably not the most delightful way. It furthermore harbors the risk of overwriting something which should not be overwritten.

If you still decide to use this approach, it is super important to have good placeholders. Use some special characters which you usually do not use in variable names.


Conclusion

Angular comes with environment files that allow us to handle environment-specific configurations. However, they do not meet the requirements of a continuous delivery setup.

Angular’s environment files are used at build time. In continuous delivery, it is essential that we deploy the same artifact to different stages. Therefore, we need to pass in environment configurations at startup or at runtime.

Depending on our setup, we can load configurations via a service, either directly from a backend or from our assets folder.

When doing so, it’s good practice to cache them.
