Angular Interceptors — How we sped up development

Michal Haták
Jun 26, 2018 · 3 min read

At AUTOPROP, we’re always looking for ways to streamline our development processes. Our team is in beautiful central Europe, in Budweis, Czech Republic, but our customers are far to the west, in Vancouver, Canada. And, because we heavily leverage AWS, our production and staging servers are in Portland, Oregon. Distance = latency, and for years now we’ve just dealt with it.

Response times from Budweis to AWS regions (thanks, http://www.cloudwatch.in/)

Last week, I was chatting with a good friend of mine about this, and he suggested we try (cue the buzzword) ‘backendless’ development in Angular. He felt it could save us a lot of time and headache waiting on our far-away servers. So, naturally, we decided to give it a shot.

Essentially, his suggestion was to cache every single request in our application and, by doing so, build up a set of prepared server responses we could return immediately. With this suggestion and a code example from him, I got to work right away.

After some tinkering, I ended up with what is essentially a snippet of code: just a few lines that will now save us hours and hours of waiting time.

To play devil’s advocate, one of my team members asked why we would do this when we could just as easily (or maybe even more easily) spin up servers in Frankfurt, very close to our office. That’s true, but because of how our system works, many third-party data requests still go back to the US and Canada, so there would still be quite a bit of waiting. Spinning up new servers also costs money and takes time to manage, whereas caching requests is entirely local and costs no server time. We’re always trying to minimize server costs and resource requirements.

So, how did we do this?

In the end, we placed an interceptor in our application. The interceptor checks whether a request matches a given pattern and, if it does, returns a static answer instead of the real server response.

What is an interceptor? Well, it’s basically a piece of code that steps in when a request is made and can control the response. (BTW: we also use this method in a similar way to save money on map tile requests by our clients — we cache map tiles on the client side through service workers.) Depending on the code, the request either goes through to the servers or is intercepted, and either a faked or a modified response is returned. The biggest advantage: this is all handled automatically, which means no other changes need to be made.

Let’s look at how this works. The most basic interceptor which logs every request could look like this:
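A minimal sketch, assuming Angular 6 with HttpClient and RxJS 6 (the class name is just for illustration):

import { Injectable } from '@angular/core';
import {
  HttpEvent, HttpHandler, HttpInterceptor, HttpRequest
} from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class LoggingInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // Log the outgoing request, then hand it on to the next handler untouched.
    console.log(`${req.method} ${req.urlWithParams}`);
    return next.handle(req);
  }
}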

Once we have that code, the only thing left to do is register the interceptor in the main module as a provider.
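Something along these lines, assuming the usual AppModule and the LoggingInterceptor from above (the file paths are illustrative; multi: true is what allows several interceptors to be chained):

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';

import { AppComponent } from './app.component';          // hypothetical root component
import { LoggingInterceptor } from './logging.interceptor';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, HttpClientModule],
  providers: [
    // Register the interceptor; multi: true adds it to the chain instead of replacing it.
    { provide: HTTP_INTERCEPTORS, useClass: LoggingInterceptor, multi: true },
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}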

Easy, eh? So now, let’s make an interceptor which checks every request against a pre-defined map and returns a pre-defined server response for any match.
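Here’s a sketch of the idea (the MOCKS map, the environment.useMockBackend flag, and the example paths are all hypothetical; the real responses would come from JSON files):

import { Injectable } from '@angular/core';
import {
  HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HttpResponse
} from '@angular/common/http';
import { Observable, of } from 'rxjs';

import { environment } from '../environments/environment'; // hypothetical home of the on/off switch

// One entry per request we want to fake: HTTP method, URL pattern, canned response body.
// In practice the bodies can be imported from JSON files, e.g.
// `import * as properties from './mocks/properties.json'` with "resolveJsonModule" enabled.
const MOCKS: Array<{ method: string; url: RegExp; body: unknown }> = [
  { method: 'GET',  url: /\/api\/properties$/, body: { items: [] } },
  { method: 'POST', url: /\/api\/reports$/,    body: { id: 1, status: 'created' } },
];

@Injectable()
export class MockBackendInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // The config switch lets us fall back to the real servers whenever we need to.
    if (!environment.useMockBackend) {
      return next.handle(req);
    }
    const match = MOCKS.find(m => m.method === req.method && m.url.test(req.url));
    if (match) {
      // Short-circuit: answer immediately with the canned body instead of hitting the server.
      return of(new HttpResponse({ status: 200, body: match.body }));
    }
    // Anything not in the map still goes out to the real backend.
    return next.handle(req);
  }
}

It gets registered exactly the same way as the logging interceptor above, via HTTP_INTERCEPTORS.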

So, in general, we now have a map of the possible requests (paths and methods). Once a request matches one of those entries, we return a pre-defined response from a file (defined as the last element of the map entry). We can also turn these responses off in the config, mostly for cases where we want to test functionality that they would otherwise block.

Hope you enjoyed this article and I’m looking forward to your comments or suggestions. You can also follow me on Twitter. :)
