A Frontend Monorepo for the Rabobank?

Jouke Visser
Sep 21, 2018 · 12 min read

With the advent of Senses 2.0, the plan is to have all code related to this new Frontend platform (the platform code itself, the applications running on it, all features and pages and all tooling) in one Git repository. We call this a Monorepo. As it turns out, at the Rabobank there’s a lot of scepticism about this plan. While a lot of frontend developers foresee many benefits, Product Owners and Architects mostly foresee a lot of problems coming from it. Since we can’t seem to agree beforehand, we decided to start a pilot soon with 8 different teams, each working on their own small feature, all of them presented on one page. The teams are very diverse: one has never before built something in RBO (our Retail Banking application), three teams reside mostly in India, other teams have been building features in RBO since its inception. We expect this pilot to surface any issues stemming from using a Monorepo, so we can have a good discussion on how to move forward.

In this article I will try to explain the reasoning behind this effort, as well as to address some commonly heard reasons not to go this way.


Our current situation

With our current Frontend Platform -Senses 1.0, based on AngularJS- we created a way for teams to operate fully independently. There is a platform team that maintains and releases the application shell (Senses Shell), the Frontend API (Senses Runtime), and all other platform projects, and there are about 30 feature teams, who create, maintain and release their features without the need to consult the platform team. The glue between the Platform and the Feature Teams is our Content Management System, Tridion, which maintains the pages that the platform loads, and which contain the features created by the feature teams. If needed, the Content Management System can reshuffle features across pages, again without the other teams having to be informed. In practice, of course, there is communication about this, but technically there would be no need.

The way this works is that, as the user navigates, the platform code loads an HTML fragment generated by Tridion, parses it, comes across a senses-module-app directive which defines a feature, fetches the assets of the feature (which we call a static), compiles the corresponding JavaScript at runtime and executes it. The platform projects and feature projects are maintained in their own Git repositories, and have their own build pipelines that eventually deploy them to the statics servers. Because features are built from a boilerplate project, they always carry the same signature, which allows the platform code to load, compile and execute features the same way, without knowing anything about them - all at runtime.
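As a rough sketch of that discovery step (the senses-module-app directive name comes from the text above, but the parsing approach and URL layout here are purely illustrative, not our actual implementation):

```typescript
// Illustrative sketch: how a shell could discover features in a
// Tridion-generated HTML fragment. The senses-module-app attribute is
// from the text above; the parsing and the URL layout are invented.

interface FeatureRef {
  name: string;       // value of the senses-module-app attribute
  staticUrl: string;  // where the feature’s assets (its “static”) would live
}

// Scan a fragment for senses-module-app directives and derive the
// asset location for each referenced feature.
function discoverFeatures(fragment: string, staticsHost: string): FeatureRef[] {
  const pattern = /senses-module-app="([^"]+)"/g;
  const refs: FeatureRef[] = [];
  let match: RegExpExecArray | null = null;
  while ((match = pattern.exec(fragment)) !== null) {
    refs.push({ name: match[1], staticUrl: `${staticsHost}/${match[1]}/bundle.js` });
  }
  return refs;
}

const fragment = `
  <div senses-module-app="payments-overview"></div>
  <div senses-module-app="savings-widget"></div>`;

console.log(discoverFeatures(fragment, "https://statics.example/features")
  .map(f => f.name));
```

In AngularJS, the final step after fetching each static is handing the code to the JIT compiler; it is exactly that last step that becomes a problem with Angular, as described below.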

In theory, this model scales to any number of teams working on the same application.


No Angular compiler in runtime

After we decided we would move from AngularJS to Angular, the thought of changing this process had not yet crossed my mind. Sure, there were downsides to it (see below), but having to change our platform technology was enough to deal with.

So I tried to design a system based on Angular that would work the same way. By that time, Angular 4 was the most recent version, and much to my surprise, I found that the Angular Core team had just decided to remove the JIT compiler from Angular’s runtime. This meant that there would no longer be a way to compile and execute separately deployed features at runtime. Instead, at compile time, everything needed to be pulled together to create a deployment of the whole application, including features.

This turns out to be a deal breaker with regard to keeping the deployment model we currently have. To be clear: this does not automatically mean the only way to solve it is to work in a Monorepo. There are different ways to go about it. What it does mean is that with every release of any feature, we’d have to redeploy the entire application. At some point in the build process we’d have to pull all platform, page and feature code together to create a new build. So either we keep working in separate repositories, and every deployment triggers a step that pulls everything together and creates a new build, or we make sure all code already is together.

There is another way to achieve this, which I haven’t completely ruled out yet, although it seems a bit too new to start using already. I’m talking about Angular Elements: a way to wrap your Angular Component into a Web Component. If we did this, the resulting Web Component could be deployed to our statics servers, and we’d only have to build the application with its pages. When I say it seems a bit too new, I’m basing that on a presentation I attended at ngEurope in February of this year, where this was demonstrated. A simple ‘Hello World’ Angular Component resulted in a Web Component of ~500kb. That’s way too large. I expect this problem to become much smaller once Angular Ivy is released, but for now I’m not considering this a realistic alternative for the short term.

So in short: our deployment model needs to change. At some point before we actually create a new release, we will have to have everything that makes up the application brought together before we call the Angular compiler.

If we consider the option to continue working in separate repositories, we’d have to design (and create tooling for) a process to gather all features and the application and pages together on one build server and kick off the compiler process. This in itself sounds simple, but determining which features and their versions to gather is rather complex, and would be a relatively slow process, impeding a fast deployment.
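To make that complexity concrete, here is a deliberately naive sketch of such a ‘gather’ step. The manifest shape, field names and resolution rule are all hypothetical; this is not our actual tooling:

```typescript
// Hypothetical sketch of a "gather" step for a full-application build.
// The manifest shape and the resolution rule are invented for illustration.

interface FeatureManifest {
  name: string;        // feature identifier
  version: string;     // version currently live in production
  candidate?: string;  // version a team wants to release, if any
}

// Decide which version of each feature goes into the next full build:
// take the release candidate when one exists, otherwise keep production.
function resolveBuildSet(manifests: FeatureManifest[]): Record<string, string> {
  const buildSet: Record<string, string> = {};
  for (const m of manifests) {
    buildSet[m.name] = m.candidate !== undefined ? m.candidate : m.version;
  }
  return buildSet;
}

const buildSet = resolveBuildSet([
  { name: "payments-overview", version: "1.4.0" },
  { name: "savings-widget", version: "2.0.0", candidate: "2.1.0" },
]);
console.log(buildSet); // the versions the build server would fetch
```

Even this toy resolver hides the hard parts: knowing which combination of versions has actually been tested together, and what to do when multiple candidates conflict. That coordination cost is exactly what makes the separate-repositories option slow.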

This led me to at least take a serious look at the Monorepo scenario.


Angular release schedule

The people working on Angular are very clear about their plans on releasing new versions of Angular. So far they have pretty much lived up to their schedule. According to the Angular website:

In general, you can expect the following release cycle:

* A major release every 6 months
* 1–3 minor releases for each major release
* A patch release almost every week

A major release every 6 months.

Let that sink in for a little bit…

That means that if we want to stay up to date with the latest developments in Angular, we’ll have to prepare for breaking changes every 6 months. Now, think of our distributed setup. We can release platform code separately from feature code. That’s great, but there’s one big exception I haven’t mentioned yet: we cannot release breaking changes in platform code without verifying with the feature teams that nothing breaks in our features. When we want to roll out a platform version with breaking changes, we need to go through a very time-consuming process of having each feature team check their feature against the release candidate of our platform and report back when they’re done making the necessary changes. If we’re lucky, they can make changes so their feature works in both the old and the new version. In case they can’t, we need to do a coordinated release of the platform and all features at the same time.

This limitation in our distributed setup is exactly why we stopped rolling out new AngularJS versions after AngularJS 1.5.11. It would easily take us 2 to 3 months to get every team ready and do a coordinated release. Try projecting that onto the Angular release cycle. We would probably be spending most of our time keeping up with Angular, instead of building new functionality and cool features.

Within a Monorepo setup, we of course still need to prepare for breaking changes every 6 months. The big difference, however, would be the time it takes to make all necessary changes. Instead of having to ask all teams to make changes, and having to perform a coordinated release of all affected code, a small group of dedicated developers (a platform team, or even part of it) could create a branch, perform the upgrade, make all necessary changes in platform and feature code, and provide a pull request that can be approved by all affected features’ teams. This will save a huge amount of time every 6 months.


New App Concept

Parallel to our efforts to develop Senses 2.0, the UX department is working on a new App concept. The App will get a serious makeover. New design, new look and feel, new structure. The changes will be implemented in so-called ‘waves’. The product manager in charge is planning to start multiple waves sequentially in which the App will slowly but surely change its face.

Early last year, we made a very small change to our App -or so we thought. We wanted to change the background colour from Rabobank-blue to Granite-grey. Back then, I had just started working for the Rabobank, and I thought this was probably the easiest change we could make.

Similar to the issues we have in rolling out incompatible Platform releases, changing the design also causes a kind of ‘breaking change’. Changing a background colour is something you do in the entire App, or not at all. So, to my dismay, the change in background colour took more than 3 months to complete.

In this case too, if we had all code in one repository, you could form a task force to make the global change in design, colours or whatever, let them prepare a pull request, and have the affected features’ teams approve the change.

We expect quite a number of ‘design waves’ to arrive in 2019, and this is going to be very hard to coordinate with our distributed setup.


Automated tests

At the Rabobank, we work in independent DevOps teams. The teams operate with a high level of independence, and can therefore make up their own rules. How they organise their work is up to them.
The retail banking application -both the App and the browser version- is an effort of many of these teams across different departments. Although it all started at the Online department, these days there are teams from Investments, Payment & Savings, Insurances, Mortgages, all developing their own features for it. I think it’s great that we’re organised this way, and that teams have a high level of independence. I see only one real issue that comes from it: there’s no standardisation regarding testing. Even regarding this, teams make up their own rules.

In practice, you see that there are very mature teams that take testing very seriously, but also teams that have hardly any meaningful tests. As such, the aforementioned breaking changes in platform code cannot be reliably rolled out by the platform team itself by simply running all feature teams’ tests. That just doesn’t prove things will work.

I’ve heard the argument that the differing levels of test automation are actually a good thing, as teams are in various stages of maturity, and therefore we should not expect the same test coverage from all teams.

One of our goals is to achieve Continuous Delivery (CD). The only way we can achieve that, is when we can rely on our automated tests to prove we don’t have any regressions. There is no excuse regarding maturity of teams. By now, everyone should know that testing is a vital part of software development, and an essential step towards CD.

Within a Monorepo, we can enforce rules regarding minimum test coverage, making sure all tests pass before a pull request can be merged, and so on. We can actually do Continuous Integration on all parts of the application with every change that happens. With our current, distributed setup, only the isolated parts can be tested (if there are tests to begin with), but there’s no way to run our integration tests reliably for all code at once.

(One of the upcoming articles will be entirely about how we want to approach frontend testing.)


Going across features

Features are considered to be isolated components. They should depend on platform code, but on nothing else. However, more and more we see the need for features to be part of a Customer Journey, which is typically a process the user follows from feature to feature. One feature finishes, and another one picks up the results of the first, and so on.

This is another scenario that our distributed setup does not facilitate. Since each feature lives in its own Git repository, checking whether the flow from one feature to another works is only possible when you test the entire App. The only way to test the entire App is to install it on our Acceptance environment, which is usually the last step before going to production. That’s very late in the process to check whether a user’s basic flow works.

Within a Monorepo, you have access to all features at all times. You can easily check a flow from feature to feature, page to page, because it’s all there, right from the beginning.


Sounds good, but…

Sure, there are many arguments for having a Monorepo, but there are also a lot of people who are not so enthusiastic about the idea. Below I want to address some commonly heard arguments against it.

This violates our Decoupled architecture principle!

One of our principles within the Rabobank is to make sure we have a decoupled architecture. We want systems and teams to be able to operate without dependencies on each other. So how does a Monorepo fit within a decoupled architecture?

First of all, having all code in one physical location does not necessarily say anything about the architecture. Features are still isolated components. Thanks to Nx, a set of tools we’re using to manage our Frontend Monorepo, we can enforce rulesets that make sure a Feature Component does not depend on anything other than the Senses platform code, just like in our distributed setup.
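As an illustration of such a ruleset (the exact rule name and configuration format depend on the Nx version, and the tags here are invented), a dependency constraint can be declared in the workspace lint configuration:

```json
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "type:feature",
            "onlyDependOnLibsWithTags": ["type:platform"]
          }
        ]
      }
    ]
  }
}
```

Projects are tagged (for example type:feature or type:platform) in their project configuration; the lint rule then fails any build in which a feature imports from anything other than platform code.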

By doing so, we can ensure a Feature developed for our Retail banking application could be used in our Business banking application — if we should so desire. It’s an isolated component, only dependent on the platform, so it will run on any application built on that platform.

So in my opinion, we still have a decoupled architecture, but we just store all the code in one place.


I don’t want to fix other teams’ tests!

This is an argument many people have used. They envision the scenario that team A is working on a feature, which breaks its tests, and team B works on another feature, wants to go to production, but since team A has broken tests, they either need to wait for them to fix it, or fix it themselves.

This is actually a non-issue. The way it works is as follows:
The main branch is always in a releasable state: all tests of all code pass there. When a team starts development, they branch off this main branch, do their work, run only the tests affected by their change, and submit a pull request. Before any human can approve that pull request, a Jenkins job reruns the tests to verify everything still works, and approves the pull request itself (or declines it in case of failing tests). This ensures the releasable state of the main branch. Other teams, simultaneously working in their own branches, cannot interfere with a team’s efforts to bring their feature to production, because their tests have to pass too before anything is merged to the main branch.


Testing will become slow!

With so much frontend code in one repository, and an increased focus on coverage, there will be so many tests that testing will become slow. I think this is only partly true. First of all, Nx helps here, by allowing us to build and test only the code that is actually affected by a change. You can run ‘affected’ tests locally, but also on each pull request. This can be done for Unit tests, but also for End to End tests. This is a great way to avoid running all tests, and still be sure that everything works.
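For illustration, the affected commands look roughly like this on the command line (the exact command names and flags differ between Nx versions, so treat these as a sketch):

```sh
# Run only the unit tests affected by changes relative to the main branch
nx affected:test --base=main

# Same idea for end-to-end tests
nx affected:e2e --base=main

# Show which projects a change touches, without running anything
nx affected:dep-graph --base=main
```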

The part where it will actually be true is when we need to run all tests, regardless of affected code. For now, I only envision us doing that nightly, as an extra safety measure. If that takes too long, we can take steps to run tests in parallel.


This is going to be chaos!

That’s a remark I cannot really address, because I don’t really know where the chaos is expected. With the right tools in place — making sure nobody can directly commit to the main branch, Nx helping to enforce the isolation of features and running only affected tests, and Angular Schematics generating boilerplate code that ends up the right way in our directory structure — I foresee a totally different, but still very structured way of working.


Conclusion

The conclusion is that there is no conclusion yet. It’s obvious I’m very positive we’ll make this work, but at the same time I’m keeping an open mind and we’ll have to conduct the Pilot to see what we run into, and how we can solve issues as they arise.

After we’ve evaluated the Pilot, I’m sure I will get back and write an article on our learnings.

Developing Senses

All about the Senses 2.0 Frontend Platform of the Rabobank

Written by Jouke Visser

Solutions Architect Frontend for the Rabobank Online Platform, developing a new Frontend Platform based on Ionic 4, Angular 6, ng-redux and Nx.