You too can love the MonoRepo

Just plug the frontend into the backend, and it’s all integrated!

I’ve worked at Google for 9 years now, and I accept the monorepo as a way of life. You can read more about how Google does source control here: https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext

What’s a monorepo? Imagine if every time you needed to make a new source control repository, you just made a new top-level folder in your current repository instead. You’re doing monorepo! All the software written by everyone in your company stored together, versioned together.

@IgorMinar and I go to conferences (like AngularMIX last week), and tell people how Angular uses a monorepo on GitHub, and that we like it so much we’re considering how to move our satellite projects like Angular Material and Angular CLI into that repo as well. We tell them that they should do this too. They look at us like we’re crazy! Sure, we might be, but I don’t think this is a crazy suggestion. It can be tough to implement, but let me explain why a monorepo is awesome, and maybe even suggest how to make it easier to switch.

Jeff Whelpley wrote about the problem of sharing code as your org scales. https://medium.com/@jeffwhelpley/the-problem-with-shared-code-124a20fc3d3b He has a lot of good points there; I’ll reiterate a bit with a different emphasis: continuous integration.

Releasing a shared library

Owning a shared library is hard. You establish a public API and try not to change it, because breaking changes are really hard to push into users’ codebases. Your users put off the cost of upgrades, even easy ones, so if you make a breaking change, how do you roll it out? If a user finds a bug, will they ask you to back-port the fix into a patch release on an older version? Then you discover that users don’t actually respect the public API: as the number of dependents grows, they bake in unintended assumptions that any observable behavior of your software will remain the same (this has been dubbed Hyrum’s Law: http://hyrumslaw.com ). Changes you thought were non-breaking can cause the same problems. You can easily find your job taken over by release-engineering tasks.

Release engineering is all this careful work of tagging versions of various systems such that they work together. It’s part of the umbrella term “integration” — mixing streams of changes across several systems and seeing what happens. Many enterprises have processes where a QA or staging environment mixes all the parts together, where they bake for a week or longer. You find lots of bugs here, and they’re expensive to fix. That’s due to several factors: the bad change was made long ago, the QA environment is unavailable for other purposes until all the integration is complete, and you are interacting with parts of the org where your communication mechanisms are less effective at a distance.

A key consequence of a monorepo is that release engineering doesn’t happen anymore, at least for dependencies between two systems in the same repo. Because all our code is in one repo, Google has no shared QA/Staging environment! The monorepo changes the way you interact with other teams such that everything is always integrated. And hey, our industry has a name for that: continuous integration. If you don’t have a monorepo, you’re not really doing continuous integration, you’re doing frequent integration at best. In a monorepo, even pre-commit testing is already integrated.

The difference is profound. When we make any change to Angular, we need to sync this into Google’s monorepo. Doing so means that every Angular user immediately gets the change. Every commit is a release! Read that sentence again. Did you really read it again? Every commit is a release! That requires that we get really good at doing Continuous Integration: we run the user’s tests to make sure it’s safe to release Angular many times a day. This also keeps us honest about breaking changes. We pay the cost of upgrading users at the moment we land the change. Doing that requires we either use the great tooling we’ve built at scale, or narrow down the scope of the breakage (or both).

TBH: there are some hard things

First, your tools probably make assumptions they shouldn’t have. Oops! They assumed the repository is one-to-one with a team. That affects who should review code before granting permission to commit changes, grouping bugs, notifications about changes, and especially what to build and test when a change is made.

Let’s start with permissions. You need to declare who owns what code in the repository, at a finer grain than “the whole repository”. GitHub supports this now, see https://github.com/blog/2392-introducing-code-owners. We have an equivalent system in Google’s monorepo. You check in some file specifying the ownership of subdirectories, then changes to those subdirectories require review by an owner before you can merge them. This works great, and ties right in with the original intent of Pull Requests — you are proposing a change to some code you don’t own, in a way that’s convenient for the owner to review and accept. Angular uses PullApprove.com to do the same.
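GitHub reads these ownership rules from a CODEOWNERS file checked into the repository. A minimal sketch (the paths and team names here are hypothetical, not Angular’s actual configuration):

```
# CODEOWNERS: the last matching pattern takes precedence.
/packages/material/   @angular/material-owners
/packages/cli/        @angular/cli-owners
/docs/                @angular/docs-owners
```

A pull request touching /packages/cli/ then requires approval from a member of the listed team before it can merge.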

The next problem is bug trackers, which assume that a “project” is one-to-one with a repository. GitHub doesn’t have a great answer for this: when the Angular team triages issues, we add a label like “comp: my project” and can then filter searches and dashboards with that query atom. This works okay for us, but it’s not ideal; for example, a user filing a bug can’t propose a label, so we have to triage everything globally. Internally at Google we have a similar mechanism to label things with components.

The notification problem is really unsolved. If I ask GitHub for notifications on the entire Angular repo, I get a firehose. I assume that someone will ping me if an issue is important and requires my attention, but most GitHub users assume that I’ve already been notified. I’d love some eager contributor to find a solution for this. (Maybe there’s just some GitHub setting I’m missing)

I hope you’re still with me, and sitting in a cozy chair, because I’m only now getting to the part I’m passionate about. The objection I heard at AngularMIX is that we don’t have enough resources to run our CI system with so much code at once. I’m so glad you asked!

Continuous Integration

If you make a change, what should you build and test? Ideally, anything that depends on the code you just changed. In fact, the Angular team would like to run your tests when we make changes (email devrel@angular.io if you’re game for setting that up for your enterprise).

A naive build system does the same work every time you run it. If you ask it to build and test, it will build all the code in the repository and run all the tests. And it does this each time you make a change.

However, the daily resource demand for naive CI grows with the product O(C × T), which is quadratic when both factors scale with the size of the company:

C is the number of changes committed by all engineers per day
T is the cumulative resource requirement of all the tests
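To make that product concrete, here is a back-of-the-envelope calculation with made-up numbers (nothing here is a measurement from a real CI system):

```python
# Hypothetical numbers, just to illustrate the O(C × T) product.
commits_per_day = 200        # C: changes committed by all engineers per day
full_suite_cpu_hours = 50    # T: cost of running every test in the repo once

# Naive CI runs the entire suite for every commit.
naive_daily_demand = commits_per_day * full_suite_cpu_hours
print(naive_daily_demand)  # 10000 CPU-hours per day
```

Double either factor and the daily bill doubles; grow both and it grows quadratically.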

A monorepo increases both factors a lot. On a single team, C is close to constant (you don’t grow headcount that fast), but across a big company it grows linearly or even exponentially. T increases slowly on your own project (and you have an incentive to keep your tests fast), but across the whole company you might be running many other teams’ tests that are more resource-intensive.

The trick we use at Google is to rely on our build tool, Bazel. The build tool can be asked “what depends on this code” in a way that scales: it only requires analyzing the static build configuration across the monorepo. Even for large repositories, you can do this in one Bazel instance with enough memory. That means the requirements on your CI system have a new factor:

X is the “connectedness” of the graph — how many tests depend on the average change. A value of 1 is fully connected (a change affects everything) and 0 is fully disconnected (no code dependencies).

And the new resource demand function is something like O(C × T × X). That’s much better, because X is probably close to zero.
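Conceptually, answering “what depends on this code” is a reverse-dependency query over the build graph. Here is a toy sketch in Python to show the idea; the target names and graph are hypothetical, and this is not how Bazel is actually implemented:

```python
from collections import defaultdict, deque

# A toy dependency graph: each target lists the targets it depends on,
# mimicking what a build tool derives from static build configuration.
deps = {
    "//app:server_test": ["//app:server"],
    "//app:server": ["//lib:auth", "//lib:db"],
    "//lib:auth_test": ["//lib:auth"],
    "//lib:db_test": ["//lib:db"],
    "//tools:lint_test": ["//tools:lint"],
}

def affected_tests(changed, deps):
    """Return the test targets that transitively depend on `changed`."""
    # Invert the graph once: target -> targets that depend on it.
    rdeps = defaultdict(set)
    for target, ds in deps.items():
        for d in ds:
            rdeps[d].add(target)
    # Breadth-first search over reverse dependencies from the changed target.
    seen, queue = {changed}, deque([changed])
    while queue:
        for dependent in rdeps[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return sorted(t for t in seen if t.endswith("_test"))

print(affected_tests("//lib:auth", deps))
# ['//app:server_test', '//lib:auth_test']
```

A change to //lib:auth triggers only the two tests downstream of it; //lib:db_test and //tools:lint_test never run, which is exactly why X stays close to zero.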

Note that X might not decrease over time, so overall demand is still quadratic. At Google we measured this once, and empirically the demand did seem quadratic, but there was a rise in robot-authored changes over the same period, and no one did proper science to determine whether X was actually constant.

But what you pay for with the slow-quadratic growth is totally worth it. Today you integrate changes infrequently, after you cut a release and most of your users upgrade. Adopting a monorepo means every change can be integrated. If you believe in Continuous Integration (a requirement for Continuous Delivery) then you should seriously think about using a monorepo, and a build system that helps you keep up with the greater amount of testing.

Appendix A

If I’m writing an appendix, does that mean I’ve gone too long? Probably.

I want to point out that Bazel doesn’t require using a monorepo. Bazel understands multiple “workspaces”, and you have several options for fetching all the workspaces onto the same machine. As long as you can get all your multi-repos laid out this way, you can still ask Bazel which tests are affected by a given change, as I describe above.
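Bazel’s query language has an rdeps function for exactly this question. A sketch of the invocation, with a hypothetical target name standing in for your changed code:

```shell
# Find every test that transitively depends on the changed target.
# rdeps(universe, x) = reverse dependencies of x within the universe;
# kind(test, ...) filters the result down to test rules.
bazel query 'kind(test, rdeps(//..., //lib/auth:auth))' --output label
```

The output is a list of test labels you can feed straight back into `bazel test`.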

I’m told there are CI systems that understand multi-repo as well (e.g. Jenkins), so you could also make Bazel trigger all the affected tests for any change in any of the repos.

You could also arrange a system for applying the same tag across all the repositories, giving you a way to snapshot the known-good integrated state across them.

In the end, you could make multi-repo do everything a monorepo does by forcing engineers to fetch all the repositories into a given layout on disk. But then you just have the same layout you would have had in a monorepo. I imagine that this multi-repo CI approach is more work than the workarounds I listed above for dealing with the downsides of a monorepo.