Following an Internal Open Source Model of Software Development
At Namely, we have implemented an internal open source model to improve our velocity, ship more features, and fix more bugs. We use this model in a couple of ways, the most important of which is to build microservices. We also use an open source mindset in the way most people are probably familiar with: publishing and sharing libraries. For .NET, we use an internal NuGet feed (NuGet is the package management system for .NET). We also have internal Ruby gems and common Go packages used by our services. In this article I’ll be talking specifically about our .NET efforts; however, the lessons should be broadly applicable to other languages and frameworks.
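For context, consuming an internal feed like this usually just means adding it to a nuget.config alongside the public feed. A minimal sketch (the feed name and URL below are placeholders, not our real feed):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The public feed plus the internal one; the internal URL is a placeholder. -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="internal" value="https://nuget.internal.example.com/v3/index.json" />
  </packageSources>
</configuration>
```

With this in place, `dotnet restore` resolves internal packages the same way it resolves public ones.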
Using Internal Shared Libraries
There are a couple of problematic models for writing internal shared libraries:
1. One team writes all the shared code but doesn’t ship anything that uses it, while every other team consumes it. This method often results in a codebase that doesn’t solve the problems product teams are actually facing. The code is challenging to use, as it is often overly generic or deeply flawed in ways that only become obvious once you start using it.
2. Everybody theoretically owns the shared codebase, but only a few people (often just one person) actually contribute to the repo. Like a fledgling open source project with a single maintainer, this shared code solves the problems its owner needed to solve. But without input from other teams, it won’t solve their problems and will therefore go unused. This model often devolves into separate shared codebases for different teams, or a shared library that isn’t updated for years, especially once the one person who was working on it leaves the company.
We started our shared library with the second model above, and in fact, we started multiple separate repos for shared libraries. We eventually realized that too few developers were contributing to the libraries, and as a consequence, they weren’t as useful as they could and should have been. Furthermore, new versions of the libraries were difficult to release because developers had to package and push to our internal NuGet feed from their local machines. We felt it was vital to fix these problems, so we looked at how successful open source projects operate. We believed that by adopting some of their practices, we could make the library better and increase adoption across the organization.
We consolidated the libraries into one repo with an easy CI/CD process for deployment. Perhaps counterintuitively, we also made it harder to get a PR merged: we added the requirement for a second reviewer and instituted a soft policy of requiring at least one reviewer from another team. We also adopted a process to drive code quality by strictly adhering to Semantic Versioning for new releases. This means we require large new features to go through a pre-release/review/iteration cycle in which other teams actually use the feature and provide meaningful feedback on it before we merge it to master.
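As an illustration of how the pre-release cycle can mesh with CI/CD, the steps might look something like the sketch below. The pipeline syntax, project name, and feed URL are all hypothetical; the real point is that a SemVer pre-release suffix (e.g. `2.3.0-beta.42`) lets other teams install and exercise a feature before it reaches master:

```yaml
# Hypothetical CI steps (Azure-DevOps-style syntax) for publishing
# a pre-release package that other teams can trial before merge.
steps:
  - script: dotnet pack src/Shared --version-suffix "beta.$(Build.BuildId)"
    displayName: Pack a SemVer pre-release (e.g. 2.3.0-beta.42)
  - script: >
      dotnet nuget push "**/*.nupkg"
      --source https://nuget.internal.example.com/v3/index.json
      --api-key $(NUGET_API_KEY)
    displayName: Push to the internal NuGet feed
```

Once the feature is merged to master, the same pipeline publishes the package without the suffix as a stable release.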
That sounds like a lot of process, and it is. The point is to make sure our shared code is well tested, understood by other teams, and has appropriate extensibility points. Because the process is lengthy, we typically reserve it for larger features. For bug fixes or smaller features, a developer submits a PR directly to master and deploys once it’s merged. We still require sufficient test coverage and multiple code reviews, but nothing else. The goal is to get well-thought-out, tested code to our colleagues, not to tick boxes for the sake of process!
Since this is an internal open source project, all devs are allowed and encouraged to contribute to the repo. As an example, one of our Go developers was working on a feature that every service at Namely should utilize. They submitted a PR to our shared repo (one of the first times they had ever written C#!) and got meaningful feedback from me and a couple of other devs. Once the feedback was addressed, they were able to publish the package to NuGet, and now all our .NET services have the feature! These changes have proven successful: every .NET team (and one Go team) has contributed at least one feature to the shared codebase.
Using Open Source Practices to Write Microservices
In addition to the shared library, we use an internal open source mindset to develop new microservices. At Namely, we have not one but two large monolithic codebases that we’re actively working to break up (you all agree that’s the correct architecture for a cloud platform, right?).
We have faced numerous challenges as we rebuild our platform using a modern architecture, but one of the hardest questions to answer is: What do we build first?
Our product requirements drove the first few services: we had new features to implement, so instead of adding them to the monoliths, we built them as independent services. We’ve organized our services into verticals based on business entities. For example, my team owns the service for vertical A, but we depend on data from verticals B, C, and D, whose services are owned by other teams. In the monolith, the data we needed lived in the database, so we would simply query the appropriate table. In the microservices world, each of those verticals has a service that owns the data we need.
So we are building Service A in the microservices world, but we need data from other services, and those services don’t exist yet (or, if they do, they don’t yet have the endpoints we need). We could wait for the other vertical owners to prioritize our needs, or we could write the endpoints ourselves.
This is where the open source mindset really helps us. Instead of waiting for vertical owners to build the endpoints we need, my team can build them with the owners’ input. Namely uses gRPC and protobuf to define service contracts, so the first step is to define the new endpoint’s contract in a proto file. My team creates a PR with the endpoint, and the vertical owner reviews it with a critical eye toward the future state of their service. The goal is to build the service the correct way from the start, not just to satisfy the immediate needs of our service (and accidentally create new tech debt for another team)!
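A proto PR of this kind might look like the following sketch. The service, message, and field names here are hypothetical, not Namely’s actual contracts; the reviewable artifact is just the endpoint signature and its request/response shapes:

```protobuf
syntax = "proto3";

package vertical_b;

// Hypothetical service owned by the Vertical B team.
service VerticalB {
  // New endpoint proposed by the consuming team. The vertical
  // owner reviews it with the future state of their service in
  // mind, not just the requester's immediate needs.
  rpc GetRecordsByIds (GetRecordsByIdsRequest) returns (GetRecordsByIdsResponse);
}

message GetRecordsByIdsRequest {
  repeated string ids = 1;
}

message GetRecordsByIdsResponse {
  repeated Record records = 1;
}

message Record {
  string id = 1;
  string name = 2;
}
```

Because the contract is merged before any implementation exists, either team can pick up the work later and both sides can generate clients and stubs against the agreed shape.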
Once we’ve merged the proto, somebody has to build the feature. If the other team has already prioritized the work (or has bandwidth to fit it in), they’ll implement the new method. If not, my team will build the endpoints and submit PRs to the other teams’ repos. This lets the other team ensure the code is up to their standards and understand the changes they’re inheriting. Since they’ll own the code after we merge, they’re invested in providing appropriate feedback on our PRs, and they’re rightfully very strict about demanding full test coverage.
This has allowed my team to continue shipping new features for our product in the microservices world, while not requiring other teams to focus on our feature. Likewise, other teams can and do submit PRs for our services, and we don’t have to devote too many resources to unblock them! The open source mindset has allowed us to keep our velocity up without blocking, or being blocked by, other teams.