Leveraging Polylith to improve consistency, reduce complexity and increase changeability

Felix Barbalet
Engineering the Skies: Qantas Tech Blog
8 min read · Jan 16, 2024

Whilst you might not know “what” Polylith is, I’ll bet that the factors that influenced its adoption as our source architecture — the “why” — will be very familiar. This story explores the driving forces of code changeability, consistency and complexity and how Polylith helps to shape them.


As technology systems grow and age they accrete; they get bigger and accumulate stuff. Much of that will be valuable, but over time as the world and requirements change, parts of our systems will grow out of date.

It’s a truism that we need to be able to change our systems. Ensuring systems are easy, or rather, simple to change — that they are “changeable” — is a key objective for any system owner.

So then, what makes it simple to change a system? What is it that makes one system more changeable than another?

One of the many insights contained in the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations is that loose coupling (aka modularity) is one of the key enabling factors observed in high performing technology organisations (see pp61–64).

Loose coupling is important because it is a key requirement for things to be changeable. Things that are less coupled require less coordination or overhead to change because any particular change is isolated and small.

On the other hand, in systems that are tightly coupled — where everything is intertwined or connected — a small change in one part of the system results in a ripple effect. When one small change results in the entire system changing, it requires larger scale coordination (which slows down the pace of change) and increases risk.

So in this way, modularity — or how loosely coupled the parts of our system are — determines how changeable our system is.


Microservices are a type of modularity in the sense that a microservice could be viewed as a module within a broader system (of microservices).

While microservices are synonymous with modern infrastructure and software engineering practices, they bring their own challenges. Coordinating changes across multiple microservices can be tricky and keeping things in sync requires significant effort.

Like many corporate technology environments, Qantas has embraced a microservice architecture in many business areas. In my team, we have responsibility for more than 20 microservices that run parts of the Hotels and Holidays website for Qantas and Jetstar.

We’re not a large team, though, and one of our consistent challenges was keeping all of our services up to date and in sync.

Our services have been built over a number of years (evolved, you might say) and there was no overarching approach, nor strong conventions, guiding how they were built. This resulted in a high context-switching cost for our software engineers as, more often than not, they worked to make coordinated changes across multiple services.

The opportunity

We had a large piece of work on the horizon which we could see would benefit from improved consistency and as we explored how we might deliver it, we started discussing how we could improve both consistency and productivity at the same time. This made sense because the work was going to require changes that were spread throughout multiple key services and while we were working on those changes we could make other adjacent improvements.

I’m lucky to be part of a team which regularly discusses ideas for improvement and has a huge breadth of expertise. Polylith was on the radar for a few of us and as we discussed it we increasingly saw the potential for improving the way we build and manage our backend systems.

But what is it?


Polylith is a set of conventions and some smart tooling that has given us a new approach to software architecture:

  • It has improved the consistency of our systems;
  • It has given us better tools that we’ve used to simplify our systems;
  • It has given us faster feedback loops; and
  • It has made it easier to refactor (and therefore improve) our code.

All of which has improved our efficiency and effectiveness at delivering business value.

I won’t try to replicate the excellent material that’s on the Polylith website, but if you have a few extra minutes, I’d recommend James Trunk’s easy-to-digest introduction to Polylith. If you have 20 minutes, Joakim Tengstrand’s The origin of complexity is a detailed introduction to the philosophy behind Polylith.

If not, keep reading for some examples of how we’ve used Polylith and most importantly the benefits it’s already delivered.


Even in a microservices architecture, there are likely significant opportunities and benefits in sharing code between and across those services.

In our case we had common business parameters shared across multiple services and updating those parameters had previously been error prone and incredibly resource intensive.

Leveraging Polylith, we refactored the common parameters into a Polylith interface and reused that single interface throughout our codebase. Now we have one place where we can change the parameters consistently and have the changes deployed into production immediately.
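As a sketch of what that looks like (the component and function names here are illustrative, not our actual code): a Polylith component exposes a single `interface` namespace, and every other brick calls the component only through it, so the parameters live in exactly one place.

```clojure
;; components/booking-params/src/com/example/booking_params/interface.clj
;; The interface namespace is the only entry point other bricks may use.
(ns com.example.booking-params.interface
  (:require [com.example.booking-params.core :as core]))

(defn markup-rate
  "Return the markup rate for a brand, e.g. :qantas or :jetstar."
  [brand]
  (core/markup-rate brand))

;; components/booking-params/src/com/example/booking_params/core.clj
;; The implementation can change freely without touching any caller.
(ns com.example.booking-params.core)

(def ^:private rates {:qantas 0.12 :jetstar 0.10})

(defn markup-rate [brand]
  (get rates brand))
```

Because every service goes through the one interface, changing a parameter is a single edit rather than a coordinated change across repositories.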

But it wasn’t just consistency of our codebase that Polylith has helped us improve. When we picked up Polylith and started moving our microservices into a Polylith repo, we needed to refactor our CI/CD processes too. Despite an initial cost, this has ended up being a strong unexpected benefit of adopting Polylith. We’ve embraced opportunities to improve the consistency of our supporting tooling in the same way as we have for our code.

On the flip-side, one early pain point the team felt was that we all had to figure out patterns in our IDEs to deal with the new paradigm, including a larger codebase and more deeply nested folder structures. In the scheme of things, however, this has proved a small cost and we now have a playbook for most major development environments that we can reuse for onboarding new software engineers.

With improved consistency we’ve seen additional second-order benefits, such as reduced complexity across our services and lower context-switching costs when working across them. We’ve also seen an improved ability to scope our work, and answers to questions that rely on examining our codebase are more immediately evident because the code is all in one place.

Better tools

Polylith is a few things. It’s a set of conventions about how to compose software components (aka bricks) but — at least for the Clojure language that my team uses — there is also Polylith tooling which is built to help developers implement and leverage the power of the underlying conventions.

This tooling continues to improve and, because it’s open source and extensible, we’ve built on top of it. For example, we run our CI/CD pipeline on Buildkite and our test suites with Kaocha. We’ve extended the Polylith tool to generate CI/CD pipeline runs that leverage Polylith’s incremental testing approach: we can reason about which parts of our system have changed and run only the test suites relevant to those changes.
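To give a flavour of that incremental approach, here is a minimal sketch using commands from the `poly` tool’s documented CLI (exact output and flags may vary by version, and our pipeline wraps these rather than calling them by hand):

```shell
# Show which bricks and projects have changed since the last stable point
poly info

# Run tests only for the changed bricks and the bricks that depend on them
poly test

# Occasionally, run the full suite across the workspace
poly test :all
```

Because the tool knows the dependency graph between bricks, “what changed?” and “what must be retested?” become questions it can answer for you, rather than ones you answer by convention.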

You may already be familiar with Clojure, but if you haven’t heard of it and you like the concepts you see in Polylith, I’d recommend checking it out.

That’s because the success of Polylith relies on having a system where each component or module has well-defined boundaries and follows the encapsulation principle (cf. functional programming). In fact, many of the conventions in Polylith (which concerns itself with system-level concepts) have analogues in functional programming concepts that apply at the lower program level. Hence, for teams already working in Clojure across multiple systems, Polylith is an excellent fit.

For those teams not already working in Clojure, there are similar tools and concepts in other languages. For example, teams using Java/Spring could leverage some similar paradigms using Modulith.

Faster feedback loops

As you’re reading about Polylith, the question might arise: why not just use libraries? Libraries are the standard unit of encapsulation in most modern languages. How does using Polylith improve upon using libraries to share code, functionality and business logic between systems?

It might not be obvious at first (it wasn’t to me at least), but libraries add significant friction when making changes to a system of services. They provide a hard boundary and there is significant overhead to updating a library, updating the version referenced in the dependent services, testing the changes and redeploying the affected systems.
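A concrete way to see the difference, in `deps.edn` terms (the coordinates below are illustrative): with a shared library, every service pins a published version that must be bumped, rebuilt and redeployed separately; in a Polylith workspace the same code is a local source dependency, so a change is picked up by every project on its next build.

```clojure
;; Library approach: each service's deps.edn pins a released version.
;; Changing shared code means cutting a release, then bumping and
;; redeploying every dependent service.
{:deps {com.example/booking-params {:mvn/version "1.4.2"}}}

;; Polylith approach: a project's deps.edn references the component as
;; a local source dependency within the workspace — one change, one
;; repo, and every project sees it immediately.
{:deps {poly/booking-params {:local/root "../../components/booking-params"}}}
```

The hard versioned boundary disappears, and with it most of the release-and-bump ceremony, while the interface convention keeps the encapsulation benefit a library would have given you.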

I recently read Hillel Wayne’s The Crossover Project — a series of essays exploring the similarities and differences between software engineering and other forms of engineering such as mechanical or chemical. Hillel uncovers that one of the biggest benefits software engineering has over traditional engineering — as identified by engineers who’ve worked in both professions — is fast feedback loops.

Fast feedback loops are also one of Clojure’s superpowers — the ability to try something and immediately see the result. It’s what drives innovation, and as software engineers we are most often trying to achieve things we haven’t done before — the very definition of innovation.

Having a fast feedback loop makes us more productive and happier!

Polylith has given us tools that allow us to make changes across system boundaries and rapidly understand the impact of those changes for the system as a whole. No longer do we have to reason about our system on a piecemeal basis, we’re able to make bigger changes more rapidly and with increased confidence about the result.

Improving the quality of our systems

As we move more of the services in my team into Polylith, one of the emergent behaviours we’ve observed is that our system is beginning to embody the philosophy of Polylith.

A summary of Joakim Tengstrand’s The Origin of Complexity

The diagram above summarises the philosophy behind Polylith and it also neatly summarises the consequences we’ve seen in adopting it. With better tools and conventions, Polylith has helped us make our systems less tightly coupled while also improving their deployability and testability.

The loose coupling, in my opinion, is a result of both making our system more coherent (the “increase usability” from the above diagram) and no longer relying on brittle indirection for cross system concerns in our codebase. Polylith gives us a few simple conventions that work well and allow us to compose our services in the same way we compose functions and code.
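As an illustrative sketch of that composition (again with hypothetical names): a Polylith base wires component interfaces together in much the same way a function composes other functions, with no business logic of its own.

```clojure
;; bases/hotels-api/src/com/example/hotels_api/core.clj
;; A base composes component interfaces; the business logic lives in
;; the components themselves.
(ns com.example.hotels-api.core
  (:require [com.example.booking-params.interface :as params]
            [com.example.pricing.interface :as pricing]))

(defn quote-handler
  "Build a price quote by composing two component interfaces."
  [{:keys [brand nights nightly-rate]}]
  (let [markup (params/markup-rate brand)]
    {:status 200
     :body   {:total (pricing/total nights nightly-rate markup)}}))
```

Swapping or refactoring either component behind its interface leaves the base untouched — the same property that makes composed pure functions easy to change.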

Polylith’s tooling and conventions have also made our systems more deployable and testable. Our CI/CD processes are now more consistent, so an improvement in one place is felt across our whole collection of services — where previously we would have had to make the same change in many places. And the wisdom we’ve gained in refactoring our CI/CD process to generalise across all of our services has made it much easier to see what’s reusable and what’s most important.

Final Words

I won’t pretend we picked up Polylith expecting we would have made the progress we’ve made. The past year has been a huge learning curve for me and my team. But looking back now, I can say with confidence we’re glad to have taken the risk: it’s paid off and we’re excited to see how we can leverage this new approach to continue to improve our systems. It’s also made us more confident to explore other improvements in our systems to fight complexity and increase changeability!

This information has been prepared for information purposes only and does not constitute advice.