What I was doing wrong — dependency management and monorepo adoption

Vadym Barylo · Published in CodeX · 5 min read · Dec 3, 2021

Managing multiple dependencies in a distributed architecture is a big challenge, especially when, as Conway’s law predicts, teams grow more isolated along the lines of the architecture they produce.

Nowadays microservices are the gold standard for both front-end and back-end solutions, and nano-services are becoming popular for rapid business value delivery. This pushes delivered software toward even finer granularity and increases the demand for code reuse. Splitting macro into micro and micro into nano creates a genuinely complicated dependency graph and exponentially increases the effort spent on dependency management while the core layers are not yet fully stabilized.

Chain reaction

It is safe to conclude that the primary goal of microservices adoption is to reduce coupling and increase cohesion, so that each building block can be supported and delivered by its own team: self-sufficient in the responsibility it serves and independent of how it is consumed.

A complete UI solution can look like this:

  • it consists of a set of independent applications that share their authN/authZ state through SSO
  • each application consists of a set of sub-projects (typically each top-level route points to a dedicated sub-application) that we used to call “micro frontends”
  • each application consists of a set of modules (some of them, like the AUTH module, are usually common to all root UI applications)
  • each module consists of a set of third-party or home-grown libraries
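The layering above can be sketched as a small data model. This is purely illustrative (the names and the sample application are hypothetical), but it makes the depth of the dependency chain explicit:

```typescript
// Illustrative model of the UI composition described above:
// application -> micro frontend -> module -> library.
interface Library { name: string; thirdParty: boolean; }
interface Module { name: string; libraries: Library[]; }
interface MicroFrontend { route: string; modules: Module[]; }
interface UIApplication { name: string; ssoShared: boolean; subProjects: MicroFrontend[]; }

// A hypothetical "products" application with one micro frontend.
const products: UIApplication = {
  name: "products",
  ssoShared: true,
  subProjects: [
    {
      route: "/catalog",
      modules: [
        { name: "auth", libraries: [{ name: "oidc-client", thirdParty: true }] },
        { name: "catalog", libraries: [{ name: "core", thirdParty: false }] },
      ],
    },
  ],
};

console.log(products.subProjects[0].modules.length); // 2
```

Four levels deep, every box in this model is a candidate for its own repository and release cycle, which is exactly where the trouble starts.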

Pursuing the goal of independent feature development and a separate release cycle for each component, every block in this graph usually moves into its own repository with its own release constraints.

So far so good, until you need to upgrade the lowest library in the dependency tree:

  • a code change in the library requires producing a new artifact
  • the new version of the library must be propagated to all owning modules, and their versions bumped
  • the new versions of all modules must be propagated to all owning components
  • ….

Depending on how many layers a particular product has, this is a painful and time-consuming process; rebuilding the artifact of every layer can take significant time, especially if CI includes a comprehensive verification step.
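The fan-out described above can be illustrated with a short script. The graph below is hypothetical (package names are made up), but the walk shows how a single change at the bottom forces a release of every artifact above it:

```typescript
// Hypothetical dependency graph: each entry lists the packages that
// depend directly on the key (edges point from a library to its consumers).
const dependents: Record<string, string[]> = {
  "validators-lib": ["auth-module", "forms-module"],
  "auth-module": ["products-app", "users-app"],
  "forms-module": ["products-app"],
  "products-app": [],
  "users-app": [],
};

// Breadth-first walk upward from the changed package, collecting every
// artifact that must be re-versioned, rebuilt and re-published.
function releaseCascade(changed: string): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const queue = [changed];
  while (queue.length > 0) {
    const pkg = queue.shift()!;
    if (seen.has(pkg)) continue;
    seen.add(pkg);
    order.push(pkg);
    queue.push(...(dependents[pkg] ?? []));
  }
  return order;
}

console.log(releaseCascade("validators-lib"));
// ["validators-lib", "auth-module", "forms-module", "products-app", "users-app"]
```

One change to the lowest library means five separate releases here, and each one waits for the full CI pipeline of its repository.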

Migrating to monorepo

Looking for a way out of this dependency management hell, we found that the only acceptable option was to move back to a single repository with all dependencies linked together (similar to what we had done for back-end dependency management).

Surprisingly, the market was already ready with proven solutions that mitigate this problem more effectively, and big consulting companies had already investigated their effectiveness.

So we took NX workspaces as an experimental tool to adopt the monorepo approach in our organization.

The first step is to describe the workspace; it can look like this:

{
  "version": 2,
  "projects": {
    "products": "apps/products",
    "products-e2e": "apps/products-e2e",
    "users": "apps/users",
    "users-e2e": "apps/users-e2e",
    "core": "libs/core",
    "react-ui-components": "libs/react-ui-components"
  }
}

No matter how many libraries are used by the different units of the monorepo, it is easy to define the scope and boundaries for a particular root so it behaves as a single unit (e.g. test all libraries and owned applications whenever a change occurs anywhere in that scope).
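NX expresses such boundaries through project tags and the enforce-module-boundaries lint rule. A minimal sketch in .eslintrc.json (the scope tag names here are illustrative, not from our setup):

{
  "rules": {
    "@nrwl/nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:products",
            "onlyDependOnLibsWithTags": ["scope:products", "scope:shared"]
          },
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          }
        ]
      }
    ]
  }
}

With tags assigned to each project, an application can only import the libraries its scope allows, so the boundaries stay enforceable even as the monorepo grows.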

Then we created a separate library for each dependency and linked its repo as a sub-project using git submodules.

Scaffolding a new lib is as easy as calling nx generate @nrwl/angular:lib auth-core. There are many types of generators, from primitive node modules to complex projects.

Benefits gained after adoption:

  • all reusable dependencies remain reusable and can still be released individually (so dependent products did not experience any difficulties during this move); each can have its own release/verification procedure (called a “target” here)
{
  "root": "libs/core",
  "sourceRoot": "libs/core/src",
  "projectType": "library",
  "targets": {
    "build": {
      "executor": "@nrwl/web:package",
      "outputs": ["{options.outputPath}"],
      "options": ...
    },
    "release": {
      ...
    },
    "test": {
      "executor": "@nrwl/jest:jest",
      "outputs": ["coverage/libs/core"],
      "options": ...
    }
  }
}
> nx run core:test

Test Suites: 23 passed, 23 total
Tests:       72 passed, 72 total
Snapshots:   0 total
Time:        49.06 s
Ran all test suites.

> NX SUCCESS Running target "test" succeeded
  • at the same time, as owners and main adopters of these modules, we can access them directly without publishing anywhere, so changes to the core are reflected immediately in the layers that use it
{
  "compilerOptions": {
    ...
    "paths": {
      "@core": ["libs/core/src"],
      "@react-components": ["libs/react-components/src/index.ts"]
    }
  }
}
import { MinValueValidator } from "@core/validators";
import { PlatformSection } from "@react-components/containers";
  • as all dependencies are now local sub-projects of the main project, we can configure their dependency graph, and the build process is sensitive to dependency order (the appropriate libraries are built before the root project is built)
{
  "build": {
    "executor": "@nrwl/web:webpack",
    "outputs": ["dist/apps/products"],
    "options": {
      "index": "apps/products/src/app.html",
      "main": "apps/products/src/main.ts"
    },
    "dependsOn": [
      {
        "target": "build",
        "projects": "libs/react-ui-components"
      }
    ]
  }
}
> nx build

> NX Running target build for project "platform" and 2 task(s) that it depends on.
  • each dependency can have its own release/verification procedure (e.g. for the core library unit tests are enough, but for the UI we need to inject Storybook and run UI tests), so atomicity in library management was not violated
  • even being mono, we have a well-organized dependency structure, and this structure can easily be visualized by calling nx dep-graph
Visualize dependency graph

Generator as a nice bonus

Rapid development is possible when your workspace is organized to maximize reuse of code and practices. This dictates specific rules for organizing your application’s source structure, the abstractions to inherit from, naming patterns, etc.

Once you have those rules defined and applied, you can easily describe them as code through a code generator. Fortunately, NX comes with built-in generator support, so creating new pages, libraries, and modules can be automated, e.g.:

nx workspace-generator lib-generator auth-core

So code that is created and structured once can be moved into schematics for further reuse, and a generated module can be injected directly into the owning project or published to a package repository for linking in legacy solutions.
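Real NX workspace generators are written with the @nrwl/devkit helpers; as a dependency-free sketch of what such a generator boils down to (the file layout follows NX conventions, but the function itself is a hypothetical stand-in), scaffolding is just writing a conventional file tree:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Create the conventional lib layout: libs/<name>/src/index.ts
// plus a minimal project.json describing the new library.
function scaffoldLib(workspaceRoot: string, name: string): string {
  const libRoot = path.join(workspaceRoot, "libs", name);
  fs.mkdirSync(path.join(libRoot, "src"), { recursive: true });
  // Barrel file -- the library's public entry point.
  fs.writeFileSync(path.join(libRoot, "src", "index.ts"), "export {};\n");
  // Project definition so tooling can discover the new lib.
  const project = {
    root: `libs/${name}`,
    sourceRoot: `libs/${name}/src`,
    projectType: "library",
  };
  fs.writeFileSync(path.join(libRoot, "project.json"), JSON.stringify(project, null, 2));
  return libRoot;
}

// Usage: scaffold "auth-core" into a throwaway workspace directory.
const ws = fs.mkdtempSync(path.join(os.tmpdir(), "nx-demo-"));
const created = scaffoldLib(ws, "auth-core");
console.log(fs.existsSync(path.join(created, "src", "index.ts"))); // true
```

The devkit adds the important extras on top of this (template substitution, formatting, updating workspace configuration), but the core idea is the same: encode your structural conventions once and stamp them out on demand.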
