Microapps — the good, the bad and the ugly

Ania Mierzwinska
EcoVadis Engineering
Mar 29, 2021

Microfrontends (MFE) have been on the rise in the past few years as a welcome alternative to monolithic architecture. Below, you will find a description of the path we took at CyberVadis to add microapps (one of the MFE flavors) to our existing frontend architecture.

In this article, we’d like to present our motivations and describe the individual stages of the migration process, as well as the decisions, both better and worse, that we made along the way. We hope some of you may find it useful when working on your own projects.

Additionally, we’d like to share a recipe for fully automating the versioning of internal npm libraries that can be seamlessly integrated into your CI/CD pipeline. This is just one of many cool discoveries we made on our journey towards microapps.

Starting point

CyberVadis offers a platform for conducting cybersecurity audits and for managing the results of these audits. Both the company and the platform are relatively young and developing at a rapid pace. Back when we started considering architectural changes, we had several frontend developers in total, working in two separate teams.

The frontend part of the platform was developed from scratch as a single monolithic app. The backend, in contrast, has been based on a microservices architecture right from the start.

At the starting point, our platform architecture looked as follows:

  • frontend: a single monolithic Angular application (new features are developed by adding new Angular modules)
  • backend: domain-based microservices
  • infrastructure: the Azure cloud

The platform was relatively stable and working fine. However, with time, despite a rather small team, we began to feel the pain of issues typical of a monolith:

  • adding a new, originally unplanned feature or refactoring often required changes across the whole app
  • merge conflicts happened more and more often
  • team ownership of individual parts of the app became diluted
  • the choice of tools (libraries) had a global impact on the app
  • a bigger project means a steeper learning curve for new developers

As the above factors intensified, we became increasingly interested in trying out a microfrontend-based architecture as a potential solution to our ailments.

What are microfrontends anyway?

At this point, it’s worth noting that microfrontends are not a single, clearly defined architecture, but rather a concept that can be realized in multiple ways. If you’re interested in learning more, this article features a nice breakdown of the possible microfrontend flavors.

The first, key step was to decide on a solution that would be the best fit for our workflow. A classic approach to MFE, which involved multiple frontend apps on a single view, was not quite what we wanted. After analysing possible options against our requirements, we concluded that microapps are the way to go.

Microapps FTW!

What are microapps exactly? This flavor of microfrontends features small-scale apps, developed and run fully independently and deployed at separate URLs. Such apps are interconnected simply by links.

The microapp approach turned out to be more of an evolution than a revolution. We could leave our monolithic app untouched and implement new features as separate microapps. With time, we could also start migrating existing features to microapps. The backend structure and our auth flow (based on OAuth2.0/OIDC) were also expected to make our work easier. The learning curve would be limited to familiarizing ourselves with new frontend frameworks.

The microapps flavor offers most of the advantages of MFE:

  • parallel development of the platform by independent frontend teams is possible
  • easy refactoring
  • easy replacement of tools as needed
  • migrating legacy systems is easier — you can extract individual features to new microapps one step at a time
  • potential freedom to select any frontend framework for new apps (an option that we eventually decided against, as explained later on)

In contrast to the client-side composition flavor of MFE, with microapps there’s no need for an additional framework to bring all the components together (such as SingleSpa or Bit).

Another huge benefit from our point of view was the possibility to integrate seamlessly with our monolith: because microapps are fully independent, the integration with a new feature boiled down to adding a link to the microapp (after including microapps in our auth management system).
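
To give a rough idea of what this looks like in practice, a new microapp can be exposed in the monolith as nothing more than one extra navigation entry pointing at its own URL. The sketch below is purely illustrative; the names and the URL are made up, not taken from our codebase:

// navigation.ts in the Angular shell (illustrative sketch, not our actual code)
export interface NavItem {
  label: string;
  route?: string;       // internal Angular route for a feature living in the monolith
  externalUrl?: string; // absolute URL of an independently deployed microapp
}

export const NAV_ITEMS: NavItem[] = [
  { label: 'Audits', route: '/audits' },                                 // monolith feature
  { label: 'Reports', externalUrl: 'https://reports.platform.example' }, // React microapp
];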

Naturally, the above setup was the best option for our project and company at their current stage of development (meaning a small-scale project, relatively small teams, and good communication).

Implementation phase

A word of explanation: microapps offer total freedom to choose any technology for individual apps in the system. But such freedom comes at a price: competence gaps may appear and widen within the dev team if the stack grows uncontrollably (a deteriorating bus factor). Reusing components or logic among apps also becomes more complicated when multiple frameworks are involved.

In view of the above, we decided to start small and limit our microapp stack to the following:

  • React + TypeScript
  • Redux (for more complex apps)
  • axios
  • Material UI, StyledComponents
  • Jest, React Testing Library

With a unified toolset, shared parts of the platform were easy to manage. Reusable components and logic were extracted to internal npm libraries, for which we used the following mix:

  • TSdx
  • React
  • automated versioning (semver, semantic release)
  • Storybook
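
To make this more concrete, below is a rough sketch of the kind of component such a shared library might export, using the stack listed above. It is an invented example, not actual CyberVadis code:

// shared UI library (illustrative sketch only)
import React from 'react';
import styled from 'styled-components';

// basic styling shared by all microapps
const StyledButton = styled.button`
  padding: 8px 16px;
  border-radius: 4px;
`;

export interface AppButtonProps {
  label: string;
  onClick: () => void;
}

// a hypothetical button component reused across microapps
export const AppButton: React.FC<AppButtonProps> = ({ label, onClick }) => (
  <StyledButton onClick={onClick}>{label}</StyledButton>
);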

In the end, our frontend architecture looked as follows:

  • the Angular app
  • React microapps (for developing new and migrating existing features)
  • shared components and reusable logic extracted to internal npm libraries (TSdx, Storybook)

Reality check

So how did it all work out for us in practice? Which decisions turned out to be spot on and which not so much?

Microapps have met an impressive share of our expectations. Above all, the development of small and independent projects has turned out to be fast and sweet. Feature ownership by individual teams has become clearer, which also boosted development speed.

However, the above is true at the stage when a microapps system is fully set up and the first elements (the apps themselves, the infrastructure and shared libraries) are up and running. Arriving at this stage took us a fair bit of work:

While the individual steps on the way were not too complicated, there were many of them. We had to learn the new frontend stack (React), configure it, learn to create and consume npm libraries and, last but not least, take care of the infrastructure needed. All of that added up to significant complexity which, in turn, slowed us down in the early stages of the process.

Taking care of the infrastructure turned out to be a whole separate project. Microapps are 100% independent, which means independent infrastructure for each app. In our multifunctional team, this translated into a significant investment in devops know-how on the part of the frontend team.

There were also some issues that persisted beyond the teething-problems stage. Duplication could not be fully avoided, despite extracting a large part of the shared components and logic to libraries. Apart from the duplication of code for such core functionalities as translations or HTTP handling, various configurations had to be repeated over and over again.

Regarding splitting responsibilities among the apps, our current structure is more feature- than domain-based. At the moment, this is not a blocker, but it is definitely an area where we would consider changes in the future.

1.5 years later

From the perspective of the past 1.5 years of developing and maintaining microapps, we can try to answer a few key questions:

If we had a chance to change the past, would we decide to go the microapp way again?

Yes, definitely!

What are the biggest benefits we gained?

  • at the risk of repeating ourselves, flexibility and clear ownership (enabling parallel development) have repeatedly proven to be among the greatest gains
  • knowledge (!) we acquired in the transition process — apart from hands-on competence in two frontend frameworks, we now have a frontops team (frontend + devops :) )
  • simplified migration of legacy features (one feature = one microapp)
  • simplicity of microapps as measured against some other MFE flavors

What would we recommend if you decide to go the microapp way?

  • Choosing the right time. Microapps are a great solution if your requirements include independent parallel development (that is, a large number of developers or teams working on the same project). If your business is not yet at this stage, implementing microapps may slow down development work, while the benefits may not be realized for a long while. In contrast, if your product or company is headed for fast growth, microapps are definitely a path worth considering.
  • An iterative approach to problem-solving. Rather than aiming for instant perfection, make liberal use of proof of concept projects and spikes.
  • Extracting shared code to internal libraries.
  • Automation (testing, CI/CD, basically everything you can)

Bonus! 🎁 Our recipe for semver automation

Last but not least, we would like to share with you one of the many cool outcomes of our path towards microapps. Specifically, we will show you how to easily automate npm package publication with correct versioning, reducing the related workload for developers to virtually zero.

The key role in the automation process is played by semantic-release. This tool tracks all changes on a selected branch and calculates a new library version based on the previous version and on any commit messages recorded since that last version. The new version is then published to your package repository.

The versions themselves are based on the Semantic Versioning standard.

A config file for semantic-release looks something like this:

// .releaserc
{
   "repositoryUrl": "https://__USER__:__GIT_SECRET__@<REPO_URL>",
   "branches": [
      "master"
   ],
   "plugins": [
      "@semantic-release/commit-analyzer",
      "@semantic-release/npm"
   ]
}

In the example above, semantic-release is tracking the master branch in the repository specified under repositoryUrl.

When this branch is modified, the @semantic-release/commit-analyzer plugin calculates the next version of the library. @semantic-release/npm then takes care of publishing the package with the correct version.

Let us look in greater detail at how semantic-release calculates the new library version. The logic is based on a strictly defined format of commit messages. By default, the format used is the Angular Commit Message Conventions, which makes it possible to define the release type and version bump with precision.
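
For illustration, with the default settings the mapping works roughly as follows (the example messages below are invented):

// example commit messages and the releases they would trigger
fix(auth): handle expired session token    ->  patch release (e.g. 1.2.3 to 1.2.4)
feat(reports): add CSV export              ->  minor release (e.g. 1.2.3 to 1.3.0)
feat(api): rework the results endpoints
BREAKING CHANGE: v1 endpoints are removed  ->  major release (e.g. 1.2.3 to 2.0.0)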

This flow can be easily integrated with any CI/CD pipeline:
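
As a minimal sketch (the script below is an assumption, not our exact pipeline configuration), it is enough to expose semantic-release as an npm script and have the CI job run it on the master branch once the build and tests succeed. semantic-release then decides by itself whether a new version needs to be published.

// package.json (minimal sketch)
{
   "scripts": {
      "semantic-release": "semantic-release"
   }
}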

And that’s it! Right? Not really.

Traditionally, the weakest link of any system is the human factor. In this particular case, the human is a developer who may forget to follow the convention or mix up commit messages. They may not even be aware of the possibilities offered by the Angular Commit Message Conventions and thus fail to utilize their potential.

How can we prevent it?

Git hooks to the rescue! Using husky, we can add hooks to ensure that commit messages are properly formatted. Below is the part of the package.json config with the setup that does precisely that:

// package.json
{
   …,
   "husky": {
      "hooks": {
         "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
      }
   }
}

This config registers a Git hook at the commit stage. The commit message is then validated by commitlint, which is configured to catch any mistakes. This simple setup is completed by adding the @commitlint/config-conventional shared config:

// commitlint.config.js
module.exports = {
   extends: ['@commitlint/config-conventional']
};

Finally, we can also add a helpful tool for the developers who will be working on the project, to make writing proper commit messages easier. All that is needed is an interactive CLI tool: git-cz.

We add an npm script:

// package.json
{
   "scripts": {
      "commit": "npx git-cz",
      …
   }
}

And when we run it, we will see an interactive menu that lets us easily select the commit type and compose a proper commit message.

The mechanism described above greatly simplifies the versioning and publication process compared to the manual way:

  • the workload needed to publish a package is minimal
  • no versioning conflicts in the case of parallel development — the proper version is calculated automatically
  • interactive commit tool speeds up developers’ work
  • any errors will be caught at the commit stage

At CyberVadis, we bet on code quality and knowledge sharing. This article was written by our whole frontend team, with contributions from many engineers outside the frontend. If you’d like to talk to us about microapps, automation or basically anything else, please contact us!
