Optimizing the feedback loop: a key to great developer productivity

Cezar Craciun
Hootsuite Engineering
10 min read · Mar 31, 2023

All teams aspire to move fast with confidence. As you add more features, deploy more often, and expand your team, everyone needs to be confident that changing the code will not break the product for the users. Using monorepos, microfrontends, and faster tooling can speed up your iteration process while still maintaining high confidence in your work.

This article explores the importance of a robust feedback loop in software development, and how new tools can help optimize it. Specifically, we take a look at how Hootsuite Analytics addressed the challenge of maintaining a tight feedback loop by adopting an unbundled development server, modularizing its codebase, and enabling independent deployments.

The feedback loop

If you have maintained any large codebase, you probably know this by now — having correctness checks and tooling acting as safety nets is what gives us the confidence to deploy often without worrying about faulty code.

The more and faster you want to change your systems, the more you need a fast way to test them.

If that’s the case, then we should add as many automated checks as possible to give us feedback: whether something is correct or not, we should know it. And ideally, we should run those checks as often as possible, so that we don’t stay on a bad track for too long. By doing so, we’re in fact creating a feedback loop between the developer and the tooling.

Figure 1. The developer feedback loop

Shifting left

One trend that is still going strong is developers relying more and more on static code analysis tools to improve the quality and maintainability of their code. These tools catch many mistakes and potential issues before the code reaches production.

“Shifting left” means finding problems earlier in the development process, which is more cost-effective than fixing them later. The concept originated in the security space, but it applies to all aspects of software development, from architecture decisions to bug fixing. The earlier in the workflow an issue is detected, the easier and cheaper it is to resolve.

Figure 2. The cost of fixing issues in the development process

Frontend developers have come to appreciate the benefits of static type checking so much that there is a TC39 proposal to add type annotations directly to the ECMAScript specification. At Hootsuite, we are also big fans of type safety.
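As a small illustration of what the compiler buys us (the function and type below are hypothetical, not from the Hootsuite codebase), a malformed call is rejected before the code ever runs:

```typescript
// Hypothetical metric type — the compiler enforces its shape at every call site.
interface Metric {
  name: string;
  value: number;
}

function totalValue(metrics: Metric[]): number {
  return metrics.reduce((sum, m) => sum + m.value, 0);
}

// totalValue([{ name: "likes" }]) would fail to compile:
//   Property 'value' is missing in type '{ name: string; }'
const total = totalValue([
  { name: "likes", value: 12 },
  { name: "shares", value: 3 },
]); // 15
```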

Linting is also adopted as an essential part of how we write code. If you use React, you might be familiar with the official ESLint rules for hooks. These are not stylistic recommendations but rather warnings that prevent you from shipping bugs.
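To see why these rules matter, here is a toy model of hook state — emphatically not React’s real implementation — showing how hooks rely on a stable call order between renders:

```typescript
// Toy hook runtime: state lives in positional slots, claimed in call order.
let slots: unknown[] = [];
let cursor = 0;

function useToyState<T>(initial: T): [T, (v: T) => void] {
  const i = cursor++; // each hook call claims the next slot
  if (slots[i] === undefined) slots[i] = initial;
  return [slots[i] as T, (v: T) => { slots[i] = v; }];
}

function render<T>(component: () => T): T {
  cursor = 0; // every render replays the hooks in the same order
  return component();
}

let setCount: (v: number) => void = () => {};
function Counter(): string {
  const [count, set] = useToyState(0);   // slot 0
  const [label] = useToyState("clicks"); // slot 1
  setCount = set;
  return `${label}: ${count}`;
}
// If the first useToyState call were wrapped in an `if` and skipped on a
// re-render, `label` would silently read slot 0 — the bug the lint rule blocks.
```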

Developers love these tools not only because they catch bugs, but also because they are integrated as closely as possible to the code, right in the editor. Those squiggly red lines tell you if something is not right as you type your code, sometimes even giving you fixes one click away. The feedback loop is as tight as possible; that’s perfect.

Progress bit by bit

Have you ever written a complete feature, 1000 lines of code, without manually testing how it behaves, such as viewing the UI in a browser or verifying that your endpoint is functioning properly and returning data? If your answer is yes, dear person, I’m truly concerned for you.

While static code analysis is helpful, it can only take you so far in ensuring code quality. Writing code is an iterative process that involves writing a portion of code, testing it, refining it if necessary, and repeating the cycle.

If you’re a front-end developer, you’re probably familiar with development servers. In the past, Webpack was the most popular tool for this purpose. At Hootsuite Analytics, we used Webpack for bundling and serving the local development environment. However, we recently switched to Vite, a tool that has gained popularity among developers.

With Vite, we can see the results of our code changes on the development server in less than a second, compared to the five seconds it used to take with Webpack. This significantly shortens our feedback loop and allows us to work more efficiently.

When combined with Hot Module Replacement (HMR), which swaps files in the browser without triggering a full page reload, and the Fast Refresh enhancement, which preserves component state between updates, Vite truly feels like instant magic.

Figure 3. Fast refresh and HMR instant updates
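For reference, a minimal Vite setup with React Fast Refresh can be as small as this (a sketch using the official React plugin; our actual configuration is more involved):

```typescript
// vite.config.ts — minimal sketch
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()], // HMR with Fast Refresh is enabled out of the box
});
```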

Why is Vite so fast? It’s because it uses an unbundled dev server compared to the bundled Webpack dev server. This is possible because modern browsers natively support ES modules.

With Vite, any time you change a file, only that single file is rebuilt and then cached indefinitely. This allows your application to be progressively updated bit by bit. In contrast, bundled dev servers require every change to be re-bundled with the rest of your application before your changes can be reflected in your browser.

Using Vite ensures that your dev server remains speedy as your codebase grows, whereas Webpack and other bundled dev servers tend to slow down.

Figure 4. Webpack vs Vite build

It’s never a good idea to rely solely on manual testing for your software features. If you don’t write automated tests, such as unit tests or end-to-end tests, it becomes easier for someone else (or even yourself) to break the code later on. By writing automated tests, you can ensure that your code remains reliable and free of bugs.

Furthermore, automated tests can serve as documentation for your code, helping the next engineer understand its specs. Here at Hootsuite, we pay as much attention to our tests as we do to the production code itself.

However, writing tests comes with trade-offs. Having too many tests can slow down your CI pipeline and become a bottleneck, and waiting for your test runner to check the entire project can become impractical. For example, the Hootsuite Analytics front-end app has around 850 test suites totaling approximately 5,200 unit tests, and the number keeps growing as we add more features to the product. End-to-end tests are even slower, as they require a full browser to run.

There are many other possible feedback providers that you can incorporate into your workflow. However, it is often unnecessary to verify your entire codebase when making changes. This is where modularization and running only the checks needed by the affected code in pull requests can help you save time and make the process more efficient.

Figure 5. Example of feedback providers for frontend developers

Monorepos: modularize and cache

As a successful product evolves, its codebase inevitably grows as more features are added. The product’s complexity increases and checks are added to prevent bugs or performance regressions. However, this can lead to a new problem: slower CI feedback, which can negatively impact productivity.

Monorepos have become increasingly popular in recent years and are now the preferred solution for streamlined and efficient code management for many organizations. One reason for this popularity is their ability to promote code sharing and modularization, as well as the ability to use tooling to improve workflows by running checks only for changed code.

The first step in creating a monorepo is to establish clear dependencies. For a large application, it’s best to divide it into modules. Determining which code should be placed in each module can be challenging, but a common approach is to identify your product verticals and use them as a starting point.

For the Hootsuite Analytics front-end application, modules are represented by the sections in the navigation sidebar. These are called “feature modules”. Additionally, there are two special modules: the “app” module, which connects all modules and handles routing and user data, and the “common” module, which contains shared components and functions.

All modules should be self-contained. They should not know about their parent nor care about who uses them. Maintaining encapsulation is crucial in controlling complexity and avoiding circular dependencies.

The Hootsuite Analytics team enforced encapsulation by establishing import boundaries through a custom ESLint rule. Later, when TypeScript introduced project references, we switched to using them. We ended up with something that looks like this:

Figure 6. Analytics modules graph (the real one also contains multiple small highly reusable packages that for the sake of simplicity are hidden in this example)
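In this setup, each module’s tsconfig declares which projects it may depend on; a sketch of what that looks like (paths and names are illustrative, not our real layout):

```json
// modules/industry-benchmarking/tsconfig.json (illustrative)
{
  "compilerOptions": {
    "composite": true,  // allows other projects to reference this one
    "outDir": "./dist"
  },
  "references": [
    { "path": "../common" } // importing a module not listed here is an error
  ]
}
```

With composite projects, `tsc --build` walks this reference graph, rebuilds only what is out of date, and rejects imports that cross undeclared boundaries.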

Notice how there is a clear direction in how these modules depend on each other. If I had to change the industry-benchmarking module, I would need to build the common module as its dependency. And because app uses industry-benchmarking, I would also need to build app to check whether my feature-module changes broke the integration. However, building the app module requires building all the other feature modules as well, which can slow down the process. To overcome this, we can cache the feature modules: since they did not change, their outputs can be reused.

Caching allows us to save the output of tasks, so we don’t have to waste time and resources recomputing them. This means that on the same machine, we can avoid repeating the same build or test processes multiple times.

Using a cloud-distributed cache can maximize your team’s workflow efficiency even further. If a command has already been executed by a team member or the CI, there’s no need for others to repeat the process. This can save each developer several minutes per day, resulting in hours or even days of regained productivity for your organization, depending on its size.
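A minimal caching setup in Nx can be sketched like this (the exact schema varies between Nx versions, and this is not our actual configuration):

```json
// nx.json (sketch)
{
  "targetDefaults": {
    "build": {
      "cache": true,          // reuse outputs when a project's inputs are unchanged
      "dependsOn": ["^build"] // build a project's dependencies first
    },
    "test": { "cache": true }
  }
}
```

Pointing the same cache at a shared remote (e.g. Nx Cloud) is what lets a result computed by one developer, or by CI, be reused by everyone else.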

Using monorepos offers many benefits. The best part is that you get to choose which features to use. Different monorepo tools address different problems. At Hootsuite Analytics, we use Nx for our TypeScript codebase. With Nx, we have been able to cut some check times in half by running only affected code!

Figure 7. Getting the duration of our CI checks down to half by running only affected code (the results can differ based on how much code you are changing in your pull request)

Keeping track of all the dependencies in your code can be confusing. However, with the right tools, it can be made easier. A modular codebase allows for the continued growth of your project while retaining the ability to easily share code. These tools help us understand the relationships between different parts of our code and keep everything organized.

Microfrontends and organizations

In computer science, there are usually two ways to improve performance: caching or parallelization. So far, we’ve seen the benefits of having a modular app: we are able to compose multiple modules at build time and cache as much as possible.

Despite utilizing caching and modularization, slow deployment velocity can still occur, often due to organizational issues. If coordinating and orchestrating deploys across teams becomes challenging, it may be an indication that a microfrontend architecture should be considered.

Teams value the freedom to choose their libraries, code writing style, and data handling methods. In large organizations, team autonomy is highly valuable.

Microfrontends can take on various forms, such as entire pages or small fragments. They can be combined during the build process using npm packages, or loaded dynamically at runtime. Communication between these microfrontends can be established through props, web platform APIs like local storage, or more decoupled solutions like a pub-sub message bus.

When it comes to using microfrontends, it’s crucial to minimize communication and data sharing as much as possible. This will help avoid coupling between your different apps. If you’re not careful, you could end up with a complex and difficult-to-maintain distributed monolith instead of a well-organized microfrontends architecture.
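A decoupled pub-sub bus of the kind mentioned above can be sketched in a few lines (a simplified illustration, not Hootsuite’s actual message bus):

```typescript
type Handler = (payload: unknown) => void;

// Microfrontends share only topic names — never direct references to each other.
class MessageBus {
  private topics = new Map<string, Set<Handler>>();

  subscribe(topic: string, handler: Handler): () => void {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set());
    this.topics.get(topic)!.add(handler);
    return () => { this.topics.get(topic)?.delete(handler); }; // unsubscribe
  }

  publish(topic: string, payload: unknown): void {
    this.topics.get(topic)?.forEach((handler) => handler(payload));
  }
}

const bus = new MessageBus();
bus.subscribe("date-range:changed", (range) => {
  // e.g. the analytics microfrontend refetches its reports here
  console.log("date range is now", range);
});
bus.publish("date-range:changed", { from: "2023-01-01", to: "2023-01-31" });
```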

Hootsuite is a dashboard app with a variety of features. To manage its code, it uses a runtime composition pattern that breaks it down into smaller apps, known as async apps. The Analytics section of Hootsuite is a standalone application, but its modular design means that we can break it down into smaller, individually deployed microfrontends if necessary.

Figure 8. Hootsuite dashboard and microfrontends
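The runtime composition idea can be sketched like this (the app names and the static loader map are hypothetical; a real host would dynamically import() separately deployed bundles):

```typescript
type AsyncApp = { mount: (el: { innerHTML: string }) => void };

// In a real host each entry would be a dynamic import() of a deployed bundle;
// a static map keeps this sketch self-contained.
const loaders: Record<string, () => Promise<AsyncApp>> = {
  analytics: async () => ({
    mount: (el) => { el.innerHTML = "Analytics loaded"; },
  }),
};

// The host lazily loads a microfrontend only when its section is visited.
async function mountAsyncApp(name: string, el: { innerHTML: string }): Promise<void> {
  const app = await loaders[name]();
  app.mount(el);
}
```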

The Analytics application is a weak dependency for the dashboard host. This is due to a minimal interface and reduced inter-app communication. As a result, we can deploy multiple times a day without worrying about breaking other parts of the product.

By simplifying what we need to pay attention to, we can speed up our development process and avoid unnecessary builds and deployments. This makes our work smoother and more efficient.

Conclusion

We’ve seen how optimizing builds can lead to faster iterations, how encapsulating code into self-contained features enables us to cache tasks and avoid building the same code repeatedly, and ultimately how modularization allows us to decouple even further through independent deploys, granting teams more autonomy over their codebase.

A fast and robust feedback loop is key to ensuring that software development is not only adaptable to change but also quality-driven. Prioritizing these aspects, which allows for faster iterations, enables teams to continuously improve and deliver better products to end users.

Do you have any tips for optimizing your feedback loop and promoting faster iteration? We would love to hear about your experiences, so please share your thoughts in the comments below and let us know if you found this article helpful.

Cezar Craciun
Senior Software Engineer on the Analytics team at Hootsuite