The cost of tracking cost — How we built the frontend for Cost Tracker

Kiran Gopalakrishnan
Tech at Tempo
11 min read · Nov 2, 2020

Montreal was still reeling from the grey skies of a harsh winter when I first walked through the doors of Tempo Software. In April 2018, I started as a front-end developer on team Westfalias, a development team building a cloud counterpart to Tempo Budgets, which would later be named Cost Tracker.

In my previous job, I built voice-recognition-enabled AI assistants for the automotive industry on Android Automotive OS. It was a very hardware- and engineering-focused environment, with little front-end or JavaScript work, as we wrote most of our code in Java and C.

When I started, Cost Tracker was still in its infancy. There was only a single page consisting of a button to create a project and a modal to input a project name. It was also part of the front-end monolith that included the rest of Tempo’s cloud offerings: Timesheets, Planner, Accounts, Teams, Reports, etc.

After working with the existing codebase for a few months, it became increasingly apparent that it would be beneficial for Cost Tracker to be a module on its own and break away from the monolith.

We wanted Cost Tracker to be simple, consistent, and maintainable. Moving away from the rest of the front-end monolith would allow us to build it with our own tech stack, one that could differ from the rest of the frontend, which mainly consisted of ReactJS, Redux for state management, ImmutableJS, and a Django Python server that serves up JS bundles.

And break away we did.

We moved Cost Tracker into its own module with Typescript & React as the main stack and a Flask server that serves up the JS bundles.

Part of the reason we moved it out of the monolith was that we wanted to engineer it rather than just develop it. The difference is that the latter is concerned mainly with implementation details and usually lacks a coherent structure. In contrast, engineering requires that you design, architect, and consider the system holistically as you build it.

This initial idea led us to question what we knew about frontend development at the time, and subsequently led us to ask what if we had…

  1. No Redux for state management.
  2. A self-explanatory component architecture.
  3. Services for managing business logic.
  4. Flat data models over complex and nested structures.
  5. Tests that test what we see.
  6. Readability & simplicity over optimizations.

As we discussed these ideas more, we started to gain a clear picture. Allow me to explain some of these decisions in a bit more detail and with some much-needed context.

As a side note, since I will be talking a lot about Cost Tracker, if you are unfamiliar with Cost Tracker or Tempo in general, make sure that you check us out at https://www.tempo.io.

An overview of Cost Tracker

No Redux for state management

Redux (and the Flux architecture) was built to solve problems that occur in large-scale applications. Facebook is a good example of a situation where you would need something like Redux: the sheer scale of the application makes it next to impossible to determine the next state and debug issues without it.

When Redux launched in 2015, React’s context API was not usable, as it was only intended for internal use. Redux was very helpful at the time since it got rid of prop drilling when you wanted to share data between different component trees.

But the world has changed since 2015; Redux has too much boilerplate nowadays and, in my opinion, is a little over-engineered. The React Context API is now officially available and is a good alternative to Redux, and an action-reducer pattern is readily available via the useReducer hook. We don’t need Redux for that anymore.
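As a minimal sketch of that action-reducer pattern, a fully typed reducer can be written as a plain function and handed to React's useReducer hook. The project-loading state below is a hypothetical example for illustration, not one of Cost Tracker's actual models:

```typescript
// A hypothetical typed action-reducer pattern for a project-loading state.
// In a component this would be wired up with:
//   const [state, dispatch] = useReducer(projectReducer, initialState);
type ProjectState = { loading: boolean; name: string | null; error: string | null };

// A discriminated union of actions gives exhaustive, typesafe handling.
type ProjectAction =
  | { type: 'FETCH_START' }
  | { type: 'FETCH_SUCCESS'; name: string }
  | { type: 'FETCH_FAILURE'; error: string };

const initialState: ProjectState = { loading: false, name: null, error: null };

function projectReducer(state: ProjectState, action: ProjectAction): ProjectState {
  switch (action.type) {
    case 'FETCH_START':
      return { ...state, loading: true, error: null };
    case 'FETCH_SUCCESS':
      return { loading: false, name: action.name, error: null };
    case 'FETCH_FAILURE':
      return { ...state, loading: false, error: action.error };
  }
}

// The reducer is a pure function, so it can be exercised without React at all:
const afterStart = projectReducer(initialState, { type: 'FETCH_START' });
const afterSuccess = projectReducer(afterStart, {
  type: 'FETCH_SUCCESS',
  name: 'Website Redesign',
});
console.log(afterSuccess.name); // "Website Redesign"
```

Because the reducer is pure, tests can call it directly with actions and assert on the returned state, with no store or middleware involved.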

Most companies and teams don’t have the large-scale issues that Redux solves, nor are they building apps of a scale that justifies having Redux.

We weren’t either, so we decided instead to have a simple homemade HOC (Higher-Order Component) solution that provided the same APIs Redux did, but much simpler and without the boilerplate.

We could always add Redux later if we encountered those problems that Flux and Redux can solve.

After all, we should develop pragmatically while testing and refactoring meticulously; from this, an architecture will emerge that will automatically point us towards Redux once the time comes.

We have yet to encounter something that required Redux, and we are gradually migrating from our HOC solution to a purely Context-powered, hook-based state management solution that is completely typesafe (based on one of my personal projects, Recon: https://github.com/KiranGopalakrishnan/Recon).

A self-explanatory component architecture

A big part of the front-end architecture can be made visible by having a directory structure representing it.

The old common, components, reducers structure is outdated and not very representative of an app's overarching architecture.

We decided to adopt a directory structure that would be self-evident of our component architecture. The idea was that you could take a look at the directory structure and accurately predict where certain logic would be.

The ideology behind the directory structure accurately reflected the ideology behind the architecture.

Each route in Cost Tracker is called a View. All Views are represented by their own folders under a common folder called views. Each View has one entry point: a component with the same name as the View. For example, for the following route:

/:project-id/scope

we have a folder called Scope under views, and the entry point is a component named Scope in a file called Scope.tsx.

Each View consists of features; a single feature is a self-contained tree of components. Each View has multiple features that make it up, and all features reside under the features folder in their respective Views.

These features have no access to other Views or components, except for common components shared by all Views and components shared by features within the View they are defined in. This is also the only layer where components are connected to the global state.

Common components were moved to a common folder and were shared by multiple Views; all the common components shared by features inside a View were moved to a common folder inside their respective features folders.
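Put together, the layout described above looks roughly like the following tree. The Scope View and its entry point come from the text; the individual feature folder names here are hypothetical examples, not Cost Tracker's real ones:

```
src/
  views/
    Scope/                 entry point for the /:project-id/scope route
      Scope.tsx
      features/
        common/            components shared by features within Scope
        ScopeTable/        a self-contained feature: its own component tree
        ScopeImport/       another feature, isolated from ScopeTable
    ...                    one folder per View
  common/                  components shared across all Views
  services/                business logic, one file per domain
```

Reading the tree top-down answers "where does this code live?" without opening a single file, which is exactly the self-evidence the structure was designed for.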

Cost tracker’s directory structure

This structure made sure that the code wasn’t intertwined, and tracking a change's impact became easy: we could make a change in a feature and confidently assume that the change would be self-contained within the View.

This also ensured that component relationships are predictable, and it promoted keeping state local and lifting it when required instead of moving it to the global state.

This not only brought consistency across our codebase, but it also made it easier to find where a certain piece of code is located, saving us a significant amount of time when we had to fix a bug or trace a bottleneck in our app.

This had some exciting implications for testing features vs. Views as well. We came up with a testing system that gave us unit tests for the components used by features and integration tests for features as a whole. This let us cover features and the interactions a user would have with them, though it still needs to be fleshed out; we are not using it as much as we should.

Having integration tests on the front end is very beneficial: unit tests only cover individual components, while end-to-end tests are costly, require a lot of setup, and even then mostly test happy paths. Integration tests allow us to test a feature as a whole. They can also work as documentation for the feature, provided you cover all the interactions a user would have with it.

Services for managing business logic

Another decision was to move business logic into files under a services folder. A service is a file representing a domain, and it would contain all the logic and interactions within that domain.

For example:

ScopeService would handle retrieving the project's current scope, importing a scope and associated values, etc.

If you are unfamiliar with what scope is in Cost Tracker, then you can read more about it here: https://tempo-io.atlassian.net/wiki/spaces/THC/pages/473137593/Updating+a+Project+Scope
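A service module of this kind can be sketched as a handful of single-responsibility functions plus a composed action the View invokes. The ScopeItem shape, the fetchScope API, and the summary action below are hypothetical illustrations, not Cost Tracker's real code:

```typescript
// Hypothetical sketch of a ScopeService module: small single-responsibility
// functions, composed into an action that a View can invoke.
interface ScopeItem {
  id: string;
  name: string;
  hours: number;
}

// The data-fetching dependency is passed in, keeping the logic testable.
type FetchScope = (projectId: string) => Promise<ScopeItem[]>;

// Single responsibility: retrieve the current scope for a project.
async function getScope(fetchScope: FetchScope, projectId: string): Promise<ScopeItem[]> {
  return fetchScope(projectId);
}

// Single responsibility: derive the total tracked hours from a scope.
function totalHours(items: ScopeItem[]): number {
  return items.reduce((sum, item) => sum + item.hours, 0);
}

// Composition: the action a View would call. The service, not the View,
// owns the business logic and the shape of the result.
async function getScopeSummary(fetchScope: FetchScope, projectId: string) {
  const items = await getScope(fetchScope, projectId);
  return { items, total: totalHours(items) };
}

// Usage with an in-memory stand-in for the real API:
const fakeApi: FetchScope = async () => [
  { id: 'CT-1', name: 'Design', hours: 12 },
  { id: 'CT-2', name: 'Build', hours: 30 },
];

getScopeSummary(fakeApi, 'demo-project').then((summary) => {
  console.log(summary.total); // 42
});
```

Injecting the fetch function is one way to keep the service's logic testable without a running backend; the same sketch works with a module-level API client instead.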

This is what the scope view looks like in Cost Tracker.

Scope View in Cost Tracker

And this is what the architecture of the Scope View looks like:

A simplified version of our component architecture for Scope View

ProjectService is another service that deals with updating the project settings, retrieving a project when requested, etc. The services also dispatch the results of these operations when an update to the global state is required.

The functions within the services follow the single-responsibility principle, and composing these functions together enables us to execute a specific action that can be invoked from the View.

This made our application's business logic independent of the reducers and the global state, made the business logic easier to test, and ensured that the logic for updating the application's state was separated from the actual business logic and its API interactions.

Flat data models over nested structures

From previous experience and my open-source experiments in JavaScript, it has become clear that building a maintainable app has a lot to do with how its data is modeled.

It has always been my opinion that if you separate your data from your Views and really map out all the data that your frontend has or needs, certain patterns emerge. These patterns will lead us to define relationships between our data models. By understanding the underlying data relationships, we can make an informed decision on how we should approach the architecture of the application that we are building.

By constantly deconstructing our data models, we were able to ask whether the data in our app made sense, and whether a given piece of data belonged in the global state or was more of a local concern of a View.

We were also able to interrogate the benefits of keeping a piece of data in the global state instead of a local one, and to ask how it related to the rest of our application's data. These questions allowed us to keep our data models simple and helped us avoid nested structures.

The risk of a complex, nested data model is that accessing the structure and composing the data requires complex logic. By modeling our data to be flat, accessing data becomes a single-level concern, and composing said data can be done easily.
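The contrast can be shown with the same data modeled both ways. The project/scope/expense entities below are hypothetical stand-ins, not Cost Tracker's real models; the flat version keys each entity by id and expresses relationships as id references, in the spirit of a normalized store:

```typescript
// Nested: reading one expense means traversing project -> scope -> expenses.
interface NestedProject {
  id: string;
  scope: { items: { id: string; expenses: { id: string; amount: number }[] }[] };
}

const nested: NestedProject = {
  id: 'p1',
  scope: {
    items: [
      { id: 's1', expenses: [{ id: 'e1', amount: 100 }, { id: 'e2', amount: 250 }] },
    ],
  },
};
const nestedAmount = nested.scope.items[0].expenses[1].amount; // deep traversal

// Flat: each entity keyed by id, relationships expressed as id references.
interface FlatState {
  projects: Record<string, { id: string; scopeItemIds: string[] }>;
  scopeItems: Record<string, { id: string; expenseIds: string[] }>;
  expenses: Record<string, { id: string; amount: number }>;
}

const state: FlatState = {
  projects: { p1: { id: 'p1', scopeItemIds: ['s1'] } },
  scopeItems: { s1: { id: 's1', expenseIds: ['e1', 'e2'] } },
  expenses: {
    e1: { id: 'e1', amount: 100 },
    e2: { id: 'e2', amount: 250 },
  },
};

// Access is a single-level lookup rather than a deep traversal:
const expense = state.expenses['e2'];

// Composition stays simple too:
const projectTotal = state.scopeItems['s1'].expenseIds
  .map((id) => state.expenses[id].amount)
  .reduce((a, b) => a + b, 0);
console.log(nestedAmount, expense.amount, projectTotal);
```

Updating a single expense in the flat model is also a one-key write, whereas the nested model forces you to rebuild every layer above it when state is treated immutably.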

To quote Linus Torvalds:

“Bad programmers worry about the code. Good programmers worry about data structures and their relationships.”

Test what we see

Testing has always been a controversial subject in front-end development. Enzyme, the testing library from Airbnb, was always too focused on implementation details; the front-end landscape desperately needed something that would let us ignore the implementation and instead test what we see in the UI.

It made sense to us that we would want to test what we see, which also meant that if we add tests that covered a component's functionality, we could safely refactor that component without changing the tests as long as the output remained the same.

We believe that tests should represent how your software will be used, and this is exactly the principle behind react-testing-library, which we adopted for Cost Tracker. It has helped us in ways that we didn’t expect. It was also the main reason we were able to write integration tests for our features.

Readability & simplicity over optimizations.

In my experience, software projects commonly become messy and unmaintainable because they were written to be performant or because the code is just too clever to make sense of.

In my opinion, no code should be optimized unless you have a specific requirement for it to be optimized. It's usually not worth spending a lot of time micro-optimizing code before it's obvious where the performance bottlenecks are.

For example, I have seen countless front-end developers write a for loop instead of a map or a reduce function when iterating over a large array because a for loop is faster.

A traditional for loop is faster than a map or a filter, but this, and I cannot stress this enough, doesn’t matter. You are sacrificing the readability of your code for an optimization that is premature and, in 99.5% of cases, not needed.

It doesn’t matter even if you are dealing with an array of an unknown number of items; JavaScript can iterate through a large number of array items in milliseconds (see: https://github.com/dg92/Performance-Analysis-JS#results-for-large-data-set-of-array-size-50000---1000000).

In this particular example, map, filter, and reduce were introduced to JavaScript because loops make it hard to follow the logic, especially if they are nested. By using loops instead we would be setting ourselves back a decade to solve a problem that doesn’t exist.

In my previous job, we had rigorous performance requirements. The software we wrote was mounted in vehicles, and most vehicles had much less processing power than a traditional computer, so every line of code we wrote was very performance-conscious.

But none of our optimizations actually had much of an effect on the performance of the application.

The real performance improvements came from architectural changes and pre-computing values and such. The fact that we saved 0.3 milliseconds when iterating over an array of fifty thousand items was effectively worthless to us.

To quote Donald Knuth:

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”

Understanding the difference between that 3% and 97% is essential for writing maintainable applications. In my experience, about half of that 3% has nothing to do with your data structures or smaller inefficiencies, and everything to do with how your app or feature is architected and how your logic is constructed.

To conclude this overarching point, I find it necessary to remind ourselves that over 80% of an application’s life cycle is spent in maintenance. We should pay a lot of attention to the problems of support and maintenance when we design.

“Always remember that code gets read more times than it gets written.”

All the measures we have adopted, including the ones mentioned here and countless others I have left out, were introduced to make sure that the application we wrote, and continue to evolve, is understandable and maintainable not only by everyone who works on it now, but also by those who will work on it in the future.

We continue to evaluate these decisions to understand whether it still makes sense to adhere to them or whether it is time to consider a change. This feedback loop has helped us keep our front-end code maintainable, easy to understand, and scalable.

All of us who were involved in building Cost Tracker feel extremely privileged to have had the opportunity to stray from the traditional ideals and architecture and to have had the chance to experiment with new and exciting things.

And to those who are reading, I hope we inspire you to do the same, to experiment, to imagine things differently, to seek out new opportunities, to make a path where none exists, and to wonder…

What if ?
