The Personal Capital Web Stack, Part II

Luis Sepulveda
Apr 30, 2019 · 8 min read
Everyone knows that devs love trees. In fact, it might be the most common way to start a tech blog post. This time we used a dragon's blood tree. We like the way it branches; it's like the organic, physical representation of our node_modules folder…

Hey! It’s been a while, and we wanted to let you know that the web team at Personal Capital is still alive and responding to pings with low latency.

Our blog has finally moved and we’re stoked about it. In fact, we’re so energized that we wanted to do a follow-up on what has happened in the ~900 days since our last post.

In this post, we’re going to cover all the changes that we’ve made to our stack, why we made them, and some of the good and bad things these changes have brought us.

Let’s start with the tech (after all, that’s probably why you’re here).

The Front-End Stack

We continue to use Node.js for development and builds, mixed with browser-sync to proxy all non-static requests to a back-end server in the cloud.

This allows us to run a local version of the web app without having to run the back-end server locally. It also comes with the extra benefit of watching our files and triggering local Webpack builds when they change (hot reload, baby!). As you’ve probably noticed, developer happiness depends greatly on how easy it is to work in the environment, hence our first change.
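The proxy setup looks roughly like this; a minimal sketch where the host name and paths are placeholders, not our actual config:

```javascript
// bs-config.js: serve local static builds, proxy everything else to the cloud
module.exports = {
  proxy: 'https://dev-backend.example.com', // placeholder back-end host
  serveStatic: ['./dist'],                  // local Webpack output wins first
  files: ['dist/**/*.js', 'dist/**/*.css'], // reload the browser on rebuilds
};
```

Any request that doesn't match a local static file gets forwarded to the remote server, so nobody needs a local Java environment to work on the front end.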

We’ve transitioned from RequireJS to Webpack. That was a huge milestone. Some of the observed benefits include:

  • Improved reload times in development.
  • Intelligent bundling which includes tree-shaking, resulting in smaller bundle size for CSS and JS static resources.
  • Script Chunks. They’re requested on demand, making our load times even faster.
  • Streamlining our build and bundling process. Currently, we’re bundling a couple of different apps from the same sources. Before, something like this was hard to achieve: many scripts had to be updated to assemble such variations, which made the process slow and risky. Now we have well-defined Webpack configurations that let us produce all kinds of combinations just by touching configuration files.
  • Expanded extensibility. Now we can integrate with open-source tools already developed by the community (e.g. coverage reports, dependency charts).
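As a rough sketch of how several apps can come out of one source tree, Webpack accepts an array of configurations; the entry names and paths below are illustrative, not our real ones:

```javascript
// webpack.config.js: one shared base, one entry per app
const path = require('path');

const base = {
  mode: 'production', // enables tree-shaking and minification
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].chunk.js', // on-demand script chunks
  },
};

// Exporting an array makes Webpack build every app in one run.
module.exports = [
  { ...base, name: 'main', entry: { main: './src/main/index.js' } },
  { ...base, name: 'advisor', entry: { advisor: './src/advisor/index.js' } },
];
```

Switching which apps get built then becomes a matter of editing this file rather than a pile of build scripts.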

Single Page Application and HTML

Our back-end services are written in Java. The server team has been doing a great job transitioning our old monolithic architecture into microservices. They’ll release an article about that soon, so stay tuned.

Our web service layer handles requests to the APIs and passes the data to the SPA, controlled by Backbone and Angular. Backbone is still a key part of our stack, but we’re transitioning out of Angular and into React. Every new component we have developed in the last three years is written in React. This has allowed us to:

  • Reuse components easily. React plays pretty well with our existing Backbone architecture. Achieving functionality is super easy by passing props, as long as you pipe the component lifecycle methods and events the right way.
  • We’ve been using the container <-> presentation pattern for a while. We really like the way it separates concerns, giving our components another level of reusability (e.g. we can reuse the same chart with different data inputs coming from different APIs and make it look different by passing props). Now we’re thinking about transitioning away from this pattern to React hooks, which just came out of the oven smelling like fresh-baked cookies. At the moment, we’re figuring out how to integrate our testing framework with this new pattern, since Enzyme doesn’t provide a way to do so yet.
  • Because of the previous point, our development workflow has entirely changed. We’ve integrated Storybook, and it’s the first place where we develop components. We start there to ensure that our components are fully abstracted from the app, giving us a higher level of confidence when developing truly reusable presentational components.
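Piping the lifecycle the right way mostly means rendering on Backbone events and unmounting when the view goes away. A minimal sketch, where the view and component names are hypothetical:

```javascript
const Backbone = require('backbone');
const React = require('react');
const ReactDOM = require('react-dom');
const NetWorthChart = require('./NetWorthChart'); // hypothetical React component

const NetWorthView = Backbone.View.extend({
  initialize() {
    // Pipe Backbone model changes into fresh React props.
    this.listenTo(this.model, 'change', this.render);
  },
  render() {
    ReactDOM.render(
      React.createElement(NetWorthChart, { data: this.model.toJSON() }),
      this.el
    );
    return this;
  },
  remove() {
    // Keep the React lifecycle in sync when Backbone tears the view down.
    ReactDOM.unmountComponentAtNode(this.el);
    return Backbone.View.prototype.remove.call(this);
  },
});
```

The React component itself never knows it lives inside a Backbone view, which is what makes it reusable elsewhere.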

From the developer’s standpoint, adopting Storybook was a huge productivity and morale booster. It gave everyone, not just engineers, greater visibility into the components available for use. Also, developing user experiences in it is a breeze, since it’s super lightweight and its hot-reload builds are faster than our main web app’s. This means less time to go for coffee and more time for doing what we love: writing code. The only tradeoff is that we now have less caffeine pulsing through our veins, but the continuous excitement of watching your component come together quickly makes up for it.

Because Storybook develops components in isolation, web engineers don’t depend on back-end development, which allows both teams to work in parallel to deliver a feature.
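A story file is only a few lines; here's a sketch using the storiesOf API, where DonutChart and its props are hypothetical:

```javascript
import React from 'react';
import { storiesOf } from '@storybook/react';
import DonutChart from '../components/DonutChart'; // hypothetical component

storiesOf('Charts/DonutChart', module)
  .add('default', () => <DonutChart data={[30, 45, 25]} />)
  .add('empty state', () => <DonutChart data={[]} />);
```

Each `.add` call becomes a selectable state in the Storybook UI, so designers and product folks can poke at the component without running the app.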

Since our last post, our HTML rendering technologies have been transitioning from Angular and Handlebars to React.

Styling

We have mostly transitioned away from everything we were using, like Bootstrap, to our own in-house styles based on Inuit. We continue to use BEM as our naming convention and KSS for our styling documentation.

Similar to Storybook, we’ve also created a style guide containing most of the basic HTML components with our styling and their variations. This is a good starting point for new people to check our styling practices, class names, and branding. There are future plans to migrate all the KSS docs to Storybook, but we haven’t found time to do so yet.

Data Visualization

D3 is our go-to data visualization library, and it has been since Raphael.js slowed its pace of development almost to the point of being unmaintained.

We transitioned to D3 because it has a richer collection of charting modules out of the box, and our Raphael charts were very buggy since we had to write the SVG manipulation by hand.

Over this period we’ve integrated a couple of D3 extensions and built a few in-house React wrappers over it. Now we can reuse many of our charts, which are highly customizable via props.
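The wrapper pattern is simple: React owns the SVG node, and D3 draws into it whenever props change. A rough sketch, with the component and props being illustrative rather than our actual charts:

```javascript
import React from 'react';
import * as d3 from 'd3';

class LineChart extends React.Component {
  componentDidMount() { this.draw(); }
  componentDidUpdate() { this.draw(); } // redraw when props change

  draw() {
    const { data, width, height } = this.props;
    const x = d3.scaleLinear().domain([0, data.length - 1]).range([0, width]);
    const y = d3.scaleLinear().domain([0, d3.max(data)]).range([height, 0]);
    const line = d3.line().x((d, i) => x(i)).y(d => y(d));

    const svg = d3.select(this.node);
    svg.selectAll('path').remove(); // simple redraw-from-scratch update
    svg.append('path')
      .attr('d', line(data))
      .attr('fill', 'none')
      .attr('stroke', 'currentColor');
  }

  render() {
    const { width, height } = this.props;
    return <svg ref={node => { this.node = node; }} width={width} height={height} />;
  }
}

export default LineChart;
```

Everything the chart needs arrives as props, so the same wrapper can be fed by different containers pulling from different APIs.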

You can check out some more details regarding our D3 implementations here.

Testing

This is one of the areas with the biggest changes. Since we’re writing everything in React now, we figured: why not give Jest and Enzyme a chance? The outcome was pure joy.

Jest and Enzyme came in to replace Mocha, Sinon, Karma, and Chai. Their integration with React is natural, and we have been working hard on porting the old tests to Jest and Enzyme.

The port is a story in and of itself. We usually hang out about twice a month in the war room to host what we call a “Jesting session”. During this time, we all work on this large list of test files that need to be ported.

Initially, we tried using Jest codemods, but they didn’t fully work with our test codebase; the converted files still had some old syntax. Then one of our teammates got rad. He wrote a custom shell script to convert the syntax using a bunch of sed commands and regexps, a simple solution often forgotten.

Here is what part of the script looks like:

sed -i '' 's/to\.equal/toBe/g' "$NEWFILE"

This saved us a lot of time by doing more straightforward translations automatically. Of course, we had some more complex cases that required manual intervention as well.
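The same mechanical chai-to-Jest translations could equally live in a small Node script. A hypothetical sketch of the idea, where the mapping list is illustrative and not our full script:

```javascript
// Ordered list of chai -> Jest assertion rewrites.
const replacements = [
  [/\.to\.deep\.equal\(/g, '.toEqual('],
  [/\.to\.equal\(/g, '.toBe('],
  [/\.to\.be\.true\b/g, '.toBe(true)'],
  [/\.to\.be\.false\b/g, '.toBe(false)'],
];

// Apply every rewrite to one line of test source.
function portAssertion(line) {
  return replacements.reduce((out, [pattern, sub]) => out.replace(pattern, sub), line);
}

console.log(portAssertion('expect(total).to.equal(42);'));
// -> expect(total).toBe(42);
```

Either way, regex substitution handles the bulk of the port, leaving only the genuinely weird cases for humans.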

Over the time that we’ve been using Jest and Enzyme, we’ve found:

  • A significant decrease in testing time. Since the tests can run in parallel now, everything is faster, from our git push hooks that execute the unit tests to the continuous build at the CI server level. In fact, we’ve just merged some code to only run the tests impacted by our changes on the commit hook, which makes the validation process even faster. The code uses Jest’s --changedSince parameter; if you didn’t know about it, it’s definitely worth a look.
  • It allows us to test components in full isolation, without relying on anything other than what we’re testing. The mock functions work right out of the box, with no configuration needed.
  • Easy implementation encourages TDD. This comes naturally. The easier it is to implement a test, the more you’ll enjoy writing them.
  • Out-of-the-box coverage reports. This is something we weren’t even looking at before, and we’ve realized it is key to improving our quality.
  • Snapshot testing lets us make sure nothing was magically updated by our changes, giving us a higher level of confidence when releasing code.
  • IDE integration makes developing tests so easy that everyone is looking forward to writing unit tests. The hot-reload gives you immediate feedback while using TDD.
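To make the mocking and snapshot points concrete, a typical spec looks something like this, with BalanceCard being a hypothetical component:

```javascript
import React from 'react';
import { shallow } from 'enzyme';
import BalanceCard from '../components/BalanceCard'; // hypothetical component

describe('BalanceCard', () => {
  it('renders in isolation with mocked callbacks', () => {
    const onSelect = jest.fn(); // mock functions work with zero configuration
    const wrapper = shallow(<BalanceCard balance={1234.5} onSelect={onSelect} />);
    wrapper.simulate('click');
    expect(onSelect).toHaveBeenCalled();
  });

  it('matches the last approved snapshot', () => {
    const wrapper = shallow(<BalanceCard balance={1234.5} />);
    expect(wrapper).toMatchSnapshot(); // fails if the markup changes unexpectedly
  });
});
```

Shallow rendering keeps child components out of the picture, which is exactly the isolation described above.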

Testing and quality have become a big part of our day-to-day over time. As we grow, we need to continue to uphold our legacy of trust. The quality of our shipped features plays an important part in this as it does in any financial institution.

Continuous Integration

We’ve fully transitioned to Jenkins 2. This impacted every dev team in the organization, including the web team. We previously published an article about what this migration journey was like for the engineering team, and you can find it here.

Wrapping up

For the record, anything not mentioned here remains the same. E.g. for versioning we still use Git, and our sprints last one extreme week.

It has been a long journey to achieve the current state of maturity in our development processes and there’s way more to go. In fact, the only thing we’re sure about is that we will never stop making changes to our stack.

As new tech moves in, we need to stay up to date. We don’t integrate new technologies just for the sake of using what is super cool and cutting edge; rather, we identify what would really make a difference for us and the business. Finding a balance between the two is key.

An example of the above is that in the past year some of our team members have been experimenting with GraphQL. They even developed a proof of concept during one of our internal hackathons. We find ourselves liking this way of fetching data a lot, but since this would be a major change, we haven’t explored the option further. Maybe at some point in the future this will align with business needs and we can pull this card out of the backlog.

Native technologies are coming along faster. New web APIs are published every month, and some of them would certainly fill the holes better than the packages currently in our dependencies. This is the case for the native Web Components API, which is currently being implemented in browsers and might replace some of our rendering technologies at some point in the future.


Personal Capital Tech Blog

We are Personal Capital's Engineering team.