Shared Assets

Jico Baligod
at Crowdtap

6 min read · Mar 6, 2015

At Crowdtap, we’re all about the SOA pattern. Both our back-end and front-end applications are structured in SOA fashion, and it’s worked great over the last two years in production. We’ve already talked about how we handle our back-end architecture, so in this article I’d like to take a peek at how we manage our dozens of JavaScript applications and, specifically, how we share common resources across them.

A brief history

In the early days, our site was a single Rails app serving static views, a member home page hosting a JS app built in Spine, and a small API. As our app grew larger, we started feeling the pain of disorganized code, two hour test suites, and slow onboarding for new employees. We decided to break up our monster back-end into several small service-oriented apps, tied together using an in-house data syncing framework called Promiscuous.

What a difference it made.

Two years later, we now have over a dozen Rails apps, each responsible for a single domain. We have an app for email notifications, one for targeting polls and other actions to our members, another for tracking engagements, and the list goes on. Our codebases are simpler and therefore more readable. Our old two-hour test suite now runs in under 20 minutes. Overall, developers are much happier.

At the dawn of writing Crowdtap 2.0, we realized our main JavaScript application had some elements that needed to be reused in other places. So, similar to our back-end, we decided to split it up into isolated, reusable components. Our “component-oriented architecture” was working great: we now had two main pages, each running two separate JS applications.

Within each app, we would bundle in Bootstrap and either Spine or Angular. But with two apps on a page, redundant assets became a problem: bloated pages, slower load times, and conflicts between JS libraries and CSS. So we pulled the common assets out and architected a way to easily add these shared bundles into our views.

Anatomy of a Rails view

Our server-side views are very bare, spitting out a couple of divs as mount points for our apps, very little JS, and zero CSS. We try to avoid using the Rails asset pipeline for a few reasons:

  1. It makes more sense and is much simpler to keep each front-end app self-contained with respect to its own JavaScript, styles, and assets.
  2. Managing third-party dependencies for our front-end apps in Rails while we develop in isolation with Node is very disjointed.
  3. Bower is a more appropriate tool for JS apps, especially with version numbers corresponding to library versions (as opposed to Gem versions).
  4. We are trying to slim down our Rails apps to be nearly API-only services.

A big question is: How do we manage injecting our front-end applications and shared dependencies into our Rails views? I’ll go over two things that will explain our setup: our deployment strategy and a helper called DynamicAssets.

Front-end deployment

Deployments for our front-end applications are simple, quick, and allow us to deploy dozens of times a day.

  1. A developer merges new code into master and pushes it up to GitHub.
  2. A GitHub hook notifies CircleCI to run the test suite (npm test).
  3. On a successful build, CircleCI notifies our deployment server.
  4. The deployer checks out the successful commit, builds the app for production, and pushes the files to S3, namespaced by app name, branch, and commit hash.

The fourth step is important for versioning app code and its assets. It lets us push changes frequently without having to invalidate caches. Plus, if anything should go wrong, we just revert to an older, working commit namespace still sitting on S3.

DynamicAssets

With everything we need on S3, it’s easy to add links and scripts into our views so long as we can construct the URL to these resources. We have a fixed naming convention for how apps and their assets are pushed up to S3. The main dynamic variable is the latest commit hash.

To handle the changing commits in resource URLs, we’ve done a couple of things. First, our deployer updates a file called LATEST under the root of each app namespace folder on S3. As you may have guessed, this file simply holds the git hash of the latest build. We can then poll the latest hash of each app every few seconds and keep track of it.
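To make the naming convention concrete, the layout on S3 looks roughly like the sketch below. The bucket name and commit hashes are invented, and showing LATEST per branch is an assumption, not necessarily our exact layout.

crowdtap-assets/                      # hypothetical bucket name
  crowdtap.header/                    # app namespace
    master/                           # branch
      LATEST                          # plain-text file containing "3f2a9c1"
      3f2a9c1/                        # latest built commit
        crowdtap.header.js
        crowdtap.header.css
      9b7e502/                        # older builds stay around for easy rollback
        crowdtap.header.js
        crowdtap.header.css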

With this information, we can construct the URL for any of our apps for the latest set of resources deployed to production. To make things even easier, we’ve written a little helper called DynamicAssets, which provides methods for grabbing the latest hash string and a view helper for inserting link and script tags pre-populated with these resource URLs.

Our DynamicAssets helper to help construct dynamic asset URLs.
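It boils down to two pieces: a module that resolves the latest hash and builds full URLs, and a view helper that turns those URLs into link and script tags. Below is a minimal Ruby sketch of the idea, assuming a Rails app; the bucket URL, caching, and method bodies are illustrative rather than our exact code.

require "net/http"

# A sketch of a DynamicAssets-style helper. The S3 host is a made-up
# placeholder and the caching is deliberately simplistic.
module DynamicAssets
  ASSET_HOST = "https://crowdtap-assets.s3.amazonaws.com" # hypothetical bucket URL

  class << self
    # Latest deployed commit hash for an app/branch, read from the LATEST file
    # the deployer writes. In production this is refreshed every few seconds
    # rather than fetched on every request.
    def latest_hash(app, branch = "master")
      cache[[app, branch]] ||= Net::HTTP.get(URI("#{ASSET_HOST}/#{app}/#{branch}/LATEST")).strip
    end

    # Full URL to a resource inside an app's versioned namespace.
    def url_for(app, path, branch: "master")
      "#{ASSET_HOST}/#{app}/#{branch}/#{latest_hash(app, branch)}/#{path}"
    end

    private

    def cache
      @cache ||= {}
    end
  end

  # Mixed into ActionView so templates can call dynamic_assets_tag.
  module Helper
    def dynamic_assets_tag(app, branch: "master", styles: nil, scripts: nil)
      # Default to the conventional appName.css / appName.js bundles when no
      # specific resources are requested (the override semantics are assumed).
      if styles.nil? && scripts.nil?
        styles  = ["#{app}.css"]
        scripts = ["#{app}.js"]
      end
      tags  = Array(styles).map  { |path| stylesheet_link_tag(DynamicAssets.url_for(app, path, branch: branch)) }
      tags += Array(scripts).map { |path| javascript_include_tag(DynamicAssets.url_for(app, path, branch: branch)) }
      safe_join(tags, "\n")
    end
  end
end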

View markup

Now for an example. To include a link tag and a script tag for the latest bundles of any app on any branch, we use our helper dynamic_assets_tag and specify what we want.

<%= dynamic_assets_tag 'crowdtap.header', branch: @header_branch %>

It defaults to looking for our conventional appName.css and appName.js. But sometimes we want certain styles or scripts, usually from our shared assets.

<%= dynamic_assets_tag 'assets.shared', scripts: ['assets/js/angular-1.3/angular.min.js'] %>

Below you’ll see our rendered Portal page markup. I’ve stripped out things like meta tags and actual inline script contents to make it easier to digest.

A stripped down version of our Portal page markup.
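In shape, it’s a handful of mount-point divs followed by the script and link tags the helper emits. The approximation below uses invented div ids, bucket name, commit hashes, and a guessed crowdtap.portal bundle name:

<body>
  <div id="crowdtap-header"></div>
  <div id="crowdtap-portal"></div>

  <script src="https://crowdtap-assets.s3.amazonaws.com/assets.shared/master/8c41d2e/assets/js/angular-1.3/angular.min.js"></script>
  <script src="https://crowdtap-assets.s3.amazonaws.com/crowdtap.header/master/3f2a9c1/crowdtap.header.js"></script>
  <script src="https://crowdtap-assets.s3.amazonaws.com/crowdtap.portal/master/6de04b7/crowdtap.portal.js"></script>
  <link rel="stylesheet" href="https://crowdtap-assets.s3.amazonaws.com/crowdtap.header/master/3f2a9c1/crowdtap.header.css">
  <link rel="stylesheet" href="https://crowdtap-assets.s3.amazonaws.com/crowdtap.portal/master/6de04b7/crowdtap.portal.css">
</body>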

Sidenote: I realize having link tags at the end of the body is ugly; it’s something we need to fix.

The shared assets

What exactly makes up our shared assets? We actually treat it just like an app. Essentially, our assets.shared repo is a collection of shared resources, mostly pulled in using Bower, plus a Gruntfile that builds our CSS and JS bundles. We even have a few tests for some of our global JS helpers.

Anything that is used in more than one application is considered a shared asset. Below are some examples of what we include.

  • Frameworks like Angular, Bootstrap 2 and 3, Polymer, and jQuery.
  • Commonly used libraries including Q and a Base64 encoder.
  • Custom utility classes such as EventBus, which acts as an event broker between apps on the same page, and ActionFetcher, responsible for querying for the latest targeted actions for a member.
  • Recently, a small number of our own Polymer Components.
  • Bootstrap Less variables and our own Crowdtap global variables according to our styleguide.
  • Font files.
  • Testing resources like Selenium, Chrome Driver, and helper scripts to run tests in parallel.

Local development

So everything is working in production, but how do we access our shared assets in our local development environment? Let’s take a look at the requirements for our Node apps.

  • The app should mirror the production view when running locally.
  • The app should remain isolated from the Rails ecosystem during local development.
  • The app should be able to access the latest shared resources.
  • The app should be able to use Bootstrap and Crowdtap global Less variables within its own Less stylesheets.

Dropping assets all up on our apps.

There are a few ways to meet the above requirements. We could have mimicked the DynamicAssets helper in JavaScript, or used a Bower + Grunt/Gulp combination to download shared assets and move them into the right place. Instead, we opted to write a tiny Node utility called Parachute to do what we need.

Using a postinstall hook, we run parachute install after every npm install. Parachute looks at a manifest file in the app (the client) that lists one or more hosts (git repositories containing assets), the resources it wants from each host, and where those resources should go. It then pulls everything from the hosts and puts it in place. For example, in most of our apps, we grab shared Less files (Bootstrap variables and mixins, plus Crowdtap styleguide variables) and place them in a css/shared directory. A sketch of that wiring follows.
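The npm hook is just a scripts entry in the client app’s package.json:

{
  "scripts": {
    "postinstall": "parachute install"
  }
}

And a manifest might look something like this; the format and the repository URL are purely illustrative, not Parachute’s literal schema:

{
  "hosts": {
    "assets.shared": {
      "repository": "git@github.com:our-org/assets.shared.git",
      "resources": {
        "less/*.less": "css/shared/"
      }
    }
  }
}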

With this approach we can include our global JS and CSS locally, and even @import our Less variables in each app’s own styles. Assets across our dozen front-end applications are eventually synced so we know exactly how our app will look and behave in production, even as we develop in strict isolation.
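In an app’s own Less, that looks roughly like the lines below; the file names and selector are illustrative, assuming Parachute has populated css/shared:

@import "shared/variables.less"; // Bootstrap + Crowdtap styleguide variables
@import "shared/mixins.less";

.member-card {
  background-color: @brand-primary; // a shared Bootstrap variable
}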

If you like office dogs, sumo wrestling, and the kind of stuff in this article, come work with us! We’re hiring.
