Planning ahead: taking a prototype to production

Jeremy Wells
GoToAssist Product Blog
5 min read · Aug 4, 2015

In May this year we held our annual hack week. This is an event where every member of the company can participate in building and presenting new ideas, big and small. It’s a great event and an opportunity to work outside the normal environment, with new people and new ideas.

This year I teamed up with a designer and a Rails engineer to create a project called Superhero. We prototyped it rapidly in Rails and ReactJS. The project won the internal “Best New Standalone Product” award, and so we’re now working to bring the idea to a larger audience. I won’t discuss the idea itself — this article is about how we hacked together a prototype in a way that allows for later progress.

Technology Stack

When it comes to choosing the technology for a hack week project, it is tempting to use the time to learn new things. That’s fine, but our goal was to have an idea up and running at the end of the week, not to experiment with new technologies. As I’ve been working with NodeJS recently, my first thought was to use that as the platform. This made sense from a feature perspective, as we knew the product would benefit from realtime communication.

However, we knew we only had a week, and once demo time is accounted for it’s really only 4 days. Given our skill set and our goals, it was easier to start with Rails. My requirement was that we could grow it later if we needed to.

So this is what we built on:

  • Rails 4.2
  • MySQL
  • ReactJS via sprockets
  • websocket-rails

We quickly realised that to build fast, we needed to use what we knew and what we had. We knew Rails, we knew sprockets, and we had MySQL installed for other projects. We wouldn’t have chosen any of these if we had been planning the product up front.

Interface points

I’ve spent much of my time recently designing and constructing services in a micro-service infrastructure. When you’re writing services you think about the interfaces a lot. Messages get passed around, and conventions become important.

When you write a monolithic application you don’t need to think about this sort of message passing and convention. You can make function calls any way you like, and it’s easy to change the code later on. If you take this approach, adding features stays easy for quite a while, but it gradually gets harder and the maintenance overhead increases. At some point you might want to split the application up and move in the micro-services direction. I’ve had experience with this, and it’s a hard job.

As soon as we started with this project I was thinking about those interface points. How could we structure it in a way that would keep us iterating on a monolithic application for a week, but be able to break it down quickly if we took it further?

Events

From the outset, we wanted to make the user interface reactive. Every time a change occurred, events would be pushed via websockets. We started with a very simple pattern:
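The original snippet isn’t reproduced here, but it amounted to the controller doing everything itself. A minimal sketch of that pattern (Task, TasksController, the :tasks channel and the permitted attributes are illustrative stand-ins, not the real names):

class TasksController < ApplicationController
  def create
    @task = Task.create!(task_params)

    # Push the change straight to connected clients via websocket-rails.
    WebsocketRails[:tasks].trigger(:created, @task.as_json)

    render json: @task, status: :created
  end

  private

  def task_params
    params.require(:task).permit(:title, :description)
  end
end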

Considering our deadline and the scope of our application, this seemed fine. But then we wanted to send emails too:
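Again the original snippet is missing, but the change was essentially the controller picking up another responsibility. Continuing the sketch above (TaskMailer is illustrative):

def create
  @task = Task.create!(task_params)

  WebsocketRails[:tasks].trigger(:created, @task.as_json)
  # The controller now also knows how and when to send email.
  TaskMailer.task_created(@task).deliver_now

  render json: @task, status: :created
end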

Now insert your favourite refactoring comments here. Those things shouldn’t be in the controller; they should be in the model, a module, or a service object.

Shh…

Recently I’ve been using the wisper gem in Ruby programs. It’s a really simple event system within the application. There are various ways to modularise a Rails application, with differences in complexity and testability. Starting up front with an event-based modularisation means that progressing to a services architecture is easier.

Here is a simple example:
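The original example isn’t reproduced here either, so this is a hedged reconstruction of its shape, using the same illustrative names as above: the controller broadcasts an event, and each concern lives in its own subscriber.

class TasksController < ApplicationController
  include Wisper::Publisher

  def create
    @task = Task.create!(task_params)
    # Announce what happened; the controller no longer knows who is listening.
    broadcast(:task_created, @task.id)
    render json: @task, status: :created
  end
end

# Each subscriber implements a method named after the event it cares about.
class WebsocketNotifier
  def task_created(task_id)
    task = Task.find(task_id)
    WebsocketRails[:tasks].trigger(:created, task.as_json)
  end
end

class TaskMailerListener
  def task_created(task_id)
    TaskMailer.task_created(Task.find(task_id)).deliver_now
  end
end

# config/initializers/wisper.rb
Wisper.subscribe(WebsocketNotifier.new)
Wisper.subscribe(TaskMailerListener.new)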

Okay, so it’s not the best code in the world; it was still a hack week project. But by following this pattern for every controller action, we’ve created a separation that lets us add new subscribers without touching any other code, much like micro-services.

Background

You’ve probably spotted that email being sent synchronously with the web request. Indeed, we had several things going on that should have happened in the background. For the purposes of our demo they were fine, but they needed to be addressed straight afterwards.

You might also have noticed that publishing happens with IDs, not objects. Wisper supports sending the object, but an asynchronous call needs to send primitives only. Sending the ID isn’t the best option, as it requires the receiving service to either connect to the database or request the object via the API we were exposing. A better option might be to serialize the object; however, this is a tradeoff of speed versus flexibility.
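For illustration, serializing would mean broadcasting a plain hash of primitives instead of the record’s id (still using the illustrative names from above):

# Instead of broadcasting @task.id, the publisher could broadcast a hash of
# primitives, so subscribers get the data without a database round trip.
broadcast(:task_created, @task.as_json)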

I installed the wisper-sidekiq gem. Now the subscribers can run in the background just by changing the subscription call:
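As far as I recall from the wisper-sidekiq README, you pass the subscriber class rather than an instance, along with an async option; roughly:

# config/initializers/wisper.rb
# Passing the class (not an instance) with async: true makes wisper-sidekiq
# run each event handler inside a Sidekiq job.
Wisper.subscribe(WebsocketNotifier, async: true)
Wisper.subscribe(TaskMailerListener, async: true)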

Once again, we get this done by changing only the subscribing side; the publishing code is untouched.

Asset Pipeline

I’ve never been a huge fan of sprockets, the default Rails asset system. It gets hard to understand what is happening when JavaScript files can be loaded from anywhere to appear in a combined file. However, it was very quick to get up and running with the react-rails and sprockets-es6 gems.

One of the first tasks I undertook after the hack week was to extract the JavaScript client code out of the Rails asset pipeline and have it compiled separately using webpack. This meant replacing all the global variables with requires:

var { PageHeader, Grid, Row, Col, Navbar, Nav, NavItem, Badge, Glyphicon } = ReactBootstrap;

Becomes:

var React = require('react');
var { PageHeader, Grid, Row, Col, Navbar, Nav, NavItem, Badge, Glyphicon } = require('react-bootstrap');

It turned out we had missed many of these setup lines, which exposes the trouble with Sprockets and global variables: had the file loading order changed, things might have broken in unpredictable and hard-to-debug ways.

Next steps

Right now the application is capable of scaling to a small number of real users, so we’ll concentrate on features rather than improving its architecture. If it grows, the next places to scale will be the search engine and the websocket notifications.

For search we’re using MySQL text queries. To upgrade, we can drop in a new subscriber that publishes items to an external search service. That way we avoid changing the models and controllers.
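As a sketch of that direction (SearchIndexer and SEARCH_CLIENT are hypothetical, nothing we’ve built yet), such a subscriber might look like this:

# Hypothetical future subscriber: indexes records in an external search
# service without touching any existing models or controllers.
class SearchIndexer
  def task_created(task_id)
    task = Task.find(task_id)
    SEARCH_CLIENT.index(id: task.id, body: task.as_json) # SEARCH_CLIENT is a placeholder
  end
end

Wisper.subscribe(SearchIndexer, async: true)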

Notifications can be solved in exactly the same way. A new subscriber would be able to publish to an external pub/sub service, like Faye.

For each of these tasks there are various gems and existing conventions that provide the functionality. However, using them would involve adding to the existing model and controller code, and we would end up with a tightly coupled system.

Conclusion

We’ve shown that we can create a monolithic application that behaves more like a set of micro-services. There are no model callbacks or complicated controller behaviour. If and when we move code out of the application into separately running processes, there will be no big refactor.
