Automated testing at Happy Bear Software

Najaf Ali
Happy Bear Software
May 1, 2017 · 5 min read

How you test your code is a highly contentious topic for developers. At our company we give developers full autonomy to decide on a testing strategy based on the details of the project they’re working on. But just like magazines and publications have a “house style” for how they write, we have a generally accepted way of testing Rails applications.

Our recommended testing strategy has the goal of being good enough most of the time. It won’t be perfect in all cases, and that’s why we allow developers the free choice to use whatever tools they deem necessary. But for Rails applications, we can usually get by with this standard set of tools without much variation.

For us, two kinds of tests matter most: acceptance tests and unit tests, commonly referred to in RSpec as “feature specs” and “model specs” respectively.

For our acceptance tests we use a combination of RSpec and Capybara. These tools let us exercise the application much as a real-life user would. Capybara allows us to navigate the application in a separate process, filling in forms, clicking buttons, and making assertions about the content.

We want acceptance tests to be “end-to-end”. This means we test the interaction as seen from outside the application. When a user clicks a button to create a new comment, we test that the user sees their data updated (rather than testing that the data has changed in the database).
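To make that concrete, here’s a rough sketch of what such a feature spec might look like (the paths, labels, and copy are invented for illustration, not taken from a real project):

```ruby
# spec/features/create_comment_spec.rb -- a minimal sketch; the path,
# form label, and expected copy are illustrative assumptions.
require "rails_helper"

RSpec.feature "Creating a comment" do
  scenario "the user sees their new comment on the page" do
    visit "/posts/1"

    fill_in "Comment", with: "Great post!"
    click_button "Add comment"

    # Assert on what the user sees, not on the database state.
    expect(page).to have_content "Great post!"
  end
end
```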

This also means that we stay within the boundaries of our application. If, in the normal running of the application, we have to interface with third-party services such as external APIs or email providers, those integrations are mocked out as part of the test.
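As a sketch of what that can look like, the example below stubs out a hypothetical PaymentGateway wrapper class so the test never touches the real service:

```ruby
# A sketch of keeping the test inside the application boundary;
# PaymentGateway is an invented wrapper around a third-party API.
RSpec.feature "Checking out" do
  scenario "the user sees a confirmation without hitting the real API" do
    allow(PaymentGateway).to receive(:charge).and_return(true)

    visit "/checkout"
    click_button "Pay now"

    expect(page).to have_content "Payment successful"
    expect(PaymentGateway).to have_received(:charge)
  end
end
```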

One of the most important reasons we have feature specs is to maintain a productive development speed. Software is incredibly complex. Building a new feature can inadvertently break three or four features you built before. Having a set of automated acceptance tests gives you a warning when that happens. If you’ve changed the software somehow and it’s causing your tests to fail, you know something is broken and you can track it down to fix it. Without automated acceptance tests, development slows down and the end product is less reliable.

For our unit tests, we use RSpec. Unit tests are there to test methods on the model classes in our Rails applications, as well as any other classes we’ve created to deliver the application’s functionality. These could be commonly used patterns such as Presenters or Services, or any arbitrary classes we decide to define.
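A unit test at this level might look something like the following sketch (the Post model and its excerpt method are invented for illustration):

```ruby
# spec/models/post_spec.rb -- a minimal sketch; Post#excerpt is a
# hypothetical method, not from an actual project.
require "rails_helper"

RSpec.describe Post do
  describe "#excerpt" do
    it "truncates the body to 100 characters" do
      post = Post.new(body: "a" * 200)

      expect(post.excerpt.length).to eq 100
    end
  end
end
```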

Using a combination of acceptance and unit tests, we like to practice what’s commonly referred to as “outside-in” development. “Outside-in” is loosely defined, but in practice it plays out something like this.

When a client or stakeholder defines a feature for us to work on, we try to come up with a mutually negotiated list of “acceptance criteria” for that feature. This serves as a shared definition of what it means to complete the feature: if every item in the acceptance criteria is satisfied, the feature is “done”.

There are many ways to write acceptance criteria, but for us a list of sentences is usually good enough. Here’s what a list of acceptance criteria for creating a blog post in a CMS might look like:

• When the user clicks on “New post” they should see a form
• The form should allow the user to fill out the “name” and “body” fields
• When the user submits the form with a valid name and body, they should be redirected to the posts index and see the message “post created successfully”
• When the data is invalid, they should instead see a list of error messages
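One way to carry those criteria straight into the codebase (again, a sketch with invented names) is to start with one pending scenario per criterion and flesh them out one at a time:

```ruby
# spec/features/create_post_spec.rb -- scenario names mirror the
# acceptance criteria; a scenario with no body is marked pending.
require "rails_helper"

RSpec.feature "Creating a blog post" do
  scenario "clicking 'New post' shows a form"
  scenario "the form has 'name' and 'body' fields"
  scenario "submitting valid data redirects to the index with a success message"
  scenario "submitting invalid data shows a list of error messages"
end
```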

As a developer, it can sometimes be difficult to know where to start when creating a new feature. Do you dive into the model and start implementing the database schema you’ll need for the feature? Do you start with the view? The routing layer? The controllers?

Outside-in development suggests that you start with the feature spec, implementing one acceptance criterion at a time. So for this create-blog-post feature, we’d begin by writing a test that clicks on a “New post” button. We’d then run that test and watch it fail, because there is no “New post” button yet. We’d add the button, get the test passing, and then move on to the next acceptance criterion, implementing just enough of the feature to pass the test at each stage. When we have to drop down to writing model code, or any code that doesn’t explicitly deal with the request/response cycle, we write the requisite unit tests too.
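Fleshing out the first of those pending scenarios might look something like this sketch (the button text and path are assumptions); we’d run it, watch it fail because the link doesn’t exist yet, and then add just enough code to make it pass:

```ruby
require "rails_helper"

RSpec.feature "Creating a blog post" do
  scenario "clicking 'New post' shows a form" do
    visit "/posts"

    click_on "New post"   # fails first: the link doesn't exist yet

    expect(page).to have_selector "form"
  end
end
```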

Developing in this way gives us a clear, repeatable, obvious place to start when delivering features. Since it works directly from the acceptance criteria, it makes it easy to stay focused on exactly what the client wants from the feature (rather than straying off the path and working on nice-to-haves). It also ensures that you implement exactly what you need for the feature and nothing more.

Once you’ve completed all the acceptance criteria using the outside-in method, you can then focus on things that are more difficult to test. You can work on the front-end and make it match a design you’ve been given. You can refactor the code to make it more performant, less complex, or more secure. You can change it however you want, so long as your tests still pass.

If your tests still pass, your acceptance criteria are met and your work on the feature is done. When the acceptance criteria are translated directly into tests in this way, the requirements you and the client agreed on become first-class citizens of your codebase.

Taken on its own, this is a powerful way to structure your development, with a lot of benefits. But also consider what it would be like if your entire application were built from features developed in this way. The result would be a suite of automated tests that exercises every piece of functionality your client has ever defined for the system. That makes the system more reliable, closer to what the client actually wanted, and much easier to change over time.
