Building the new SalesLoft API

Steve Bussey
Salesloft Engineering
10 min read · Oct 3, 2017

At SalesLoft, we use Ruby on Rails, Angular, React, and other technologies to bring a world-class sales experience to our customers. In an effort to share some of our experiences using these technologies with the larger community, three members of our engineering team presented talks at CONNECT.TECH — one of the largest web and mobile development conferences in the Southeast.

This first post in our CONNECT.TECH series will build on software architect Steve Bussey’s conference talk, “Get Your Rails Out of My Ruby,” and further explore our approach to a world-class API experience.

The engineering team at SalesLoft has been hard at work on a new version of our API. We believe that a strong API will empower our customers to build integrations suited to their complex and custom business processes. We’re taking the new API seriously, and with that comes a set of requirements we haven’t had to deal with before.

New API Requirements

  • Effortlessly release new versions of endpoints without breaking existing integrations
  • Documentation always reflects 100% of the capabilities of the API, for all versions of the API
  • Pure REST through reification
  • Support for multiple ingress points: web, sockets, etc.
  • Consistency in end-user interface and internal development across the entire API

We do have a V1 of our API, which had some similar requirements, but it did not take them to the same level as this project. For example, we built our V1 documentation using Swagger annotations in our Ruby on Rails controllers; these annotations live next to the code and should be updated when the code is updated. However, nothing enforces that the annotations are updated when the code changes, so it is entirely possible to update the API without updating the documentation.

In addition to documentation, our strategy for new versions was to subclass or re-implement controllers. This requires the entire cross-section of the API to exist in two places, either through subclassing or duplication.

A Rails Approach

We use Rails for our main application. Rails has been really good to us over the past few years, allowing us to quickly build our application with a strong test suite. However, V1 of our API was built in the traditional Rails way, and some problems emerged that we wanted to avoid in V2. Let’s look at a shortened traditional Rails request life cycle.

Traditional Rails web interface. Controllers are tightly coupled to business logic and can’t be invoked standalone.

In this traditional Rails way of doing things, every endpoint is represented by an action in a controller, with each controller being a noun in the system. Each controller would interact with business logic either in its methods, or hopefully in objects that live outside of the controller. This approach is very obvious, scales well to a team of Rails developers, and is very flexible. However, this approach also has some problems.

Flexibility is great for quickly building software, but it can make scaling a development team and codebase very difficult. In a large application with hundreds of endpoints, each development team will approach development slightly differently, which the flexibility of Rails allows. The best way to catch these differences in the code is through “developer processes,” which can actually take ownership away from the team rather than empower it.

Another issue with the traditional Rails way is the large seam that exists between the web interface and the business logic. In the graphic above, this seam is represented by the many grey lines running between the web interface and the application. Each of these lines represents a dependency between the two systems, created because the Rails controllers invoke these service objects and must set up state or other objects in order to execute them. This seam means that the application is only usable through the web interface. If an engineer wants CLI access to a feature, they need to find out how the controller works and invoke that code manually. If they want a web socket interface to the application, it would require new development, and each piece of the seam would need to exist in both the web and web socket codebases.
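To make that seam concrete, here is a hypothetical controller in that traditional style (the class and service names are made up for illustration): the action knows about the params, the service object’s constructor, and the serialization format, so any new interface would have to re-create all of that wiring.

# Hypothetical example (not our actual code) of a controller in the traditional style.
# The action knows about params, the service object's constructor, and the
# serialization format, so the feature is only reachable through this controller.
class PeopleController < ApplicationController
  def index
    people = PersonSearch.new(current_user, params.permit(:query, :page)).results
    render json: people, each_serializer: PersonSerializer
  end
end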

Decoupling web and application code

I heard a talk a while ago that really stuck with me: Uncle Bob’s “Architecture, the Lost Years.” The talk discusses why we immediately reach for a framework’s way of doing things over the way we actually want to do something, and how that causes a breakdown of architectural principles. Over time, this got me thinking about how I would want to build our new API if I weren’t constrained to the Rails way. This new image is a very high-level view of what I wanted:

Decoupled controllers, front controller bridges from web to application logic, keeping them separate

In this image, the web interface is much smaller, using a single seam to connect the Rails code to the business logic. The business logic can be run on its own, making it very easy to add a new type of interface by adding a second seam to the image.

The key to this design is the small seam between Rails and the SalesLoft business logic. This small seam decouples our code from Rails code. An upgrade to Rails shouldn’t impact anything in the “Application” circle, which is the majority of the code.

Building the API

Our ideas and goals were heavily inspired by Amber @ Stripe’s post on how Stripe avoids breaking changes in their API. Stripe has since released a more detailed follow-up post.

When designing the actual code that would go into building the API, we broke down our API into a few high-level concepts: methods and resources. Resources are the objects of our system, and methods are the runnable code when a request is invoked. We also thought about version bumps, documentation, and consistency in order to achieve our goals.

High-level design for how an API executes: Route provides a pipeline for a version, which executes on a request and gives a response

In the above diagram, a very basic request/response lifecycle is established without the use of Rails; this exists entirely as a standalone concept. One great thing about controlling the request/response flow is that we can inject a concept like versioning into that flow. In our flow, each Route has the ability to provide a Pipeline which can be executed on. This Pipeline has the correct Compatibility Adapters for that version of the API.
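To make the diagram concrete, here is a rough sketch of how a Route and Pipeline could fit together. The internals shown here (Api::Response, the version bookkeeping, and the adapter lookup) are simplifications for illustration, not our exact implementation.

# Rough sketch (assumed internals): a Route knows which Method class serves it
# and can build a Pipeline of compatibility adapters down to the requested version.
class Api::Route
  def initialize(method_class, adapters_by_version)
    @method_class = method_class
    @adapters_by_version = adapters_by_version
  end

  def pipeline(version:, max_version:)
    # Collect the adapters for every version newer than the one requested.
    adapters = ((version + 1)..max_version).flat_map { |v| Array(@adapters_by_version[v]) }
    Api::Pipeline.new(method: @method_class.new, adapters: adapters)
  end
end

class Api::Pipeline
  def initialize(method:, adapters:)
    @method = method
    @adapters = adapters
  end

  def execute(request)
    # Upgrade the request through the adapters (oldest first), run the method,
    # then downgrade the response back to the caller's version (newest first).
    request = @adapters.reduce(request) { |req, adapter| adapter.request(req) }
    response = Api::Response.new
    @method.execute(request, response)
    @adapters.reverse.reduce(response) { |res, adapter| adapter.response(res) }
  end
end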

Our Route generation happens via class naming (this differs from Stripe’s approach, which uses a changes definition). For instance, we would define people#index like this:

class Api::Method::People::Index < Api::AbstractMethod

If we were writing a breaking change in V3 of our API, we would create a file like:

class Api::CompatibilityAdapter::People::Index::V3 < Api::AbstractCompatibilityAdapter

Api::AbstractCompatibilityAdapter is an interface which responds to both request(request) and response(response). This gives engineers a place to hook into the lifecycle and change the request or response of the latest version of our API to work with an older contract we established. We can handle deprecations such as parameter renaming or removal through this mechanism while maintaining only a single code path. Inheritance is not used in this system (other than our abstract classes, which really act as duck types).
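As an illustration, an adapter for an imagined V3 change that renamed a parameter might look like the following. The rename itself and the attributes method on the response are hypothetical; only the request/response hook shape comes from the interface described above.

# Hypothetical adapter for an imagined V3 change: the "email" parameter was
# renamed to "email_address". Older callers keep working because the adapter
# translates the request on the way in and the response on the way out.
class Api::CompatibilityAdapter::People::Index::V3 < Api::AbstractCompatibilityAdapter
  def self.request(request)
    if request.params.key?(:email)
      request.params[:email_address] = request.params.delete(:email)
    end
    request
  end

  def self.response(response)
    # `attributes` is an assumed accessor on the response for this sketch.
    response.attributes[:email] = response.attributes.delete(:email_address)
    response
  end
end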

This way of defining methods (“actions”) and compatibility adapters helps us achieve our “consistency in internal development” goal. There is only one way to approach API development internally: one must use these tools to build the API and cannot do something that might be familiar but incorrect, as one could with Rails.

Tying our API into Rails

So far, we’ve established the “Application” part of our earlier diagrams, but without a web interface. We have still chosen to use Rails as our web interface for a variety of reasons:

  • ActionController::Parameters for securely handling parameters, beyond a simple Hash
  • Easy to use router and response generation
  • Ties into our existing routes without something more complex like an Engine or a separate app

We achieve our Rails seam through only three adapters: one for requests, one for route generation, and one for response handling. We utilize a really natural part of Ruby to build these adapters, passing self to give a context to the adapter. Let’s see what that looks like for our simplest adapter:

class Api::Rails::RequestAdapter
  def self.request(context)
    Api::Request.new(params: context.params)
  end
end

This adapter is really simple: it takes the params object from a Rails controller and returns a new Request with those params.

Our Api::Rails::RoutesAdapter uses a similar technique. That file is slightly larger, but here’s the most important part of it:

routes.rb

Api::Rails::RoutesAdapter.define_routes(self)

routes_adapter.rb

pipeline = route.pipeline(version: version, max_version: @router.version)
options = { controller: "front", action: "execute", pipeline: pipeline, version: version }
ctx.send(route.type, route.path, options)

In this example, our routes.rb context is passed to the adapter which allows it to use methods like get or post as if it were in the router. One small oddity is passing the pipeline to the options of that call. We do this because we are able to access params[:pipeline] in our FrontController, which allows us to know what pipeline to use to serve a request.
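Pieced together, a sketch of define_routes consistent with that excerpt might look like the following. The Router object, its methods, and the loop structure are assumptions for illustration; only the three excerpted lines above come from our actual adapter.

# Sketch of one way the excerpt could fit together (assumed surrounding structure).
class Api::Rails::RoutesAdapter
  def self.define_routes(ctx)
    new(Api::Router.new).define(ctx)
  end

  def initialize(router)
    @router = router
  end

  def define(ctx)
    @router.supported_versions.each do |version|
      @router.routes.each do |route|
        # route.path is assumed to already carry its version prefix, e.g. "v2/people".
        pipeline = route.pipeline(version: version, max_version: @router.version)
        options = { controller: "front", action: "execute", pipeline: pipeline, version: version }
        ctx.send(route.type, route.path, options)
      end
    end
  end
end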

One controller to rule them all

We use the front controller pattern in our new API. This means that a single controller serves the entire API. Let’s look at the best part of that controller:

def execute
  version = params.fetch(:version)

  request = Api::Rails::RequestAdapter.request(self)
  request.authentication_context = authentication_context
  response = params[:pipeline].execute(request)
  Api::Rails::ResponseAdapter.response(self, response, version: version)
end

The use of params[:pipeline] is great here, as it allows our Router to determine which pipeline should be used for each Route a single time. If we were not able to pass objects as default parameters like this, we would need to have a mapping in the FrontController of endpoints to pipelines, which would duplicate our routing work.
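The third adapter is not excerpted above, but it follows the same pattern as the request adapter. A minimal sketch, assuming the response object exposes a body and a status, might be:

# Minimal sketch of the response adapter (assumed internals): it takes the
# framework-free response object and uses the controller context to render it.
class Api::Rails::ResponseAdapter
  def self.response(context, response, version:)
    # The version header name here is hypothetical.
    context.response.headers["X-Api-Version"] = version.to_s
    context.render json: response.body, status: response.status
  end
end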

So far, we’ve achieved our “pure REST”, “multiple ingress point support”, and “consistency in internal development” goals.

Documentation & Resources

So far, we’ve discussed our specific technique for separating our web layer from our application layer, but we haven’t covered most of our objectives yet. To me, the single biggest and hardest objective is documentation that is 100% correct. This means the documentation cannot be wrong when it comes to supported parameters and response fields.

We achieve parameter support by requiring that parameters are documented before they can be used in the request.

class Api::Method::People::Show < Api::AbstractMethod
  response_resource Api::Resource::Person

  param :path, :id, :string, :required, "Person ID"

  def execute(request, response)
    response.resource = find_person(request)
  end

  def find_person(request)
    request.scopes.people.find(request.params[:id])
  end

  swagger_api :show do
    summary "Fetch a single person"
    notes <<-COPY.strip_heredoc
      Fetches a single person, by ID only.
    COPY
    response :ok, "Success", :Person
  end
end

In the above code, if the id parameter were not defined in this method, then params[:id] would always be nil and the tests would not pass. It is possible that the grammar, voice, or meaning of the text “Person ID” is incorrect, but 100% parameter coverage is a good starting place for ensuring the documentation is correct.
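The enforcement mechanism itself is not shown in this post, but conceptually it is a whitelist. One way to sketch it, with assumed names and internals, is for the abstract method to record each declared parameter and only pass declared keys through to the request:

# Conceptual sketch (assumed names): the param DSL records allowed keys,
# and only declared parameters ever reach the method's request.
class Api::AbstractMethod
  def self.param(location, name, type, *flags, description)
    declared_params << { location: location, name: name, type: type,
                         flags: flags, description: description }
  end

  def self.declared_params
    @declared_params ||= []
  end

  # Called by the pipeline before execute; undeclared keys simply never arrive.
  def permitted_params(raw_params)
    allowed = self.class.declared_params.map { |definition| definition[:name] }
    raw_params.select { |key, _value| allowed.include?(key.to_sym) }
  end
end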

We can also see the swagger_api method call in the People::Show method above. This is used by our modified version of the swagger-docs gem, which can read our Method classes instead of just controllers. We can document the actual endpoint here, as well as response codes and values.

Another source of documentation is response attributes. We are in control of our own Resource concept which, again, gives us complete control over this aspect of the code.

class Api::Resource::Person < Api::AbstractResource
  description "Person in SalesLoft"

  integer :id, "Person ID", example: 1
  date :created_at, "Datetime of when the person was created", example: Time.now
  date :updated_at, "Datetime of when the person was last updated", example: Time.now
end

We are able to document all of the available attributes here, as well as example values and types. We turn this information into an ActiveModel::Serializer under the hood, as well as preserve it for our documentation generation needs.
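As a rough sketch of what “under the hood” could mean here, the attribute DSL might collect definitions and generate a serializer class from them. The internals below are assumptions for illustration, not our exact implementation:

# Rough sketch (assumed internals): the resource DSL records each attribute,
# and a serializer class is generated from that list.
class Api::AbstractResource
  def self.attribute_definitions
    @attribute_definitions ||= []
  end

  def self.description(text = nil)
    @description = text if text
    @description
  end

  def self.integer(name, description, example:)
    attribute_definitions << { name: name, type: :integer, description: description, example: example }
  end

  def self.date(name, description, example:)
    attribute_definitions << { name: name, type: :date, description: description, example: example }
  end

  def self.serializer_class
    names = attribute_definitions.map { |definition| definition[:name] }
    @serializer_class ||= Class.new(ActiveModel::Serializer) do
      attributes(*names)
    end
  end
end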

While the resource concept solves our documentation needs, it also solves a larger challenge: consistency in response attributes. Our goal for the API is to have at most two types of resources per noun, a Noun and an EmbeddedNoun. All normal API usage hits the full Noun, and embedded objects use the EmbeddedNoun. This brings challenges, such as the performance overhead of a fetch returning the same attributes as a list, but the consistency is worth that tradeoff. We have techniques in our resource concept to handle performance and edge cases.
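For example, an embedded variant of the person resource might expose only a subset of attributes (this particular attribute set is hypothetical):

# Hypothetical embedded variant: a slimmed-down resource used when a person
# appears nested inside another resource's response.
class Api::Resource::EmbeddedPerson < Api::AbstractResource
  description "Person embedded within another SalesLoft resource"

  integer :id, "Person ID", example: 1
end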

Documentation screen of our v1 API, which v2 is compatible with

We have now gone over how we’re achieving all of our objectives for the new API. I strongly believe that achieving these goals, along with full coverage of our API, will allow our customers to meet their most complex needs on our platform. In addition, the increased consistency and focus on performance will improve our platform as a whole, which is a nice side effect of this exercise.

The biggest lesson for me, personally, throughout this experience has been to formulate what we want from our API design and set out to achieve it, rather than figuring out how to fit into the prescribed box of the Rails way. I recognize that this need and end result isn’t for everyone, but I believe it will positively change how we build APIs here at SalesLoft over the next few months.
