Fixing Suboptimal API Integration Experiences

Nothing is more annoying than an incomplete or inefficient API integration experience.

Most of the time we integrate with an external API at VideoAmp, it’s like being a kid in a candy store full of chocolate-covered bugs. Some symptoms of this include:

Incomplete Specification- Finding out that there are missing or hidden API methods is unforgivable. In this day and age of great (and free) tooling for auto-generating API docs from source code (Apiary, Swagger, and a zillion others), there’s absolutely no excuse for this.

Fuxored Example Code- This usually happens when the docs are delivered via PDF: all of the sample JSON spans page breaks and is littered with curly UTF-8 quotes that have to be replaced before anything parses. If I can’t just copy-paste example code out of your docs, I’m going to hate your face very soon.

Given the advent of… well, WEB SERVERS, why not deliver your docs online? If you’re going to give me API docs via PDF, why don’t you also put them on a CD-ROM and snail-mail them to me as well?!?

Not really REST-ful- This is a rabbit hole of a discussion, but the biggest offenses here are ignoring the HTTP verbs [GET, PUT, POST, DELETE] and stuffing the action into the URI instead. Lacking metadata for pagination/offsets, and not providing versioning in the URI, are the REST of my pet peeves here.
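
To make the pagination gripe concrete, here’s a toy sketch of the kind of response envelope a client needs in order to walk a collection without guessing. The field names (`data`, `meta`, `next`) and the `/api/v1/foos` path are illustrative assumptions, not any particular provider’s schema:

```python
import json

def paginated_response(items, offset, limit, total):
    """Wrap a page of results in pagination metadata: where you are,
    how big the page is, how many records exist, and where to go next."""
    return {
        "data": items[offset:offset + limit],
        "meta": {
            "offset": offset,
            "limit": limit,
            "total": total,
            # None signals the last page; otherwise a ready-made next link.
            "next": (f"/api/v1/foos?offset={offset + limit}&limit={limit}"
                     if offset + limit < total else None),
        },
    }

# GET /api/v1/foos?offset=0&limit=2  (verb in the method, version in the URI)
page = paginated_response([{"id": 1}, {"id": 2}, {"id": 3}],
                          offset=0, limit=2, total=3)
print(json.dumps(page, indent=2))
```

Note the action lives in the HTTP method and the version lives in the URI; the URI itself only names the resource.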

Sandbox not full of sand- Let’s say all of the above is complete, but the sandbox is incomplete. That makes us all very sad pandas.

The first step in integrating with an external API is to write your own spec tests. This is the best way to build a sample implementation and to ensure over time that the API does not break. The ideal API provider either pre-releases versions to their sandbox or, better yet, lets you control which version of the API you are testing/integrating with via the URI.
E.g.: /api/v1.1/foo
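
A minimal sketch of what such a spec test might look like, assuming a hypothetical `/foo/:id` endpoint on a version-pinned sandbox URL (the base URL, endpoint, and field names are made up; the mocked session stands in for a real HTTP client):

```python
import unittest
from unittest import mock

# Hypothetical sandbox base URL; the version is pinned in the URI so a
# provider-side upgrade can't silently change what we test against.
API_BASE = "https://sandbox.example.com/api/v1.1"

def get_foo(session, foo_id):
    """Minimal client call under test; `session` is anything with a .get()."""
    resp = session.get(f"{API_BASE}/foo/{foo_id}")
    assert resp["status"] == 200
    return resp["body"]

class FooSpec(unittest.TestCase):
    """Spec tests double as a sample implementation: if the provider
    breaks v1.1, these fail in CI before production does."""

    def test_get_foo_matches_documented_shape(self):
        session = mock.Mock()
        session.get.return_value = {"status": 200,
                                    "body": {"id": 7, "name": "foo"}}
        body = get_foo(session, 7)
        # The fields the docs promise, nothing hidden, nothing missing.
        self.assertEqual(sorted(body), ["id", "name"])
        session.get.assert_called_once_with(f"{API_BASE}/foo/7")
```

Pointing the same suite at a live sandbox (instead of the mock) is what catches doc/implementation drift over time.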

When the sandbox is not full of sand, the provider usually asks us to integrate directly against production. This causes significant concern, as we are cautious with live calls that spend money or charge our customers. The sandbox implementation HAS to match the docs 100%.

Sandbox has workflow cul-de-sacs- Oftentimes, downstream events must happen before sample data exists. For example, at VideoAmp you can post a video campaign, but we must simulate bidding and winning auctions so that there is data to report on. We solve this by providing “out-of-band” API calls which trigger these kinds of events and load generic fixture data for them, so that API calls further down the testing workflow can be performed.
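
Here’s a toy model of that pattern. The class, endpoint names, and fixture values are all illustrative, not VideoAmp’s actual API; the point is the out-of-band trigger that unblocks the cul-de-sac:

```python
class FakeSandbox:
    """Toy in-memory sandbox with an out-of-band fixture trigger."""

    def __init__(self):
        self.campaigns = {}
        self.reports = {}

    def post_campaign(self, campaign_id, budget):
        """Normal API call: create a campaign, as a client would."""
        self.campaigns[campaign_id] = {"budget": budget}

    def trigger_auction_simulation(self, campaign_id):
        """Out-of-band call: load generic fixture data for the downstream
        events (bids won, impressions served) so that reporting calls
        further down the workflow have something to return."""
        if campaign_id not in self.campaigns:
            raise KeyError(campaign_id)
        self.reports[campaign_id] = {"impressions": 10_000, "spend": 42.0}

    def get_report(self, campaign_id):
        return self.reports.get(campaign_id)

sandbox = FakeSandbox()
sandbox.post_campaign("c1", budget=100.0)
sandbox.trigger_auction_simulation("c1")   # unblocks the cul-de-sac
print(sandbox.get_report("c1"))
```

Without the trigger, `get_report` would return `None` forever and the test workflow would dead-end at the first reporting call.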

Sandbox has too many kids playing- This is also in the “Say 500 one more time!!!” category. I have worked with a few providers which must have had severely flawed relational database models. Perhaps this is due to poorly indexed tables dumped from massive, untuned production databases, which yield 25–30s response times… and the HTTP timeout is 30s. When more than a few engineers are working in the sandbox, the endpoint times out, which makes it impossible to run our spec tests against the API in our Continuous Integration system.

It’s very easy to lose patience with this kind of provider and just move on to someone else.

Only 2 mins / day allowed in Sandbox- One provider in particular limited us to only 200 calls / day in the sandbox, probably because of the deeper problem above. This also prevents us from running periodic tests to make sure all-the-things still work and proactively sussing out bugs. If you rate-limit development, you’re just begging for engineers to bail.

Immortal Garbage Data- You should be able to delete any data you create. This becomes a problem when we run automated integrations making 100s or 1000s of calls per day. Before long, there are so many objects in the database that it’s far beyond what the system is optimized for. As a result, a call to “getFoo” returns 5k records instead of 15, and the endpoints time out.

When an API does not support DELETE, it’s hard to clean up after your tests. Alternatives are allowing DELETE in the sandbox but not in production, or providing another “out-of-band” API call which resets all the database fixtures.
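
One client-side way to keep garbage mortal, sketched against a hypothetical in-memory API (a real client would make HTTP calls): track every object a test creates and DELETE it on the way out, even when the test fails.

```python
import contextlib

class FakeApi:
    """In-memory stand-in for a provider API that supports DELETE."""
    def __init__(self):
        self.store = {}
        self.next_id = 0

    def create_foo(self, payload):
        self.next_id += 1
        self.store[self.next_id] = payload
        return self.next_id

    def delete_foo(self, foo_id):
        del self.store[foo_id]

@contextlib.contextmanager
def tracked_fixtures(api):
    """Record every object created inside the block and delete each one
    afterward, so repeated CI runs don't pile immortal garbage into the
    sandbox. Cleanup runs even if the test body raises."""
    created = []
    def create(payload):
        foo_id = api.create_foo(payload)
        created.append(foo_id)
        return foo_id
    try:
        yield create
    finally:
        for foo_id in reversed(created):
            api.delete_foo(foo_id)

api = FakeApi()
with tracked_fixtures(api) as create:
    create({"name": "test-foo"})
    assert len(api.store) == 1   # fixture exists during the test...
print(len(api.store))            # ...and is gone afterward
```

When the provider offers no DELETE at all, the fixture-reset call mentioned above is the only way to get the same effect.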

Snail Mail Support times- This one speaks for itself. If it takes you 2–3 days to respond to EACH of our emails, we lose RESpecT.


Some solutions we do at VideoAmp:

☒ Auto-Generated API Docs- We use apiDoc to auto-generate our documentation; this is a no-brainer.

☒ Sample Toys- Along with OAuth credentials and an endpoint, we provide sample implementations (in the form of spec tests) for common workflows in multiple languages.

☒ Stellar Unboxing Experience- The first-time user is able to GET, POST, PUT, DELETE data within minutes, see sample JSON responses, and mock up meaningful workflows in hours. The ability to “poke it with a stick” and get immediate gratification is our foremost goal.

☒ “Out of Band” API requests- These trigger mocked events which may: 1) simulate normal processes which take hours or days to generate data, or 2) simulate user-driven events which happen naturally in production.

☒ … and YOU get a Sandbox, and YOU get a Sandbox — With the advent of Docker, each sandbox is unique to the end-engineer, and with an “out of band” call, the entire environment can be reset. You get your own private VideoAmp which is separate from everyone else’s sandbox.
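
The per-engineer isolation plus out-of-band reset can be sketched like this (a toy model only; the class and the seed fixtures are invented for illustration, and the real thing runs each sandbox in its own Docker container):

```python
class SandboxFleet:
    """Toy model: one isolated sandbox per engineer, each resettable
    to the same seed fixtures without touching anyone else's state."""

    def __init__(self, seed_fixtures):
        self.seed = seed_fixtures
        self.sandboxes = {}

    def get(self, engineer):
        """Lazily create the engineer's private sandbox from the seed."""
        if engineer not in self.sandboxes:
            self.sandboxes[engineer] = dict(self.seed)
        return self.sandboxes[engineer]

    def reset(self, engineer):
        """Out-of-band call: rebuild this one sandbox from the seed."""
        self.sandboxes[engineer] = dict(self.seed)

fleet = SandboxFleet({"foos": 3})
box = fleet.get("alice")
box["foos"] = 999          # alice trashes her own sandbox...
fleet.reset("alice")       # ...and resets it, leaving bob's untouched
print(fleet.get("alice"))  # → {'foos': 3}
```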

☒ ♬ Here’s my number … so call me maybe! ♫ — We have dedicated API support. This means intra-day email response times, along with a Slack channel as a lifeline.

Originally published on November 4, 2015.
