Takeaways from API Specifications Conference 2020

Richard Rance
Vendasta
Sep 15, 2020

Each year I look forward to attending Saskatchewan’s premier tech conference, Barcamp Saskatoon. It was canceled this year, so I have been looking for somewhere else to get my fix.

When I saw an ad for ASC2020 on the OpenAPI Initiative’s website I got excited. The API Specifications Conference brings together leaders in the world of gRPC, GraphQL, JSON Schema, AsyncAPI, and OpenAPI (Swagger). Last year’s conference ran October 15–17 in Vancouver. It is a rescoping of the APIStrat conference, which ran for the 9 years prior.

Looking at the session list, many of the topics overlap with things I have recently been researching or that are on my to-investigate-later list. With the ease and low cost of attending an online conference, it was a no-brainer to hear other people’s perspectives. I would have gladly taken the two days off to attend. Thanks to Vendasta’s generous continuous learning program, every developer can use 5 days per year for events like this, and the company may even foot the bill.

I learned so much worth reflecting on that I am including a table of contents in this post.

This year’s format

Day 1 Sessions

Day 2

Final Thoughts

This year’s format

With everyone staying home, the conference moved online like so many other things. The user experience was a little rough as the team tried to stitch together 4 or 5 different platforms. I had the pain of picking out sessions in 2 of them and filling in profile details in each. I can only imagine the pain of the organizing team trying to keep session descriptions in sync. It would have been nice if this guide had been available on the main site instead of coming by email the day before the event.

The main event site used exceedlms.com to let you pick out sessions and watch live streams of either Zoom meetings or pre-recorded videos.

It was fairly well integrated with tribesocial.com to provide a chat client with a room per presentation plus support for direct messaging attendees.

They used Airmeet to simulate a networking session at small tables for those who were interested. Unfortunately, the link did not get shared within the main platform, so attendance was low.

One benefit of an online conference is that all the talks were recorded and will be posted online later. This is good because there were a few times I had to pick between multiple interesting topics. I’ll go watch the other talks once they are available.

Day 1 Sessions

After a day of attending sessions, I had many tabs open with links that were shared in the chat and several pages of notes. You can watch the recordings when they come out but here are my takeaways from each session.

Keynote 1: Standards and APIs — Mark Nottingham, Fastly

Mark gave us a history of web APIs beginning with MOMspider in 1993, then shifted into discussing where we are today, which can be summed up by xkcd 927.

Attempts to create standards have been made at a team, company, or even national level, such as an initiative by the Australian government. The IETF is starting a new HTTP APIs working group separate from the working group for HTTP itself. No special titles are needed to contribute, so go ahead and join the mailing list.

In defining standards, he says we are looking for guidance on the following:

  • URI design
  • Naming conventions
  • Format conventions
  • Rate limiting
  • Versioning and lifecycle
  • Extensibility
  • Authentication
  • Caching
  • Error handling
  • Retry behaviors
  • Events
  • Pagination
  • Filtering and sorting

Keynote 2 — Playing To Our Strengths by Lorna Mitchell, Vonage

Lorna was introduced as a driving force behind version 3.1 of the OpenAPI Spec, so it was interesting to hear her views.

The trend in the API world is to push for “Description First” in the form of an OpenAPI Specification .yaml file. Some people have taken it a step further to “Docs First”, but Lorna would like to go further still, to “Design First”.

In designing APIs she would like us to look at the flow of user interactions the same way that we do for apps with a graphical interface. What does a user need to be able to make an API call? Where do they find that information? How do you chain multiple API calls together and pass data around?

Examples with real values are worth 1000 words. This applies to both documentation and design walkthroughs.

Using JSON and YAML as the description language for APIs is boring but it is a form of inclusive design. Anyone can view and edit them with a simple text editor. This makes it possible for a whole ecosystem of tools to spring up around them. People with visual disabilities who can’t view your company’s documentation site can still put the OAS file into their tool of choice.

Having open tools that are free to use requires a different form of payment. You need to contribute back to them. Filing detailed bug reports is just as valuable as pull requests.

Looking to the future of the OAS, Lorna is excited about webhook support in version 3.1 and would like to see legacy parts of the spec dropped to make for a simpler interface.

Not your Uncle’s Auth: OAuth2.1 and Other Updates in Securing Your API — Vittorio Bertocci, Auth0

I wanted a bowl of pasta to go with Vittorio’s friendly Italian accent during this talk. He gave a fun presentation, using his video feed as a pointer on the slides.

You will need to watch the full presentation for details, but at a high level the 2.1 spec bakes a lot of best practices for implementing OAuth into the spec instead of leaving them open to interpretation, and it drops some parts that have been found insecure.

Based on his description, I expect to see a lot more single-use refresh tokens in the future. I’ve already seen instances where a failure between using the refresh token and storing the new one in the application degrades the user experience, since the user has to go through the auth flow again. Hopefully, we find ways to minimize that.
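To make that concrete, here is a minimal sketch (my own, not from the talk) of handling rotated refresh tokens in Go. The TokenStore interface and the endpoint parameters are hypothetical stand-ins; the point is persisting the replacement token the moment it arrives:

```go
package auth

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// TokenStore is a hypothetical persistence layer for the current refresh token.
type TokenStore interface {
	Save(ctx context.Context, refreshToken string) error
}

type tokenResponse struct {
	AccessToken  string `json:"access_token"`
	RefreshToken string `json:"refresh_token"`
}

// Refresh exchanges a single-use refresh token and persists its replacement
// before anything else, shrinking the window where a crash forces the user
// back through the auth flow.
func Refresh(ctx context.Context, store TokenStore, tokenURL, clientID, oldToken string) (string, error) {
	form := url.Values{
		"grant_type":    {"refresh_token"},
		"refresh_token": {oldToken},
		"client_id":     {clientID},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(form.Encode()))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var tok tokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return "", err
	}
	// The old refresh token is already burned; store the new one before
	// handing the access token to the rest of the application.
	if err := store.Save(ctx, tok.RefreshToken); err != nil {
		return "", fmt.Errorf("token rotated but not stored: %w", err)
	}
	return tok.AccessToken, nil
}
```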

Would you like some paging with that? — Glenn Block, Microsoft

Glenn is a member of the GraphQL HTTP working group and gave us a demo of how paging should be handled. There are two standard approaches.

Offsets: where you specify a starting record number and the number of records to return in the batch

Cursors: where you specify an opaque token, called a cursor, that references a specific point in a list of records, plus the number of records before or after that point that you want returned

Offsets are easy to use, but they do not scale for large data or handle concurrency well. They are best suited for display to users where only the first few pages of data will be viewed and missed or duplicated records are not an issue. They provide an easy way for clients to build their own links for jumping between pages of data.

Cursors are harder to implement, as the server needs to create the opaque token that is used. It could be as simple as the ID of the last record in the batch, but it could also be a more complex object that contains all the filters and sort descriptions needed to recreate the list and identify a point in it. The cursor value is usually base64 encoded and/or encrypted to keep clients from trying to guess it.
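For the more complex case, a server might round-trip cursors along these lines (a sketch of my own; the fields are invented for illustration):

```go
package paging

import (
	"encoding/base64"
	"encoding/json"
)

// Cursor captures everything needed to recreate the list and find the
// current point in it again. These fields are purely illustrative.
type Cursor struct {
	LastID   string `json:"last_id"`   // ID of the last record in the batch
	SortedBy string `json:"sorted_by"` // sort description used to build the list
	Filter   string `json:"filter"`    // filters used to build the list
}

// Encode produces the opaque token handed to clients. Base64 discourages
// guessing; encrypt the payload instead if its contents are sensitive.
func (c Cursor) Encode() (string, error) {
	b, err := json.Marshal(c)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(b), nil
}

// Decode parses an opaque token received from a client.
func Decode(token string) (Cursor, error) {
	var c Cursor
	b, err := base64.URLEncoding.DecodeString(token)
	if err != nil {
		return c, err
	}
	err = json.Unmarshal(b, &c)
	return c, err
}
```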

In the GraphQL world, they have largely adopted the GraphQL Cursor Connections Specification.

It adds 5 properties to each list of resources in the response:

  • cursor — A string reference to the current point in some list of records
  • hasNextPage — A boolean indicating that there is more data after the current point
  • hasPreviousPage — A boolean indicating that there is more data before the current point
  • startCursor — A string reference to the first record in some list of records
  • endCursor — A string reference to the last record in some list of records

By using one of the 3 cursors, we can build up the next request.

In REST, HATEOAS tells us to return full URIs for each of the links. If there is no more data before or after the current point, we omit the prev/next links from the response. The assumption would be that you are always using the same page size. I like the idea of including these 5 values as additional metadata so that clients can easily change the page size.
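A sketch of what that could look like as a Go response envelope; the pageInfo field names come straight from the spec, while everything else is my own shape:

```go
package paging

import "encoding/json"

// PageInfo mirrors the connection properties from the GraphQL Cursor
// Connections spec, reused here as REST response metadata.
type PageInfo struct {
	HasNextPage     bool   `json:"hasNextPage"`
	HasPreviousPage bool   `json:"hasPreviousPage"`
	StartCursor     string `json:"startCursor"`
	EndCursor       string `json:"endCursor"`
}

// Links holds full HATEOAS URIs. Prev and Next are omitted when empty,
// i.e. when there is no data on that side of the current window.
type Links struct {
	Self string `json:"self"`
	Prev string `json:"prev,omitempty"`
	Next string `json:"next,omitempty"`
}

// ListResponse wraps any list of resources with both ready-made links
// and the raw cursors clients need to pick their own page size.
type ListResponse struct {
	Items    []json.RawMessage `json:"items"`
	PageInfo PageInfo          `json:"pageInfo"`
	Links    Links             `json:"links"`
}
```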

Glenn provided several links in his slides for those who want more details.

Open APIs Wide Open — David Biesack, Apiture

The OpenAPI Specification has been open for extension via custom properties that begin with x- since day 1. David showed us some examples of the power this brings for automating processes.

  • Define settings for auto-generating SDKs
  • Store contact info for teams that manage each path
  • Define SLAs and compliance details (HIPAA, GDPR, country where data is stored, etc.)
  • Store variables that can be linted and then used to generate consistently formatted documentation
  • Map properties to server objects to automatically route requests
  • Tag a path/operation with “Traits” it exhibits, such as standard error responses or filter/paging params, then let a transformer add the boilerplate details (where $ref does not work)

Tips

  • Run transforms on your source-of-truth OAS files to convert them into simplified OAS files that tools can consume
  • Transform custom properties into native OAS properties such as description in any published files. Strip out properties that are part of your internal build process for better security.
  • Define a strict schema for your extension properties and validate it using a tool such as Spectral, openapi CLI, or OAS kit
  • Use objects instead of primitives for your extensions so that you can extend them further
  • Reuse well-known extensions or namespace your properties (“x-yourname-”)
  • Remember that custom extensions may or may not be supported in the tools you use

With all the custom extensions there is some community interest in creating a registry.

I’m in the middle of building a brand new REST API gateway and want consistency between documentation, client code generation tools, and the server’s parameter validation. I’ve been on the fence between writing a builder as part of my server that outputs multiple different OAS files vs. loading an OAS file full of custom properties into my server to define validation rules in my middleware. Seeing all the linting and transformation tools that already exist convinced me that extension properties are the way to go.
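For flavor, here is roughly how a Go server could pull a custom extension out of an OAS file with the gopkg.in/yaml.v3 package. The x-vendasta-handler property is a made-up example of a namespaced extension:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// operation models only the parts of an OAS operation we care about;
// the inline map catches everything else, including x- extensions.
type operation struct {
	OperationID string                 `yaml:"operationId"`
	Extensions  map[string]interface{} `yaml:",inline"`
}

func main() {
	raw, err := os.ReadFile("openapi.yaml")
	if err != nil {
		panic(err)
	}
	var doc struct {
		Paths map[string]map[string]operation `yaml:"paths"`
	}
	if err := yaml.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	for path, ops := range doc.Paths {
		for method, op := range ops {
			// x-vendasta-handler is a hypothetical extension naming the
			// middleware handler that should serve this operation.
			if h, ok := op.Extensions["x-vendasta-handler"]; ok {
				fmt.Printf("%s %s -> handler %v\n", method, path, h)
			}
		}
	}
}
```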

Managing API Specs at Scale — Jay Dreyer, Target Corporation

This talk dovetailed so nicely with the prior one that I missed noting the change of speaker in my notes.

Back in 2012, Target used Apigee to manage their APIs, before Apigee was acquired by Google. They then went on to build their own custom set of tools based on open projects and heavy use of extension tags in OAS files.

Their SpecVet tool runs on OAS files as part of their build process. The output of the automated lint rules is automatically added to pull requests to reduce the human load in API governance.

Going AsyncAPI: The Good, The Bad, and The Awesome — Ben Gamble, Ably

AsyncAPI is effectively a fork of OpenAPI designed for documenting async communications instead of REST. Ben gave us an overview of the format and how ably.io uses it to describe all the publish/subscribe-style messages they support. Ably is a SaaS service that handles transforming event messages between a surprisingly large number of service providers and protocols. At Vendasta we have built our own event broker tool, but Ably is definitely something to check out if we ever consider an overhaul.

Did You Know You Could Use OpenAPI for Security? — Isabelle Mauny, 42Crunch

I missed the start of this talk, as I initially attended one that amounted to a guy reading off slides summarizing a specification. I was hoping for more color around the use of that spec.

Isabelle talked about how our APIs should only accept things that have been explicitly allowed in an OAS file. That includes everything from paths to headers, and even the regexp to be run on every parameter or property in the body.

Last week I found this middleware for Go servers that will take care of that based on your OAS file.
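Here is a stripped-down sketch of the deny-by-default idea as net/http middleware. A real implementation would derive the allowlist from the OAS file; the hand-written table below just shows the shape:

```go
package gateway

import (
	"net/http"
	"regexp"
)

// rule captures what the OAS file explicitly allows for one operation:
// the method, the path, and a pattern for each expected query parameter.
type rule struct {
	method string
	path   *regexp.Regexp
	params map[string]*regexp.Regexp
}

// In a real gateway these rules would be generated from the OAS file;
// this hand-written entry is purely illustrative.
var allowed = []rule{{
	method: http.MethodGet,
	path:   regexp.MustCompile(`^/v1/accounts/[a-z0-9-]+$`),
	params: map[string]*regexp.Regexp{
		"fields": regexp.MustCompile(`^[a-zA-Z,]+$`),
	},
}}

// DenyByDefault rejects any request the spec does not explicitly allow.
func DenyByDefault(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for _, rl := range allowed {
			if r.Method != rl.method || !rl.path.MatchString(r.URL.Path) {
				continue
			}
			for name, vals := range r.URL.Query() {
				pattern, ok := rl.params[name]
				if !ok {
					http.Error(w, "unexpected parameter: "+name, http.StatusBadRequest)
					return
				}
				for _, v := range vals {
					if !pattern.MatchString(v) {
						http.Error(w, "invalid parameter: "+name, http.StatusBadRequest)
						return
					}
				}
			}
			next.ServeHTTP(w, r)
			return
		}
		http.NotFound(w, r)
	})
}
```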

She also recommends validating your responses to make sure you are not leaking internal details or error messages.

There are a large number of tools that will load an OAS file and then run fuzz tests on your servers or do other analysis for security holes.

Day 2

The second day of a conference is always better than the first. Perhaps it is because attendees are more settled in.

Keynote Panel: What’s the Specification for API Products?

- Mike Amundsen, amundsen.com, Inc.; Yina Arenas, Microsoft; Adam DuVander, EveryDeveloper; Gail Frederick, Salesforce; and moderated by Erik Wilde, Axway

The day started off with an amazing panel discussion on API governance at large companies. There was some discussion of reconvening the panel for a podcast. I hope that happens.

There was a lot of anecdotal evidence to back up the recommendations in other sessions. Even with a full page of notes, I can’t do them justice. You will just have to watch the recording when it comes out.

In discussing ways to improve API discovery, they mentioned that machine-readable files located in well-known locations are one way for robots to pull data into an API search engine. apisjson.org is one such proposal. There are a few other existing RFCs that could also be used. The OpenAPI Directory is an example that was shared in another talk. Italy is also working on a national API catalog based on OAS3 specs.

One-off and internal-only APIs were also a discussion point. Putting in the extra 20% to make them consistent and documented up front pays off big time in the long run. It led to some great quotes:

“I have never understood why someone wants to give a bad internal experience to all of their engineers.” — Adam DuVander

“It’s only my family walking over this bridge so we don’t have to be careful how we construct it.” — Mike Amundsen

APIOps is the next form of ops to be added to the IT space.

Create Delightful SDKs from OpenAPI — Lorna Mitchell, Vonage

Lorna returned with another pre-recorded talk accompanied by live commentary in the chat. Now that is how you fit twice the content into your allotted time.

She provided us with ideas for metrics to track in your APIs:

  • Time to first API call
    - This should be measured from signing up for an account to calling an endpoint outside of any “Try it Now” page
  • Number of downloads of your SDK, or projects using your SDK as reported by a dependency management tool
    - You can add custom headers to calls made by each of your SDKs to track the language and version (see the sketch after this list)
  • The amount of traffic on each endpoint
    - The same tools you use to analyze traffic in your UI can be used to analyze API calls. They work better than looking at raw logs.
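As referenced in the list above, here is a rough sketch of how a Go SDK might add that tracking header with a custom http.RoundTripper. The X-SDK-Client header name is my own invention:

```go
package sdk

import (
	"fmt"
	"net/http"
	"runtime"
)

const sdkVersion = "1.4.2" // stamped at release time

// telemetryTransport adds an identifying header to every call the SDK
// makes so the API side can break traffic down by language and version.
type telemetryTransport struct {
	base http.RoundTripper
}

func (t telemetryTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Clone before mutating; RoundTrippers must not modify the original request.
	clone := req.Clone(req.Context())
	clone.Header.Set("X-SDK-Client", fmt.Sprintf("go/%s sdk/%s", runtime.Version(), sdkVersion))
	return t.base.RoundTrip(clone)
}

// NewHTTPClient returns the client every SDK call should go through.
func NewHTTPClient() *http.Client {
	return &http.Client{Transport: telemetryTransport{base: http.DefaultTransport}}
}
```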

There are a few different tools that will take an OAS file and generate language-specific SDKs. Lorna believes that publishing the output of these tools without making any modifications provides very little value to your users. There are online tools that will let your users generate HTTP wrappers themselves.

Instead, you should have a developer fluent in the target language add a layer of helpers on top of any generated code. Remember to separate the code into packages so that you can regenerate the non-helper parts.

OneOf, optional, or deeply nested content is hard to represent in strongly typed languages. Hand-rolling helpers using the builder pattern is highly recommended.
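As a sketch, a hand-rolled builder over an awkward generated type might look like this in Go; the Message type stands in for generated oneOf code:

```go
package sdk

// Message stands in for an awkward generated type: a oneOf between text
// and image content, with optional fields expressed as pointers.
type Message struct {
	Text    *TextContent
	Image   *ImageContent
	ReplyTo *string
}

type TextContent struct{ Body string }
type ImageContent struct{ URL string }

// MessageBuilder hides the oneOf and pointer juggling behind a fluent API.
type MessageBuilder struct{ m Message }

func NewTextMessage(body string) *MessageBuilder {
	return &MessageBuilder{m: Message{Text: &TextContent{Body: body}}}
}

func NewImageMessage(url string) *MessageBuilder {
	return &MessageBuilder{m: Message{Image: &ImageContent{URL: url}}}
}

func (b *MessageBuilder) ReplyTo(id string) *MessageBuilder {
	b.m.ReplyTo = &id
	return b
}

func (b *MessageBuilder) Build() Message { return b.m }
```

A call site then collapses to a single readable line: NewTextMessage("hello").ReplyTo("msg_123").Build().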

There should be example code in the README for the SDK in addition to anything that is on your website. “If your code example in docs looks too verbose, then SDK needs more wrappers.”

It should be easy to do the right thing by accident. For things like webhooks that should be validated against a public key before use, you should provide helper functions that do the validation so clients don’t have to think about it. If you give them the raw generated SDK, they may get code that appears to work but is unsafe.

Lorna plugged another conference she is speaking at, apithedocs.org, which looks to have some more great content on documenting APIs.

Make Your OpenAPI and AsyncAPI Definitions Dynamic with Geneva — Stephen Mizell, SwaggerHub

Stephen demoed a tool he wrote called Geneva using an interactive presentation site. Geneva is a tool for making YAML programmable with support for multiple files, templates, variables and functions. It can be run as a CLI or in the browser.

This is one of several tools that could potentially be used to do the OAS file transformations discussed in prior talks. Dhall is a recommended alternative that is a little more feature-complete.

The Vocabulary of APIs: Adaptive Linting for API Style Detection and Enforcement — Tim Burks, Google & Nicole Gizzo, Google

During her time as a summer intern at Google, Nicole wrote a tool that analyzes the words used for properties and parameters in OAS files. It starts by counting how many times a word is used across the API, then checks for synonyms and suggests changes to be more consistent.
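A toy version of that counting step might look like this in Go (my own illustration, not Nicole’s actual tool):

```go
package vocab

import (
	"strings"
	"unicode"
)

// splitCamel breaks a property name like "createdAtTime" into its
// component words: ["created", "at", "time"].
func splitCamel(name string) []string {
	var words []string
	var cur strings.Builder
	for _, r := range name {
		if unicode.IsUpper(r) && cur.Len() > 0 {
			words = append(words, cur.String())
			cur.Reset()
		}
		cur.WriteRune(unicode.ToLower(r))
	}
	if cur.Len() > 0 {
		words = append(words, cur.String())
	}
	return words
}

// WordCounts tallies word usage across every property name in an API,
// the raw material for spotting synonyms like "delete" vs. "remove".
func WordCounts(propertyNames []string) map[string]int {
	counts := make(map[string]int)
	for _, name := range propertyNames {
		for _, w := range splitCamel(name) {
			counts[w]++
		}
	}
	return counts
}
```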

When built into your build process it can report on new terms being added.

The tool is currently available as a plugin for gnostic, and Google plans to continue developing it for their own use.

Tim also shared a link to a collection of best practices for improving APIs.

Get Rid of CRUD: Revealing Intent With Your API — Michael Brown, Microsoft

Michael introduced us to color-based modeling as a step on top of Domain-Driven Design. In it, events are composed as the links between objects. In a relational database we would store the events in a join table between the two objects. In a REST API we can define each event as a unique resource model.

Using an example, he showed us a pattern for adding commands to resources. He suggests a POST to the path /speaker/{id}/commands/createPresentation could be used to create a presentation event resource that links a speaker and a conference. The request body of the command contains the subset of properties needed to create the presentation resource.

The response would contain a command ID and status. If the status is not complete, you can GET the updated status using the ID. You could also build support for canceling commands using the ID and for reporting user actions. That would work well with a workflow management tool like Cadence or its successor Temporal.
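In Go terms, the request body and the command resource might look something like this; the types and status values are my guesses at the shape Michael described:

```go
package commands

import "time"

// CreatePresentationCommand is the body POSTed to
// /speaker/{id}/commands/createPresentation. It carries only the subset
// of properties needed to create the presentation resource.
type CreatePresentationCommand struct {
	ConferenceID string `json:"conferenceId"`
	Title        string `json:"title"`
	Abstract     string `json:"abstract,omitempty"`
}

// CommandStatus is returned immediately and from later polling GETs of
// /speaker/{id}/commands/{commandId}.
type CommandStatus struct {
	ID          string     `json:"id"`
	Status      string     `json:"status"`                // e.g. "pending", "complete", "canceled"
	ResourceURI string     `json:"resourceUri,omitempty"` // set once the presentation exists
	CompletedAt *time.Time `json:"completedAt,omitempty"`
}
```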

For synchronous resource creation, I could also see returning a 201 Created response with a Location header pointing to the newly created resource.

He suggests that the command pattern provides better auditability and tracking of events as well as a clearer user interface. I’m going to have to noodle on this. To me it feels like building RPC into REST because sparse PATCH operations are hard.

The {resourceID}/commands/{command} format does fit in well with the related-resource pattern that I have seen elsewhere. In it, you have {resourceID}/relationships/{relationshipType} for managing the links between resources and {resourceID}/{relationshipType} for returning a list of the related resources filtered by the {resourceID}.

Don’t Make It Hard for Us! What Makes a “Good” API? — Matthew Adams & Carmel Eve, endjin

The TL;DR of this talk is there is no one answer. Just a bunch of things to consider.

Audience

  • Who uses it
  • Who maintains it
  • There are two types of users: “Explorers” who learn by trying it out and “Map Readers” who like to read the documentation and have guides to follow.

Time

  • When do clients use your API? Are they new to the platform or seasoned internal users? Do you have a SaaS service that people use when they don’t want to invest time in building their own platform?
  • Do you design a migration API the same as a reusable API?
  • An API that is released after you need it is bad
  • An API that is maintained after you no longer need it is bad

Technology

  • How is it being implemented
  • What impact does that have on the features you expose

Scope

  • One-off, or part of a larger landscape?
  • Providing simple hello world endpoints as part of your ecosystem is a great way to onboard clients. You should include public, authed and user restricted demo endpoints.
  • Endpoints often start either very specialized or very general. Over time people make changes until the endpoint becomes an ugly mess somewhere in the middle. Decide which end of the spectrum you prefer and then stay consistent.
    - One approach is to create low level REST endpoints that are general in nature and then create functions in your SDKs that are specialized
    - Another is adding specialized commands to general resources as Michael Brown proposed in his talk

API Specification and Microservices Communication Patterns with gRPC — Kasun Indrasiri, WSO2 & Danesh Kuruppu, WSO2

gRPC is the primary API style we use at Vendasta, so I did not learn much from the talk, but I did have some interesting conversations in the chat.

We concluded that gRPC has some great features, especially for high-performance service-to-service communication, but that you will almost always want some other communication method alongside it.

gRPC with protobuf is binary-based. You generally need to run the .proto file through a generator for each language a client will be written in to create the client-side encoder. That is a major pain point for external clients who are not familiar with gRPC. If you want to avoid it, you need to maintain SDKs in every language your potential clients use. That is a lot more work than just providing an explorer webpage over an OpenAPI file.

By its very definition, gRPC is a way of running Remote Procedure Calls, which makes using it feel like calling a local function. That is a different design pattern than REST, which is a way of representing the state of a resource in a generic fashion.
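That local-function feel is easy to see in generated Go client code. Assuming a hypothetical Greeter service compiled with protoc, the remote call is just a method invocation:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/hello/gen/greeter" // hypothetical protoc-generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The generated stub makes the network call read like a local function.
	client := pb.NewGreeterClient(conn)
	resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "ASC"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp.GetMessage())
}
```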

As a communication protocol it is finally starting to gain traction, and more tools are popping up. At Vendasta we built our own HTTP-to-gRPC transcoder; however, there is an open source one now, and other tools are being built on top of it using server reflection. Examples include a curl-like tool and a few web-based API explorers with cool try-it-now form builders that are better than our Postman collections.

Final Thoughts

The strength of using open standards is that it is possible for other people to build projects on top of them. There were many links shared during the conference to tools people have built to help in this growing space. That is also a weakness, as the tools do not all share the same features and suffer from forks or multiple projects trying to do the exact same thing.

JSON Schema and OpenAPI are converging on a shared set of features. Hopefully other projects can do the same.

Over lunch on the second day I had a good chat with Stuart McGrigor, who maintains a fork of ReDoc. We talked about the problems with the many different forks of ReDoc that are out there. In my opinion, ReDoc.ly is not the best steward of the open source project because they discourage development of features that they want to keep exclusive to their commercial offering. We would like to see a mega-fork consolidate the efforts of the many different fork maintainers. Based on other comments at the conference, ReDoc is one of the more popular viewers for OAS files.

In case you did not catch on from the many pages of notes I took, I learned a lot over these two days. I’m looking forward to putting it into practice over the next few months.
