The Ultimate API Publisher’s Guide

Built on the shoulders of over 5 million developers, let’s review some best practices for documenting your API.

Joyce Lin
Better Practices
29 min read · Aug 30, 2018


Photo by Glen Noble on Unsplash

Part 1: first, a word about documentation

Part 2: building awareness of your API

Part 3: gaining adoption of your API

Part 4: maintaining your API documentation

Part 5: a recipe for generating API documentation with Swagger and Postman

Part 1: first, a word about documentation

Who are you calling an API publisher? If your API is being used by others, then you’re an API publisher whether you’ve documented it or not. With the rapid growth of APIs, it’s likely that if you’re not already an API publisher, you will be someday soon.

There are a few different types of APIs to consider when talking about generating API documentation.

Public APIs

These are the APIs available to the broader developer community. When you’re documenting public APIs, you might be encouraging customers to learn and use your technology. Perhaps you’re a technical writer in charge of API documentation. Or perhaps you’re a developer evangelist prototyping an integration that will be used by independent developers.

Postman 2017 API survey

Partner APIs

These APIs might be shared as web services with clients or you might be the client consuming the service. For partner APIs, or private 3rd party APIs, you might be a solutions engineer documenting how an integration flow will work and allowing a prospect to visualize the steps more clearly. Or perhaps you’re a sales engineer creating a custom API for a client and want to train the client’s development team on your technology’s functionality.

Private APIs

These APIs are for services that you share internally within your team. According to last year’s Postman Annual API Survey, developers spend almost ⅔ of their time working with the last type of APIs, the private ones used internally within your organization. You might be providing documentation for team members who are helping to develop the API. Perhaps you’re creating onboarding materials to introduce the project to new collaborators. Or perhaps you’re a support engineer providing documentation for your team so you can reproduce customer issues more easily.

For any and all of these scenarios, documentation is important.

So why is API documentation important?

Postman 2017 API survey

According to the Postman survey, most developers spend more than 10 hours every week working on APIs, with more than half of that group spending in excess of 20 hours every week. That’s a lot of time invested in learning, using, and referencing APIs.

With all the time developers spend working with APIs, the documentation can make or break the developer experience.

Let’s consider the perspective of a developer who is interested in adopting a new service.

  1. How do you bring awareness of your API to the developer community or to the rest of your organization?
  2. How do you influence their decision to adopt your API?
  3. How do you maintain the documentation so your developers can continue using your API?

Part 2: building awareness of your API

What are all the ways in which developers find out that your API exists? Business people call these marketing channels. These are all the ways to reach your audience and let them know what you’re all about.

For public APIs, anything that increases your likelihood of discovery is good.

  • Focus on your search engine optimization (SEO) to increase the ranking of your website, developer portal, or API documentation
  • Test the effectiveness of paid advertisements in relevant digital contexts for your target audience
  • Participate in community events and hackathons to introduce your API to new users
  • Join the discussion about relevant issues on support platforms like Stack Overflow, community forums, and social media
  • Leverage internal advocates and external API evangelists to share stories of how they use your API
  • Partner with well-known technologies that already have a large community by building integrations and tutorials
  • Submit your API to directories like the Postman API Network or ProgrammableWeb so that it’s easier for people who are looking for specific APIs to find them
  • And the list goes on

Judgy bunch of devs

Developers hold strong opinions about APIs and technology that they may have never even tried. Where do these judgments and assumptions come from?

Word of mouth holds a lot of weight in the developer community, and a technology’s reputation often precedes it. Online reputation and chatter bubble up on social media, in forums, and through personal referrals.

Say your API is brand new and hasn’t yet established any street cred. Documentation will likely factor into first impressions. When a developer is tasked with discovery and initial research, the documentation may be one of the first things they scan to suss out the functionality, ease of implementation, and compatibility with their existing tooling.

Some APIs are so scantily documented that developers must rely on cryptic server responses to understand what’s going on. Other APIs are documented with such clarity and detail that you can’t help but think the underlying technology must be equally elegant and easy to use.

Engineering departments that use Postman’s workspaces to organize their work have cited the ability to rely on Postman as a single source of truth for their APIs as a real benefit. Without a clear tool or workflow to keep tabs on your APIs, it’s not unheard of for teams to work on redundant projects without knowing what other members of their team are working on.

Workspaces to organize stuff by functional team, product, or project

For these private APIs, documentation is useful for socializing the API design and functionality. Just like with public APIs, good documentation gives your private API street cred that the underlying tech is sound and allows your collaborators and stakeholders to visualize the potential.

Part 3: gaining adoption of your API

API documentation is commonly cited as the most important factor in choosing an API. Once a developer has finished “kicking the tires” and decided to use your API, or at least try it out, they flip into learning mode. If they haven’t looked at the API documentation yet, they will now.

According to the Postman survey, developers felt the APIs they work with were not documented as well as they expected.

So what makes for better documentation? Over 45% of respondents cited better examples, real-world use-cases, and sample code as a way to improve API documentation.

Postman 2017 API survey: what would improve API documentation?

If a developer can get up and running with your API quickly, then the chances of them looking for an alternative drops significantly.

Here are some ways to smooth the road to adoption:

Guide the onboarding 🏁

This is the single factor that can most significantly impact your adoption. It’s tempting to tell your new user about every bit of functionality your API offers, all the potential use cases, and to convince them that they’re making a good decision to use your API. It takes greater skill to abstract the onboarding flow to introduce only the pertinent details about your API and show them a quick win.

Explain authentication 🔐

There are a number of ways to authenticate an API, and some APIs provide multiple ways to authenticate depending on the usage. Authentication is one of the first barriers to entry for your API, and it’s frequently one of the biggest. A user might be new to this particular endpoint, this auth method, or even APIs in general. It’s helpful to new users for the documentation to explicitly walk through the authentication process.
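For example, showing exactly where credentials go removes a lot of guesswork. Here’s a minimal sketch in Node.js, assuming a hypothetical endpoint that expects an API key in an X-Api-Key header:

```javascript
// Minimal sketch: sending an API key in a request header.
// The hostname, path, and header name are hypothetical placeholders.
const https = require('https');

const options = {
  hostname: 'api.example.com',
  path: '/v1/orders',
  method: 'GET',
  headers: {
    // Keep credentials out of source control; read them from the environment.
    'X-Api-Key': process.env.EXAMPLE_API_KEY
  }
};

https
  .request(options, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
  })
  .on('error', console.error)
  .end();
```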

Interactive demos 🖐️

Some developer portals include a built-in playground to run requests so developers can interact with the API. One option is to provide a sandbox environment populated with test data so a user’s personal credentials are not required for the demonstration. Another option is to generate user credentials and provide access to the user’s actual data so the interaction is more personally meaningful.

Quickstarts and real life examples 🎯

Besides the initial onboarding, it’s helpful to have tutorials or guides as a quick way for new users to get started with various scenarios. Reducing this early-stage friction is just as important as building out the core functionality of the API. It’s much easier to maintain your users’ attention once they have experienced an initial success and can see evidence of the value your API brings to them.

Sample code 💻

If your API doesn’t offer any client libraries, providing a variety of code samples helps users troubleshoot their own implementation in their chosen language or framework. Code samples can shed more light on what’s happening under the hood, and they can then be modified to suit a particular use case.

Organize and reference 📚

Once your users create a working prototype and expand the basic functionality, they will inevitably run into an issue. If they can successfully troubleshoot an issue by searching through your crisp and structured API reference, then you’re in good shape.

Enabling a developer to get up and running quickly and then empowering them to solve their own issues is critical to a good developer experience. This has a direct impact on initial adoption, continued usage, and willingness to refer your API to their peers.

Part 4: maintaining your API documentation

So let’s assume you’ve documented your API. Now what? How do you keep your API docs up to date and accurate?

The documentation should be the most accurate reflection of how your API is expected to function. When the product changes, the documentation must be updated, but it can be challenging to keep API documentation on the same page as the API.

Definition of done

It’s not ideal, but it’s a common practice for documentation updates to lag behind product updates. Either the documentation is completed as an afterthought, or even worse, the documentation is only completed once someone has a problem with it either internally or externally.

One way to be proactive is to make documentation a required step of the deployment.

The documentation is a part of your API.

Whoever is tasked with determining and assessing the team’s definition of done will require adequate documentation before the product can be shipped. This can be handled by your project tracking system with an assigned owner, review process, and deadline. Additionally, this can be reinforced as a part of the user acceptance testing (UAT). If testers are unable to accomplish certain tasks after interacting with your API, the documentation should be fortified.

Versioning

When the product development proceeds quickly, it helps to have a process to version the API as well as the corresponding documentation. You can use existing configuration tools or a manual process to keep things organized.

There are several ways to version your APIs. So how should the corresponding API documentation be versioned?

For minor or patch versions, differences can be called out within the same documentation. For major version differences, it’s likely that documentation for both versions will need to be maintained for at least some interim period until the earlier version is deprecated. Users should be clearly informed that there’s a newer version of the docs, so they can easily navigate to the latest version and won’t be surprised if and when you finally decide to deprecate the earlier one.

Teams with multiple versions of an API have handled this a couple different ways* using Postman collections. BetterCloud created separate collections to reference historical versions of their private APIs. Square included their v1 reference as a separate folder within their publicly available collection.

*Note: the ability to fork a version of your collection, complete a peer review, and then merge is coming soon to Postman.

Continuously improving your API documentation

It’s one thing to make sure you have something that your users can reference, but how do you continue improving the documentation and make it more robust? In an ideal world, the continuous improvement of your API documentation goes hand in hand with maintaining your documentation.

Curse of knowledge: a cognitive bias that occurs when someone unknowingly assumes that others share the same basis of knowledge.

Ever heard of the curse of knowledge? This bias is evident when a new team member with no shared context hears your team speaking in a slew of acronyms and company-specific terminology. Think about the terminology your team uses that might alienate newcomers.

The more knowledgeable someone becomes about a topic, the more cognitive effort it takes for them to explain it to a newcomer. In fact, this frequently requires an explicit step to put yourself in the shoes of a new user and imagine what they know or don’t yet know.

With technology in general, there are so many new tech workers who might have limited experience in the space and can benefit from clear and simple language. With the breakneck growth of APIs in particular, making it easy to consume your API is a market differentiator.

So what does the curse of knowledge mean for someone writing API docs? First and foremost, think about your user.

  • Will a new user be able to get started quickly with a hello world? Once they do, is there a clear path for them to continue learning?
  • If someone lands on a specific page within your documentation, will they be able to understand everything? If not, will they be able to find a reference or more resources in that context?
  • If someone has a specific issue that they’re dealing with, will they be able to find documentation that sheds more light on their use case? If not, is there an accessible way for them to seek additional resources?

The idea is not to be redundant or overly verbose. Instead, introduce new concepts and terminology deliberately for each of these user scenarios. If you’re introducing a new concept within the local context of a user’s experience, provide an inline description or a hyperlink to a definition page.

Listen to feedback from your team members

Frequently, the people tasked with writing API documentation are the ones with the broadest knowledge about the API. This might be the developer who understands the underlying technologies best. It might be a technical writer who is well versed in the ins and outs of the product.

While it’s logical for the person or team who is most familiar with the API to also document the API, the curse of knowledge reminds us that it might be more challenging for them to communicate their understanding to others.

When interns or other new people join your team, their feedback is invaluable, since it’s rare that you’ll be able to put on your new-user hat as fully as they can. Other valuable reviewers are people in surrounding functions who already have an abstract understanding of the API but may not be well versed in how it operates under the hood. Their fresh perspective will point out when you’re using insider jargon that is incomprehensible to the average user.

Listen to feedback from your users

Think of a time you started poking around in the docs and got lost or overwhelmed. Now think about the last time you came across a typo or inaccuracy in API docs.

Chances are that you stewed on it, but never provided any feedback to the authors of the documentation.

As an API publisher, make it easy for users who are willing to provide feedback to do so. And then listen to the feedback!

Docker offers an example of open-sourcing their documentation on GitHub so that anyone in their community can edit the docs by forking the repository and submitting a pull request. You can also request a docs change by submitting an issue.

PHP offers another example of technical documentation that includes a section for user contributed notes at the bottom of every page. If you’re reading something in the docs that doesn’t quite make sense, you can ask questions or add your comments directly on that page.

For both the Docker and PHP docs, they have made it easy to provide feedback at the time you’re reading through and referencing the docs. It’s relevant, it’s easy, and you’re more likely to do it.

Look at important metrics

For other product feedback, you might be able to look at your metrics, hold focus groups or usability tests, or do market research to get the feedback and validation that you’re looking for. For documentation, it’s not so straightforward.

If your API documentation is subpar, you might experience lower adoption and usage. But lower than what? Hard to tell.

For web-based API documentation, there are a number of web metrics that can provide insight into optimizing your documentation.

  • Most viewed pages
  • Most clicked hyperlinks
  • User journey from a typical landing page
  • Most searched terms
  • Search terms returning zero results
  • Common referral sources

You can look at trends over time, directional changes after an update, or try A/B testing content, style, and formatting.

Beyond direct documentation metrics, frequently asked questions provide a qualitative and quantitative means of addressing pain points. Tag and identify the top issues from your support ticket platform, forum, bug tracker, or even from face-to-face discussions.

Can these issues be solved more easily with some documentation? If your support team continually answers questions without a resource to link to, this content should be prioritized in the queue.

Can these issues be solved with better documentation? If your support team continues to receive questions about something that’s already been documented, this could be attributed to a few reasons.

  • Unidentified gotchas: the API itself might be exhibiting unexpected behavior, and a useful error message can guide the user to the correct solution. If updating the API is not a viable solution, you should call it out and document the accepted solution.
  • Counter-intuitive search and navigation: the documentation might be hidden to the user because they’re expecting documentation to be associated with a different concept or workflow or they don’t know how to refer to the issue according to the company-specific terminology. This is another example where inline definitions and cross-referencing hyperlinks would help.
  • More clues and context: the documentation alone may not be sufficient, and a step-by-step tutorial or code samples will provide additional clues and context to implement a solution. Providing examples from different perspectives can shed light on a user’s particular use case.

This type of feedback will identify edge cases, common gotchas, and inform what needs more clarification in the documentation.

Keeping your documentation up to date in Postman

We talked about tips for maintaining your documentation, and why you should do it. Now let’s dig a bit deeper into how to keep your documentation in Postman.

First of all, there are several ways to create documentation in Postman.

  • Automatically generate a web view
  • Embed a Run in Postman button
  • Share a collection link
  • Share a JSON file

While the last three options are not officially “documentation”, people still use them to fulfill the purposes of documenting their APIs for internal and external audiences, so let’s include them in the discussion.

Let’s start with automatically generating documentation for your APIs. Postman will generate and host web-viewable documentation based on the metadata in your Postman collection. This documentation can be viewed in a browser, accessed privately within your Postman team or publicly if you choose to publish it.

If you plan on making changes to the API, Postman syncs your updates in real time. Any changes that you save to the underlying collection will be reflected instantaneously in the documentation on the web.

Documentation generated by Postman

The documentation webpage includes a default Run in Postman button at the top that allows users to download a copy of the underlying collection to their instance of Postman. Your users can start interacting with your API right away in the Postman app.

Clicking the Run in Postman button downloads a copy of the collection.

If you published your collection(s) in the Postman API Network, the button is refreshed whenever you save changes to the underlying collection. You don’t need to worry about keeping your button updated since that step happens automatically. However, anybody who has previously downloaded the collection will still be working off the version they downloaded.

Importing a collection from the API Network

Another option for sharing API documentation is to create a stand-alone Run in Postman button. Some publishers will embed the button in a blog post or in the README file of a repository. Once again, users will work off the version of the collection they download. Notice there’s a different process to update the underlying collection. API publishers must manually refresh the collection button, and then users can download the latest collection.

The same rule applies if you’re using a collection link to send to a co-worker or collaborator. Users will work off the collection they import at that point in time. Updating the underlying collection requires the person sharing the collection to manually refresh the collection link, and then all users can import the latest collection to work off that version.

The last option for sharing API documentation is to share a physical file. In the case of Postman, you can export a JSON file of the Postman collection from the Postman app. Although frequently used, this is the least attractive option if you’re in the process of developing an API and changes are inevitable. In this scenario, version control is cumbersome. To maintain any changes in a collaborative scenario, you will need to pair this with some other version control system such as checking the file into git.

In a different scenario, if you’re documenting a transient use case perhaps for debugging, it’s very easy to send over a collection link or physical file to reproduce the issue for a colleague.

With all these options for sharing your collection’s documentation, you may be wondering which one to use. That’s up to you, but here are some factors to consider:

  • Maintainability
  • Describability
  • Accessibility
  • Discoverability

Maintainability: If your collection is in a state of development, it’s likely to change and people may be providing feedback. In this case, ensuring that everyone is reviewing the same version is important. On the other hand, perhaps your collection is pretty well-baked, changes are unlikely, and you just want to allow people to reference it. Keeping track of the latest version becomes less important.

Describability: If this is an internal collection and most collaborators already have a handle on how the API works and functions, then you may not need to fully explain and describe what’s going on. It’s a gamble, but some people are in this lucky boat. If you’re in an organization with new team members, partners, or external consumers, then teaching them how to use the API is necessary.

Accessibility: How other people access your Postman collection is fully within your control. Permissions for a web-viewable collection can be limited to the individual, the team, or opened up to the broader public. Access to a collection via the Run in Postman button, collection link, or JSON file is based on whom you share it with. If you email someone a collection link and they forward the email to someone else, anyone with the collection link can access your collection.

Sometimes your API documentation is used by non-technical team members, or those who might not be Postman users. Web-browsable documentation can be published so that anyone with an internet connection can access and reference it.

Discoverability: Along the same lines as allowing your users to access the documentation, allowing your users to discover your documentation is also important. For publishers who want their API to be discovered by external consumers, there’s an option for Postman users publishing their documentation to submit their API to the Postman API network. This allows other Postman users to search for and import a collection into their local instance of Postman.

This doesn’t mean the other options are not discoverable; however, the ability for others to discover your API is not inherent in the mechanism for documentation. With these options, discoverability depends on how and where you share your collection, like embedding a stand-alone Run in Postman button in a tutorial on your developer portal.

Assess how well each option suits your needs

Part 5: a recipe for generating API documentation with Swagger and Postman

Millions of developers, technical writers, and product owners have already discovered how to publish beautiful, web-viewable API documentation with Postman.

In other blog posts, we’ve talked about what makes for a good collection and how to document an API using Postman. The ease of generating documentation has made it one of the most popular features among Postman users.

As an API publisher, you can use Postman to:

  • Describe URL, method, headers, payload, etc.
  • Indicate required and optional variables
  • Explain authorization and authentication
  • Demonstrate examples of a request and response

As an API consumer, you can use Postman to:

  • Reference the API documentation in a web-viewable format
  • Generate code snippets to paste into your own code
  • Download an executable description of the API into Postman

Postman is an easy solution for those who are currently shopping for a documentation platform for their public, partner, and private APIs.

If you’re tied to legacy platforms and tools to generate specifications for your API, Postman plays well with others. It’s easy to create a workflow that suits your preferred tooling.

Most users still choose to import requests into Postman from cURL. Among users who import a full API description format, however, Swagger is the most popular.

Swagger is the most popular API description format imported into Postman

The OpenAPI specification (previously called Swagger) is an API description format for REST APIs. The OpenAPI specification serves as a contract to drive the API development, and the specification document is machine-readable.

At some point, a technical writer will need to layer on more details to the API documentation so that humans can understand the documentation. For the OpenAPI specification, this enrichment is frequently added as Swagger annotations.

Prepare the data for Postman

Many people choose to begin the API design step directly in Postman by describing all the elements of an API, like the path, headers, or payload. From there, the request or collection can serve as a starting point for discussion and help internal teams document their API as they’re building it.

If you’re already using Postman to design your API, you can skip the rest of this article and proceed directly to documenting your API. Any changes you make to the collection will be reflected in real time on the web documentation.

If you’re using an API description format like OpenAPI, RAML, or API Blueprint, there’s one more step required to prepare the data for Postman.

In this example, we’ll consider the OpenAPI specification version 2.0 (Swagger 2.0).

As we covered, you can describe and generate API documentation within the Postman app. However, if you already have this information in a Swagger file and would like to preserve your annotations while importing the data into Postman to automate your testing, generate your documentation, or whatever else, here’s how to do it.

What’s the difference between the Swagger format and Postman format?

Some teams use Swagger as their API description format to specify the design and enforce the development of their APIs. The Swagger format describes your API in a manner that adheres to the OpenAPI Specification (OAS). The Swagger file type is JSON or YAML.
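For reference, a stripped-down Swagger 2.0 file (shown here in JSON; YAML works equally well) might look like this, with a single hypothetical endpoint for illustration:

```json
{
  "swagger": "2.0",
  "info": {
    "title": "Pet Store (example)",
    "version": "1.0.0",
    "description": "A minimal, hypothetical Swagger 2.0 description for illustration."
  },
  "host": "api.example.com",
  "basePath": "/v1",
  "paths": {
    "/pets": {
      "get": {
        "summary": "List all pets",
        "responses": {
          "200": { "description": "A list of pets" }
        }
      }
    }
  }
}
```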

Everything in Postman is backed by a Postman collection. The collection is also the foundation for a lot of the advanced functionality within Postman, and can be represented as a JSON file.

How does Swagger work with Postman?

For an ad hoc use case, you can import your Swagger file directly into the Postman app. The current in-app import feature converts Swagger v1 and v2 to Postman v2.

For an automated solution, you can use your favorite converter to sync your specs in a script. In a little bit, we’ll see how to expand this automation.

When I convert Swagger to Postman, I’m missing X. Please fix it.

Some of the current methods to bring Swagger into Postman result in a certain loss of fidelity. Depending on your interpretation and use case, this conversion will not result in a perfect translation.

The process of converting one format to another is somewhat subjective. As we will soon see, how one team uses and interprets one property might not be how another team does. Even if your scenario is special, you can replicate this example incorporating your own tweaks to build a custom converter that suits your needs.

Actually preparing the data for Postman — get on with it

Let’s assume we already have a Swagger file, and would like to preserve the metadata and annotations while transferring the data over to Postman. We might be using Postman for testing automation, generating API documentation, or anything else.

Before we begin, make sure you have Node.js and a package manager like npm installed on your machine. This example uses Node, but you could use your favorite scripting setup.

Step #1: GET the Postman collection using the Postman API.

The Postman API is an easy way to access and update your Postman data programmatically. Let’s start by using the Postman app to make a request to the Postman API to GET all of our collections. You will need your Postman API key to access your Postman data.

In the response, search for your collection to identify the collection_uid. We’ll need this value, so hang on to it.

Postman API to GET all collections

If you’d like, you can also GET a single collection using your collection_uid to inspect the response. You can see the JSON representation of your collection.

Postman API to GET a single collection
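If you prefer to script this step instead of clicking through the app, a minimal sketch in Node.js might look like the following; it assumes your Postman API key is available in a POSTMAN_API_KEY environment variable:

```javascript
// Minimal sketch: list your collections via the Postman API and print each
// collection's name and uid.
const https = require('https');

const options = {
  hostname: 'api.getpostman.com',
  path: '/collections',
  method: 'GET',
  headers: { 'X-Api-Key': process.env.POSTMAN_API_KEY }
};

https
  .request(options, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => {
      const { collections } = JSON.parse(body);
      // Note the uid of the collection you want to update in later steps.
      collections.forEach((c) => console.log(c.name, c.uid));
    });
  })
  .on('error', console.error)
  .end();
```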

Step #2: Pick a converter.

If the converter that you’re using gets the job done perfectly, go ahead and sync your specs. If it does most of what you want to do, but not everything, then we’ll have to tweak the code.

In this example, let’s start with an open-source project from Postman called swagger2-postman2-converter. This tool converts the Swagger 2.0 format to Postman 2.0. It translates and preserves schema elements like folders, sample request bodies, responses, and authentication helpers.

Once you’ve selected the converter that you’d like to work with, you can either write a few extra steps to execute after using an existing converter or tweak the converter itself.

Let’s try the second option. We will fork an existing converter and add a few steps within the converter to handle our customizations.

Step #3: Update the converter’s code to do exactly what you’d like it to do.

In this example, we have some Swagger annotations that we would like to be represented in the Postman collection.

How about something like this?

Plan to convert Swagger annotations to Postman collection elements

Let’s take a peek at the code to see how our converter works. Looking at a file called convert.js, the converter takes in the Swagger format, initializes a Postman Collection, and then begins traversing the Swagger object and translating it into the collection’s elements. Some of this code uses the Postman Collection SDK to create and update the collection. The code in this file also relies on helper functions created in a file called helpers.js.

For the first update, let’s update convert.js since this is a collection-level change. The converter already translates the @Info annotation’s description into the Collection description by passing this content into the describe() method on the Collection object. Let’s update the content being passed through by adding the contact name and email.

Pass through contact name and email

Now let’s do the same for the @License annotation. If we want to use markdown syntax, add type/markdown as a second parameter to pass into the describe() method.

Pass through license info
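The exact change depends on the converter you forked, but a rough sketch of this convert.js tweak might look like the following. The variable names and the 'text/markdown' content type are assumptions for illustration, not the converter’s actual code:

```javascript
// Rough sketch (illustrative names): fold Swagger's info.contact and
// info.license into the collection description before calling describe().
var info = swaggerData.info || {},
    contact = info.contact || {},
    license = info.license || {},
    description = info.description || '';

if (contact.name) {
  description += '\n\n**Contact:** ' + contact.name +
    (contact.email ? ' (' + contact.email + ')' : '');
}
if (license.name) {
  description += '\n\n**License:** ' +
    (license.url ? '[' + license.name + '](' + license.url + ')' : license.name);
}

// 'text/markdown' is an assumption here; use whatever content type your
// converter and the Postman docs renderer expect.
collection.describe(description, 'text/markdown');
```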

Now let’s make sure our Swagger parameter descriptions are displaying where we want them in our Postman Collection. For that, let’s look at helpers.js and find the spot that translates the params in the URL, body, and headers.

The converter already translates the @Parameter annotation’s keys and values. Let’s add the parameter description too.

Add parameter descriptions
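The converter’s actual helper is more involved, but the essence of the change is simply to stop dropping the Swagger parameter’s description when building the Postman representation. A toy sketch, with an assumed helper name:

```javascript
// Rough sketch (not the converter's actual helper): translate a Swagger
// query parameter into the Postman collection format, keeping its description.
function convertQueryParam(swaggerParam) {
  return {
    key: swaggerParam.name,
    value: swaggerParam.default !== undefined ? String(swaggerParam.default) : '',
    description: swaggerParam.description || '' // previously dropped
  };
}
```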

At this point, we can continue to refine the conversion.

Maybe you disagree that the Swagger @Operation annotation should be reflected as the request description in Postman, and think it would be more useful as the request name instead.

Maybe you also have Swagger examples that you’d like to save as Postman examples. Or maybe you think your Swagger examples should be saved as part of the request description in Postman instead.

This process is subjective and the conversion preferences you settle on will ultimately be determined by how your team uses Postman.

Once you have everything converting in the manner that you like, it’s time to update the Postman collection.

Step #4: Update our collection using the Postman API.

Let’s use the Postman API again, this time sending a PUT request to update the collection we identified in Step #1.

Once you’ve got it working in the Postman app, click the Code link near the blue Send button to generate a code snippet. Select the framework you’re working in, like NodeJS Request, and copy the code to your clipboard.

Generate a code snippet to paste into your script

We can paste this snippet directly into our script. You might also decide to handle your API key as an environment variable, or reorganize the code a bit to suit your existing code.

Here’s an example of what a script might look like, or check out our example code forked from our original converter.

Convert Swagger to Postman format, and then update an existing Postman collection
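A rough sketch of what such a script might look like follows; the ./convert require path and its convert() function are assumptions standing in for whichever forked converter you use, and the Postman API call mirrors the generated snippet:

```javascript
// Rough sketch: convert a Swagger file, then PUT the result back to an
// existing Postman collection.
const fs = require('fs');
const https = require('https');
const converter = require('./convert'); // your forked converter (assumed path)

const swagger = JSON.parse(fs.readFileSync('./swagger.json', 'utf8'));
const collection = converter.convert(swagger); // assumed API of the fork

const payload = JSON.stringify({ collection });

const req = https.request(
  {
    hostname: 'api.getpostman.com',
    path: `/collections/${process.env.COLLECTION_UID}`,
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.POSTMAN_API_KEY,
      'Content-Length': Buffer.byteLength(payload)
    }
  },
  (res) => {
    res.on('data', (chunk) => process.stdout.write(chunk));
  }
);

req.on('error', console.error);
req.write(payload);
req.end();
```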

Trigger the update

At this point, we can use our converter locally by plugging in the required variables and then running our script like node script.js.

If we want to automate this process, there are a number of ways to trigger an update to the Postman collection, such as using IFTTT, Microsoft Flow, or another integrations platform. Of course, you can set up your own private server to listen for webhook requests, or you can use a framework like Serverless to do something similar.

For this example, we’ll set up a GitHub webhook for the repository where our Swagger file resides. This webhook will listen for any commits or deployments on our repo. When the Swagger file is updated, the GitHub webhook will alert AWS Simple Notification System (SNS) which in turn invokes AWS Lambda to update our Postman collection and respective web documentation.

Auto-publish API documentation with Postman

Phew! That sounds like a lot. However, once we complete the setup, all of this will happen automatically anytime the Swagger file is updated.

Note: we will be following along with the first few steps from this Amazon Lambda tutorial. The steps outlined below are a little more recent and up-to-date.

Step #1: Create an SNS topic

Amazon Simple Notification Service (SNS) is a managed pub/sub messaging system. We are going to use Amazon SNS to be the “middleman” between GitHub and Lambda to serve as the trigger for our Lambda function. In other words, GitHub will publish event notifications to the SNS topic that you create, and SNS will then invoke your Lambda function.

From the AWS Simple Notification Service (SNS) console, click “Create topic”.

Create topic

Fill in the name and display name fields with whatever you’d like, then click “Create topic”.

Name and display name

Make a note of your Amazon Resource Name (ARN) for our next step.

Amazon Resource Name (ARN) of the SNS topic

Step #2: Create an IAM User to Publish As

Let’s create a new IAM user to represent the GitHub publishing process. Then we can create a policy ensuring that this user is only able to publish to the topic we created in the previous step.

From the Amazon IAM console, select “Users”, and then click “Add user”.

Add user

Fill in the name for the GitHub publisher user, make sure the “Programmatic access” box is checked so an access key can be used, and click “Next”.

Programmatic access for the user

Select “Attach existing policies directly”, and then click the “Create policy” button that appears.

Attach existing policies directly

On the new Create Policy page, tab over to “JSON”, and edit the sample JSON like below. This is where you will need the topic ARN from Step 1. Click “Review policy”.

Create the policy JSON
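A minimal policy that grants only sns:Publish on your topic looks roughly like this; replace the Resource ARN with the one you noted in Step 1:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:YourTopicName"
    }
  ]
}
```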

Fill in the policy name, and click “Create policy”.

Review the policy

For the new user in the Amazon IAM console, make a note of the key and secret under the “Security credentials” tab. You will need this for the next step.

IAM user security credentials
Access key and secret access key

Step #3: Set up the GitHub webhook

Let’s set up a webhook service so that GitHub actions will publish to your SNS topic. From your GitHub repository, tab over to Settings, and select “Integrations & Settings” in the sidebar. Under the Add Service dropdown, select Amazon SNS.

GitHub service for Amazon SNS

Fill out the required fields with the IAM user credentials from Step 2.

Add IAM user credentials from previous step

Now GitHub actions will publish to your SNS topic. Next let’s do something when SNS is notified.

Step #4: Create a Lambda function

AWS Lambda is a serverless computing platform that allows you to create and run a function without provisioning your own servers. It’s a Function-as-a-Service (FaaS) offering hosted on Amazon Web Services where you pay server fees only for the time that your function runs. Let’s set up a basic Lambda function subscribed to the SNS topic, listening for GitHub event messages.

From the Amazon Lambda console, click on “Create function”.

Create a lambda function

Select Blueprints, filter by sns-message, select the template, and click “Configure”.

Create a lambda function from a blueprint template

Give your function a name, select the lambda-basic-execution-role, select the SNS topic you created in Step 1, and confirm “Create function”.

Create the lambda function

Step #5: Test the setup

From the Amazon Lambda console, click on your new function, and then click Test.

Test the function

In the Configure test event modal, make sure SNS is selected as the event template, provide an event name, and confirm “Create”.

Configure the test event

Back on your function page, ensure your new event name is selected in the dropdown, and once again click Test. You should now see the result returned by your function execution, along with any associated logs.

View results of the lambda test run

From the AWS Simple Notification Service (SNS) console, select your GitHub publication topic. Under the “Other topic actions” dropdown, select “Delivery status”. Complete the wizard to set up CloudWatch Logs delivery confirmations. Press the “Publish to topic” button to send a test message to your topic (and from there to your Lambda function).

Publish a test message to the topic

Logs written from your Lambda applications will be sent to the Amazon CloudWatch console. Under Logs, you can review invocations of your function, confirmation of the delivery, and events.

Review logs in CloudWatch

Step #6: Update the Lambda function

Now let’s swap out our “Hello world” Lambda function with our custom converter code.

You can write your function directly in the AWS console’s “Function code” editor. Alternatively, you can write your function in your preferred integrated development environment (IDE). Since the function is plain JavaScript running on Node.js, we can copy and paste the code directly into the AWS console.

Let’s restructure the code from script.js in such a way so that the function is called from within the Lambda handler.

Create a file called scriptForLambda.js that contains your function definitions, and a new file called index.js that requires scriptForLambda.js and contains the event handler that invokes your custom functions.

You can also create environment variables for Lambda functions to manage your sensitive information like personal API keys, and then access the variables like process.env.postmanAPIKey.
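Here’s a minimal sketch of what index.js could look like; updatePostmanCollection is a hypothetical export of scriptForLambda.js (assumed here to return a Promise) that wraps the convert-and-update logic from the earlier script:

```javascript
// index.js: rough sketch of the Lambda entry point.
// updatePostmanCollection is a hypothetical export of scriptForLambda.js.
const { updatePostmanCollection } = require('./scriptForLambda');

exports.handler = (event, context, callback) => {
  // The SNS notification from the GitHub webhook arrives here; we only use it
  // as a trigger, but logging the message helps with debugging in CloudWatch.
  const message = event.Records && event.Records[0].Sns.Message;
  console.log('Received SNS message:', message);

  updatePostmanCollection(process.env.postmanAPIKey)
    .then((result) => callback(null, result))
    .catch(callback);
};
```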

In this example, we’ll create a Lambda function deployment package, a .zip file consisting of your code and any dependencies. Add a script to your package.json to zip the required project files. Run npm run zip from the command line.

Zip required project files
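One way to define that script in package.json, assuming the file names used above and that the zip utility is installed locally:

```json
{
  "scripts": {
    "zip": "zip -r lambda-deployment.zip index.js scriptForLambda.js node_modules"
  }
}
```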

Upload the resulting zipped file to Lambda.

Upload zip file to Lambda

Once again, hit the Test button at the top to make sure everything is running properly.

🙌 And there you have it! Take a look at the repo for a working example.

A final thought on publishing API documentation

If your API is being used by others, then you’re an API publisher whether you’ve documented it or not. Publishing API documentation can mean sharing documentation with a select group of customers, privately within your team, or with the broader community.

The documentation is a part of your API.

Tools like Postman can help internal teams document their API as they’re building it, as well as help others consume their API once it’s developed and deployed.

So go forth, document, and make the developer experience great again!
