Building a documentation pipeline
On my most recent client project at AND Digital, one of my team’s objectives was to help introduce some best practices around the client’s API documentation.
The client’s existing API documentation was written in long, rather uninspiring Word documents stored in a SharePoint folder. Whenever the client onboarded a new external partner who needed access to their APIs and the accompanying documentation, that partner had to be set up with a SharePoint account and have the documentation folder shared with them. Not really an ideal experience for the partners.
While investigating good examples of API documentation from other companies (SkyScanner, Zoom, Monzo, etc.), I noticed a similar layout popping up again and again…

These sites all use a tool called Slate to generate a static documentation site from Markdown files. The interface is clean, navigation is clear, and out-of-the-box features like syntax-highlighted code examples are included.
Exploring Slate and the tooling around it, I came across Widdershins, a tool that can create quite detailed Slate-compatible Markdown documentation, complete with code examples, from Swagger API spec files. This got me thinking about how we could create a documentation pipeline for our client: one that takes in Swagger specs for the APIs plus some additional Markdown files, and produces a cloud-hosted static documentation site at the other end.
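To give a rough idea of the generation step, here is a minimal sketch of the two commands involved; the file names, paths and language tabs are illustrative rather than taken from the client project:

```bash
# Convert a Swagger/OpenAPI spec into Slate-compatible Markdown with Widdershins
# (api-spec.yaml is a placeholder file name).
widdershins api-spec.yaml -o source/index.html.md --language_tabs 'shell:cURL' 'javascript:JavaScript'

# From a clone of the Slate repository, build the static site into build/.
bundle install
bundle exec middleman build --clean
```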

We proposed this to the client. They loved the clean look of the Slate site and were happy to proceed with the plan for the documentation pipeline, but they had some security concerns about making their API documentation public, so one of the requirements for hosting the Slate site in the cloud was for it to be protected behind Basic (HTTP) Auth.

One option for hosting, suggested to me by one of our tech coaches at AND Digital, was to use a tool called Surge, which makes it super easy to publish content to the web. Basic hosting is free, but our requirement for HTTP authentication would have meant signing up for a $30/month plan. Not bad, but I wondered what other options we had.
My first thought when I knew I needed to host a static site was to use an AWS S3 bucket. Speaking to the other AND Digital teams working with the same client, I learnt that we had recently set up an AWS account for the client’s new development projects. Hosting their documentation site there made sense, but how could we implement Basic Auth on the S3 bucket?
After a bit of Googling and digging around a few articles on the web, I’d found some interesting suggestions and come up with a plan. I could place the files for the static site in a private S3 bucket, set up a CloudFront endpoint pointing at the bucket, and have this trigger an AWS Lambda function on requests, where I could implement Basic Auth for access like so:
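A minimal sketch of such a function, written here in TypeScript and assuming it is deployed as a Lambda@Edge handler attached to the CloudFront distribution’s viewer request event; the credential values are placeholders, not real configuration:

```typescript
// Lambda@Edge viewer-request handler that enforces HTTP Basic Auth
// before CloudFront fetches anything from the private S3 bucket.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Placeholder credentials; in practice these would be injected at deploy time.
  const user = 'docs-user';
  const pass = 'docs-pass';
  const expected = 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64');

  const provided = headers.authorization && headers.authorization[0].value;

  if (provided !== expected) {
    // No (or wrong) credentials: tell the browser to prompt for them.
    return {
      status: '401',
      statusDescription: 'Unauthorized',
      headers: {
        'www-authenticate': [
          { key: 'WWW-Authenticate', value: 'Basic realm="API documentation"' },
        ],
      },
    };
  }

  // Credentials match: let the request through to the S3 origin.
  return request;
};
```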
The infrastructure in AWS is essentially connected up as follows: a partner’s request hits the CloudFront distribution, the Lambda function checks the Basic Auth credentials on that request, and only authenticated requests are served the static site files from the private S3 bucket.

With this new setup, the client’s documentation is held in the form of Swagger specs and Markdown files, version-controlled in a GitLab repository. A simple CI pipeline with build and deploy stages generates the static files for the documentation site, then uses an IAM user (with suitable permissions) to push them to the private S3 bucket and invalidate the CloudFront cache so that changes can be seen immediately.
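For illustration, here is a simplified sketch of what a `.gitlab-ci.yml` for this kind of pipeline can look like; the job names, images, file names and variables are placeholders, not the client’s actual configuration:

```yaml
stages:
  - build
  - deploy

build_docs:
  stage: build
  image: node:lts                 # Widdershins runs on Node; the Slate build itself needs Ruby
  script:
    - npx widdershins api-spec.yaml -o source/index.html.md
    # ... Slate/Middleman build producing the static site in build/ ...
  artifacts:
    paths:
      - build/

deploy_docs:
  stage: deploy
  image: python:3-slim
  script:
    - pip install awscli
    # The IAM user's credentials come from protected CI variables (AWS_ACCESS_KEY_ID, etc.)
    - aws s3 sync build/ "s3://$DOCS_BUCKET" --delete
    - aws cloudfront create-invalidation --distribution-id "$CLOUDFRONT_DISTRIBUTION_ID" --paths "/*"
  only:
    - master
```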

The documentation pipeline works a treat: it can publish a new version of the static site within about three minutes of a pull request being merged to master, everything is secured behind Basic Auth, and the client loves the look of their new documentation, which we’re now continuing to improve and build out.
This project offered me the opportunity to build my first GitLab CI pipeline and to use AWS Lambda functions for the first time. Both were simple to pick up, with lots of great documentation out there on the web. Hopefully, not too long from now, my client’s partners will be able to say the same thing about their API docs!