The problem
I write a lot of code to test my API endpoints, especially because I am interested in the impact of everything that runs during the lifecycle of a request: how the route is interpreted, what middleware does to the request, how the business logic behaves, and what kind of response comes back. Not to mention that sometimes I care about the impact of request headers, and the contents of response headers.
Documentation also quickly gets out of sync as the APIs evolve.
Meet Silk
Silk (silktest.org) lets you write your API documentation in Markdown, and then execute it as an automated test. If your API changes and your documentation is out-of-sync, the tests will fail.
We are looking for people to play around with the BETA of Silk, and let us know what you think. Pull Requests and feature ideas are very welcome, and are especially important at such an early phase of a project. Please get involved.
Tutorial
To explain how Silk works, we will write a simple Markdown document describing a Hello World API, and then execute it with the `silk` tool.
This is the endpoint we are going to be testing: http://outlearn-hello.appspot.com/hello?name=YOUR_NAME_HERE.
The API we’re going to be testing was built as part of an Outlearn learning path about Google App Engine, which is worth checking out if you’re interested in using Google App Engine with Go. If you do, you can run your Silk tests against your own app, rather than the one provided.
First, head over to the project homepage and download the latest release of Silk for your platform.
Gophers can just: go install github.com/matryer/silk
Create a new file called `hello-api.silk.md` and populate it with the following:
# Hello API

## `GET /hello`

Gets a personalised greeting.

* `?name=Mat` // The name of the person to greet

===

* Status: `404`

```
Hello there
```
This simple file follows the Silk Markdown pattern: it describes a typical endpoint test in a human-readable way, while also being in a form that the Silk parser will understand.
Top-level headings are descriptions, but second-level headings (prefixed with ##) describe an HTTP request that will be made; in our case, a GET request to `/hello`. We’ll specify the scheme and host when we run the tests, which is extremely powerful because it lets us run our Silk tests against different endpoints (like localhost, test or live). The bulleted items (prefixed with *) let us specify URL parameters and, underneath the === separator, assertions about the response. Finally, the code within the ``` backticks describes the expected body.
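Put another way, a Silk section follows this general shape (the names and values below are illustrative placeholders, not a real endpoint):

# Group description

## `METHOD /path`

A human-readable description of the endpoint.

* `?param=value` // a URL parameter to send

===

* Status: `200`

```
the expected response body
```

Everything above the === separator describes the request; everything below it is an assertion about the response.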
We deliberately got a few things wrong in our Silk document so we can see what failures look like.
Run the file using the `silk` command in a terminal, passing in the live endpoint URL via the `-silk.url` flag, and the name of the file to run as the first argument:
$ silk -silk.url="http://outlearn-hello.appspot.com" hello-api.silk.md
running 1 file(s)
body expected:
```
Hello there
```
actual:
```
Hello Mat.
```
--- FAIL: GET /hello
  hello-api.silk.md:14 - body doesn't match
--- FAIL: silk (0.31s)
FAIL
So it seems like we’re expecting “Hello there”, but actually the API is returning “Hello Mat.” Let’s update our documentation to match the API:
# Hello API

## `GET /hello`

Gets a personalised greeting.

* `?name=Mat` // The name of the person to greet

===

* Status: `404`

```
Hello Mat.
```
The body between the ``` backticks has been updated at the bottom.
Re-run the tests and notice that we get a different failure:
$ silk -silk.url="http://outlearn-hello.appspot.com" hello-api.silk.md
running 1 file(s)
Status expected: 404  actual: 200
--- FAIL: GET /hello
  hello-api.silk.md:11 - Status doesn't match
--- FAIL: silk (1.24s)
FAIL
In our test, we assert that we should get a 404 (Not Found) status, but we are actually getting a 200 OK. Let’s fix that, and add an additional test:
# Hello API

## `GET /hello`

Gets a personalised greeting.

* `?name=Mat` // The name of the person to greet

===

* Status: `200`
* Content-Type: `text/html; charset=utf-8`

```
Hello Mat.
```
Here we have changed the status to 200, and added a check on the `Content-Type` response header.
Run our tests one final time to see them pass:
$ silk -silk.url="http://outlearn-hello.appspot.com" hello-api.silk.md
running 1 file(s)
PASS
Conclusion
Silk lets you write human readable documentation, and run it as part of your automated test suite.
You can:
- Run an entire suite of Silk test files against any endpoint, just by specifying a different `-silk.url` flag
- Use regular expressions to make assertions about the response
- Easily make assertions about JSON data
- Run tests programmatically from Go code by importing the `github.com/matryer/silk/runner` package
- Browse the tests on GitHub, where they render nicely
- Use existing Markdown-to-PDF or HTML tools to generate deliverable docs for your customers
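To give a sense of the programmatic option: a sketch of a test that runs Silk files as part of a normal `go test` run might look something like this. The exact `runner` API may differ from what is shown here (check the package documentation), and `helloHandler` is a placeholder for your own `http.Handler` function:

```go
package api_test

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"path/filepath"
	"testing"

	"github.com/matryer/silk/runner"
)

// helloHandler is a stand-in for your real API handler.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello %s.", r.URL.Query().Get("name"))
}

func TestAPIEndpoints(t *testing.T) {
	// Start the API under test on a throwaway local server.
	s := httptest.NewServer(http.HandlerFunc(helloHandler))
	defer s.Close()

	// Run every Silk file in the testfiles directory against it.
	runner.New(t, s.URL).RunGlob(filepath.Glob("testfiles/*.silk.md"))
}
```

Because the server URL is passed in at runtime, the same Silk files work unchanged here and on the command line with `-silk.url`.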
Let us know what you think by tweeting me.