Building a Faster MVP with GitHub Checks

Adam Hawkins
Jun 6, 2018 · 5 min read

I was already well on my way building the TeamCI MVP when GitHub announced the Checks API. The current MVP was functional enough for me. Regardless, I saw immediate upside in replacing my hand-rolled UI with a Checks API integration. That decision let me offload heaps of responsibility so I could focus solely on functionality instead of presentation. This post covers my rationale, experience, and beta feedback.

About TeamCI

TeamCI keeps shared configuration for tools like rubocop, shellcheck, or eslint and applies it globally to all organization repos. Teams can also write their own tests. The interface is straightforward: view a report for each test (rubocop, shellcheck, etc.) and its result.


The MVP’s goal is reaching private beta as quickly as possible. Creating UIs is the biggest time sink in this effort. I prefer to minimize that or avoid it completely.

Using the Checks API gives me the following, without heavy lifting on my end:

  • Report and outcome UI
  • Line level annotations (which didn’t exist before)
  • Retry support
  • Read access to current and future organization repositories (this is a GitHub App feature) so no “flipping on” required
  • A GitHub App install button, which means I don’t need to build a landing page, authorization flow, and all that stuff.

The biggest win for my MVP is that everything is API driven, which means it’s more easily testable and maintainable for me. It’s also beneficial for users, since more integrations adopt a standard UI.

My Experience

The Checks API is still in beta at the time of this writing. I didn’t expect perfection, just good enough.

The API is well thought out. Specifically, the 1-to-N mapping of check suites to individual check runs. Many CI systems are 1–1, but GitHub’s model allows more flexibility. This maps directly to TeamCI’s internal model, so there’s no impedance mismatch. Triggering a test suite triggers 5 check runs at this point in time, but that number is dynamic. The Checks API handles this perfectly.
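The fan-out described above can be sketched as follows. The tool list and payload shape are illustrative (the helper function is my own, not part of any SDK); the real integration would POST each payload to GitHub’s create-check-run endpoint.

```python
# Sketch: one TeamCI test suite fans out into N check runs, one per tool.
# Field names (name, head_sha, status) match the Checks API; the helper
# itself is illustrative.

def check_run_payloads(head_sha, tools):
    """Build one check run payload per tool in the suite."""
    return [
        {
            "name": tool,          # shown as the check's name in the PR UI
            "head_sha": head_sha,  # commit the suite was triggered for
            "status": "queued",    # runs start queued, then move to in_progress
        }
        for tool in tools
    ]

payloads = check_run_payloads("abc123", ["rubocop", "shellcheck", "eslint"])
```

Because the payload list is built per suite, the number of check runs can change from one trigger to the next without any schema changes.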

Check Run lifecycle from the official docs

Checks themselves are a state machine. GitHub’s model captures the async test flow and displays it appropriately in the UI.
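A minimal sketch of that state machine: a run moves queued → in_progress → completed, and only a completed run carries a conclusion (success, failure, etc.). The transitions mirror the lifecycle diagram; the enforcement code is my own illustration, not GitHub’s.

```python
# Check Run lifecycle as a tiny state machine. "completed" is terminal;
# a conclusion only makes sense once a run reaches it.

TRANSITIONS = {
    "queued": {"in_progress", "completed"},
    "in_progress": {"completed"},
    "completed": set(),  # terminal state
}

def advance(status, new_status):
    """Move a check run to new_status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status
```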

All in all, the API behaves as it says on the tin. Here are my notes on areas to improve.

Integration Notes

The API documents started_at as optional. This is not the case: it’s required depending on status. I got a 4xx on my first try. I suspect other conditional properties are incorrectly documented as well.
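To illustrate the gotcha: a check run with status in_progress is rejected without started_at, despite the docs. The field names match the Checks API; the helper and timestamp format choice are mine (ISO 8601 UTC, which is what the API accepts).

```python
from datetime import datetime, timezone

# started_at is documented as optional, but a check run created with
# status "in_progress" needs it; omitting it produced a 4xx for me.

def in_progress_payload(name, head_sha):
    return {
        "name": name,
        "head_sha": head_sha,
        "status": "in_progress",
        # ISO 8601 UTC timestamp, e.g. "2018-06-06T12:00:00Z"
        "started_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
```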

Annotations require blob_href. I don’t see why. The API already knows the file (filename is required) and the commit associated with the check run, so it could generate this value itself. That assumes blob_href points to GitHub. If it does, why allow a user-provided value? Is it possible to provide a different commit than the one on the check run? If so, that would be confusing. Is it possible to provide a blob_href that’s not on GitHub? The rationale is not documented, so I’m left scratching my head.

Confusing display of past & current status/conclusion

Retrying a job displays the previous conclusion and the current status. I find this confusing. If a job/check/test is retried, the previous output and information should be wiped from the current UI. Instead, the Checks UI shows the previous conclusion and output until the next check run concludes. My integration clears out output.text, output.title, and friends to work around this, but the UI keeps the previous values. It seems there’s undocumented server-side logic in play.

The output report (output.text in the API) is lacking. The API documentation says the parameter may contain Markdown. I fenced my output with bash (because every check is just a shell program) and hoped for the best. The best turned out to be a monospaced font, so I dropped the bash annotation and stuck with a vanilla fenced code block. The Checks UI does not handle color escape sequences, so I strip them out before calling the API. The same goes for \r. I’m not sure what the expected behavior is here. My output contained \r and \r\n, which left copious new lines in the UI, so I stripped those out as well. There’s no “live tailing” either. I gather this is because it’s a generic text block and not explicitly an “output log”. All this is fine enough for me. Hopefully this feature improves in the future.
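The cleanup step above looks roughly like this. The regex covers CSI color/formatting sequences, which is all my tools emit; it’s a sketch, not a complete ANSI parser.

```python
import re

# The Checks UI renders output.text as Markdown but doesn't interpret
# ANSI color escapes or carriage returns, so strip both before calling
# the API.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def clean_output(raw):
    text = ANSI_ESCAPE.sub("", raw)    # drop color codes
    text = text.replace("\r\n", "\n")  # normalize CRLF to a single newline
    text = text.replace("\r", "")      # drop bare carriage returns
    return text
```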


output.annotations does not support Markdown. I want to put a link somewhere in the annotation. rubocop, shellcheck, and other tools have URLs associated with style violations. I have to settle for dumping the entire URL into message. This is sub-par for users, since they must copy and paste instead of clicking. Introducing a new field into output.annotations would solve this problem for me.
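The workaround reads like this. Field names follow the annotation object as I used it during the beta (filename, blob_href, start_line, end_line, warning_level, message); the helper itself and its arguments are illustrative.

```python
# Since annotations don't render Markdown, the style-guide URL gets
# pasted straight into message after the violation text.

def annotation(filename, line, blob_href, rule_msg, rule_url):
    return {
        "filename": filename,
        "blob_href": blob_href,       # required, per the notes above
        "start_line": line,
        "end_line": line,
        "warning_level": "warning",
        # No link rendering, so users must copy and paste this URL.
        "message": f"{rule_msg} ({rule_url})",
    }
```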

The Future

TeamCI fully integrated with GitHub Checks. Everything I need!

Building TeamCI against the GitHub Checks API offloaded so much responsibility from my end. In fact, I haven’t written a single line of UI code to build the MVP. I guessed the Checks API would minimize work in this area, but it turned out to eliminate it completely.

I’m completely satisfied with my decision. I can’t think of any blockers. I highly recommend this route if you’re building a new CI type product.

P.S. TeamCI is coming to private beta soon! Join the beta to keep up-to-date.

Work in Progress
