Encouraging Development Best Practices with Gamification

Ben Limmer
May 23, 2018 · 7 min read

Over the last twelve months, the Technology department at Ibotta grew by over 100 people. Along with this growth, we’ve started breaking down our monolithic Ruby on Rails application into a series of microservices, which caused a rapid increase in the number of GitHub projects, languages and frameworks our squads use.

To keep our technologists (engineers, data scientists and analysts) focused on business logic, we created two cross-functional squads focused on introducing conventions, reducing boilerplate and making common use cases work out of the box with little configuration.

These squads started making great progress on “Paving the Road to Production” by introducing:

  • Yeoman Generators for building new apps in our common languages and frameworks
  • Pipelines to provision, build and deploy services to our AWS accounts and Kubernetes clusters
  • Tools to encrypt and manage secrets (check out sopstool)
  • Standardized libraries for logging, metrics and configuration

However, we had a difficult time documenting and communicating the available tools and best practices as they evolved. We tried internal blog posts, Slack announcements, Confluence wiki documentation and in-person training, but none of them provided a lasting benefit.

We needed a simple way to let technologists know about opportunities to improve their services based on our standard best-practices.

gamification | noun | gam·i·fi·ca·tion

the process of adding games or gamelike elements to something (such as a task) so as to encourage participation

To encourage adoption of best practices and common toolsets, we gamified the experience. We introduced an Ibotta Scorecard that summarizes how aligned each service is with our best practices.

Services leveraging all of the best-practices and tools get a badge that looks like this:

Nice work — you’re aligned with the best-practices!

And services with room for improvement get a badge that looks like this:

It’s possible this service isn’t being actively worked on. This is a great candidate for some maintenance work.

The ability to see a simple letter-grade for all of our services is extremely powerful and allows us to wrap up a plethora of information about a service’s health in one simple badge. Technologists can then dive into a Confluence page that explains the score and learn how to change their project to receive a higher score.

This service is doing a great job, but needs a CODEOWNERS file in their GitHub repository so that Pull Requests can be reviewed by Subject Matter Experts.
The technologist can even get more detailed information about the check, like why it’s important. This codification of importance helps to document and ingrain the Ibotta Team values.

Better yet, we can assign varying points to each check to weight the most important signals for a service’s health. For instance, a failing security check reduces a service’s score much more significantly than a code-style check.
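As an illustration of this weighting idea, here is a minimal sketch of how per-check points could roll up into a letter grade. The thresholds, field names and point values are assumptions for the example, not Ibotta's actual scoring rules.

```javascript
// Each check contributes a Score-like object; heavier checks (e.g. security)
// simply carry more possible points than lighter ones (e.g. code style).
function letterGrade(scores) {
  const earned = scores.reduce((sum, s) => sum + s.pointsEarned, 0);
  const possible = scores.reduce((sum, s) => sum + s.pointsPossible, 0);
  const pct = (earned / possible) * 100;

  if (pct >= 97) return 'A+';
  if (pct >= 90) return 'A';
  if (pct >= 80) return 'B';
  if (pct >= 70) return 'C';
  if (pct >= 60) return 'D';
  return 'F';
}

// A failing 50-point security check drags the grade down far more
// than a failing 5-point style check would.
const grade = letterGrade([
  { check: 'security', pointsEarned: 0, pointsPossible: 50 },
  { check: 'style', pointsEarned: 5, pointsPossible: 5 },
]);
```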

How We Built It

When considering how to build this tool, we wanted to make sure it was:

  • Scalable as we add many more services
  • Extensible so other Subject Matter Experts can codify best practices in the tool

The Pipeline

The Kubernetes-native, event-driven scripting tool Brigade is a great fit for the scalability requirement of this project. It runs our scoring logic on our Kubernetes cluster and fans out jobs to score each repository, so it scales well as we add more services.

One of Brigade’s many advantages is that it defines the pipeline using a standard Node 8 script. For the scorecard we have three high-level tasks:

const repos = await identifyMicroserviceRepos();
const scores = await scoreMicroserviceRepos(repos);
await publishScorecardsToConfluence(scores);

If you’re not familiar with Node, the process is:

  1. Identify all the projects for scoring; then
  2. Score all of those identified projects; then
  3. Upload that info to Confluence
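Stubbed out, the whole script can be sketched end-to-end like this. The stub bodies below are illustrative assumptions standing in for the real GitHub, Brigade and Confluence calls; the actual steps are discussed in the sections that follow.

```javascript
async function identifyMicroserviceRepos() {
  // Real version: GitHub code search for SERVICE_IDENTIFIER files
  return ['org/service-a', 'org/service-b'];
}

async function scoreMicroserviceRepos(repos) {
  // Real version: fan out one Brigade job per repository
  return repos.map((repo) => ({ repo, grade: 'A' }));
}

async function publishScorecardsToConfluence(scorecards) {
  // Real version: write one Confluence page per scorecard
  return scorecards.length;
}

async function run() {
  const repos = await identifyMicroserviceRepos();
  const scorecards = await scoreMicroserviceRepos(repos);
  return publishScorecardsToConfluence(scorecards);
}
```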

Here’s an overview diagram of how the pipeline works. We’ll discuss each of the steps in detail in the sections below.

A visual representation of our Brigade Pipeline

Triggering the Pipeline

We utilize a Kubernetes CronJob to trigger our pipeline at a set interval. There's not much to it: the CronJob simply triggers the pipeline via an internally exposed Brigade Gateway.
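As a rough sketch, the trigger can be as small as a CronJob that POSTs to the gateway. The schedule, image and gateway URL below are illustrative assumptions, not our actual configuration.

```yaml
# Illustrative CronJob manifest (batch/v1beta1 was current for this era of Kubernetes)
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scorecard-trigger
spec:
  schedule: "0 6 * * *"  # once a day at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: trigger
              image: curlimages/curl
              args: ["-X", "POST", "http://brigade-gateway.internal/trigger"]
          restartPolicy: Never
```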

Identifying Projects

To identify projects to be scored, we introduced a simple plain-text file in each project’s git repository that we identify using GitHub’s Code Search API.

// Each project to be scored has the file at the root of its repo
my_project
└── .ibotta
    └── SERVICE_IDENTIFIER  <- a simple text file

// And we identify it using a scoped code search
// (this example uses the @octokit/rest package for Node)
const projects = await githubClient.search.code({
  q: 'org:Ibotta path:.ibotta filename:SERVICE_IDENTIFIER'
});

We create the SERVICE_IDENTIFIER file automatically when you generate a service with our Yeoman Generators, so new services get scored out of the box.

Project Scoring

Once we know which projects to score, we utilize a Brigade Group to fan-out and score each of our projects in parallel. This means that each identified repository gets its own pod that runs checks on that single repository.

After the projects are identified, a pod is created for each project to be scored in parallel
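Brigade's Group handles the actual pod-per-repository fan-out for us; conceptually, the shape of that step is a concurrent map over the identified repositories, as in this plain-Node sketch (the `scoreRepo` callback is an assumed stand-in for a repository's full set of checks).

```javascript
// Score every repository concurrently and collect the results,
// mirroring the fan-out/aggregate shape of the Brigade Group.
async function scoreAll(repos, scoreRepo) {
  return Promise.all(repos.map((repo) => scoreRepo(repo)));
}
```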

Within each pod, a set of checks runs against the project. Some checks apply to all projects, such as enabling Travis CI and CodeClimate and protecting the default branch from force-pushes. Other checks are specific to a given language or framework, such as enabling nsp security checks for Node projects or Brakeman for Ruby.

Each of these checks returns a standard Score object that has metadata about the check, how many points were possible and how many points the project earned during scoring. These common Score objects are then rendered (using ejs) into a Confluence page containing a badge and all of the information the technologist needs. The pages are written to plain-text files in Brigade Shared Storage to be uploaded to Confluence after all projects are scored.
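To make the interface concrete, here is a hypothetical check that produces a Score object. The field names and point values are illustrative assumptions, not Ibotta's actual definitions.

```javascript
// A check receives some view of the repository (here, its file list)
// and returns a Score object describing the outcome.
function checkCodeowners(repoFiles) {
  const pointsPossible = 10;
  const passed = repoFiles.includes('CODEOWNERS');
  return {
    check: 'codeowners-file',
    why: 'Pull Requests should be reviewed by Subject Matter Experts',
    pointsPossible,
    pointsEarned: passed ? pointsPossible : 0,
  };
}
```

Because every check speaks this same shape, the aggregation and rendering code never needs to know what any individual check actually does.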

The benefit of using this standardized Score interface is that we have an extensible platform for Subject Matter Experts (SMEs) to introduce new scoring logic. We utilize an in-repo Yeoman Generator to assist SMEs in creating new Scoring Methods.

An example session with our Yeoman Generator for Scoring Methods. Note that a source and test file are created. Lots of inline help is generated in the source and test files so SMEs can write their business logic.

Uploading Scores to Confluence

The last step is to utilize the Confluence API to write pages for each of the generated files in Brigade Shared Storage. Since everything is already formatted correctly, it's as simple as iterating over the files and publishing them to our Confluence wiki.
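Sketched out, the upload step looks something like the following. The `confluence.createPage` client method and the file shape are assumptions for the example; the real implementation calls Confluence's REST API.

```javascript
// Publish one Confluence page per pre-rendered scorecard file.
async function publishScorecards(files, confluence) {
  for (const file of files) {
    // Each file pulled from Brigade Shared Storage is already
    // rendered Confluence markup, so no transformation is needed here.
    await confluence.createPage({ title: file.title, body: file.body });
  }
  return files.length;
}
```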

It’s easy to see that one of our services needs some attention but, on the whole, we’re doing a good job!


Thus far the scorecard project has proved to be a fun, effective way to surface information about the health of our services at Ibotta as we grow. It rolls up a wealth of information about each project and helps us identify opportunities for maintenance work after services are in production.

In the month after launching this tool, we’ve seen technologists respond in a really positive way to the scorecard badges on their repositories.

People get excited to see their services score an A+
The badges drive healthy competition

We will continue leveraging the scorecard platform in the future to document and encourage best-practices among our developers. We’re already seeing Pull Requests from other squads adding scoring criteria and the best practices for their areas of focus, which is exactly what we were hoping for.

Perhaps most excitingly, the majority of our services have grades in the “A” letter-grade range! This means that we’re effectively driving the importance of opting into the tools and best-practices of the Ibotta Technology organization with our gamified scorecard.

Additional Resources

If this idea seems interesting for your organization, here are a few resources to take a look at.


Brigade
Brigade has proven to be an extremely powerful tool for us. I want to thank Matt Butcher and all the other contributors to Brigade for being so willing to collaborate on the project. Drop by on the Kubernetes Slack in the #brigade room to chat with these great folks.

Additionally, check out their marketing page and their docs for more information on Brigade.

Internal Tools / Road Paving Teams

Ibotta’s Matt Reynolds has a great talk called “Paving the Road to Production: Why You Need an Internal Tools Team” that goes into more detail on the mission of our Internal Tools squads.


If these kinds of projects and challenges sound interesting to you, Ibotta is hiring! Check out our jobs page for more information.

Building Ibotta

Thoughts and experiences from Ibotta's engineering, analytics and product teams
