Improving GetKanban Scoreboards for Deeper Learning from the Game

Alexei Zheglov
May 10

My colleague Dave White and I have won some compliments for this non-boring, creative scoreboard we compiled while playing GetKanban in one of our training workshops.

GetKanban is a popular simulation game, invented by Russell Healy almost ten years ago. It has since become a standard tool in Kanban training workshops, such as the certified Kanban System Design (KMP I). Many people around the world play this game at local Kanban community meetups and use it as a learning tool in their companies.

We didn’t make this scoreboard only for good looks, of course. Each visual element serves a learning purpose. Let’s go through the board and look at each element in turn; this may be useful to future players.

First, Companies. The word in the top-left corner. It is preferable not to refer to the groups playing the game as “teams”, particularly in North American corporate contexts. The four groups here are simulating four companies. Their goal is to gradually discover an operational solution that aligns with a given strategic objective. People who describe themselves as “team members” in North American corporate English often lack operational authority. If you find yourself playing this game during a paid Kanban workshop, ask if the right people are in the room.

Second, the four columns. Dave and I had 26 students in this workshop. We divided them into four groups of six or seven people each. Four groups means two co-trainers should be running the workshop. If there is only one trainer, increasing the workload from three games to four increases utilization by a third, which may lead to a non-linear worsening of responsiveness to the various situations, problems, and questions that inevitably arise during the game. If you do Kanban training professionally and have landed such a large gig, get a co-trainer or split the training group. If you’re a buyer of Kanban training, expect and demand standards and quality from your trainers.
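That non-linear effect is easy to see with a bit of queueing arithmetic. Below is a minimal sketch, assuming the lone trainer behaves roughly like a single-server (M/M/1) queue; the utilization figures are illustrative, not measured in a real workshop.

```python
# A minimal sketch of why going from three games to four can hurt so much.
# Assumption (not from the game): the lone trainer behaves roughly like a
# single-server M/M/1 queue, where the average wait grows as rho / (1 - rho).

def avg_wait_factor(utilization: float) -> float:
    """Average time a question waits for the trainer, in units of the
    average time it takes the trainer to handle one question."""
    return utilization / (1.0 - utilization)

# Illustrative only: suppose three games keep one trainer about 60% utilized.
three_games = 0.60
four_games = three_games * 4 / 3   # workload up by a third -> 80%

print(f"3 games: utilization {three_games:.0%}, wait factor {avg_wait_factor(three_games):.1f}")
print(f"4 games: utilization {four_games:.0%}, wait factor {avg_wait_factor(four_games):.1f}")
# 3 games: wait factor 1.5; 4 games: wait factor 4.0 — utilization up by a
# third, responsiveness roughly 2.7 times worse.
```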

Third, the four goals. Each group gets a different goal to achieve during the game. The message is: there is no abstract continuous improvement. Improved capability has to align with the strategy. The players have to figure out the optimal parameters and policies necessary to achieve their goal. Some players will make choices that don’t make sense for other players, because they have different goals. The solution optimizing for the most money and the one optimizing for the fastest time-to-market are somewhat different. They’re both quite a bit different from the solution optimizing for the most predictable time-to-market. Another possible goal, maximizing throughput (measured in work items delivered), is somewhat artificial and can lead to dysfunction. The players can realize this at some point during the game and change their goal.

The default goal is to maximize the money — the profit of the simulated company. With two concurrent groups, we give them the goals of the most money and the fastest time-to-market. With three groups, we add the most predictable time-to-market; with four, the most throughput.

Fourth, the run chart sketches in the “goal” row. The control limits (red lines), drawn only on the right-hand side of each chart, hint that the system doesn’t meet its goal at the start of the game. It should achieve the goal eventually, before the end of the game.

Fifth, there is no score entry for Day 9, when the simulation starts. We bootstrap the GetKanban game by playing Day 9 with all groups together at one table, illustrating the rules immediately with actual moves on the game board. We prefer not to use a presentation to explain the rules: that takes time, and the players end up making more mistakes with the game mechanics, asking more questions about them, and taking longer to complete the game and learn from it. At the end of Day 9, we call a short coffee break, during which the other groups copy the position onto their own game boards. Thus, all groups finish Day 9 with the same amount in the bank (usually $200) and there is no need to waste scoreboard space on it. This is why the first scores appear on the scoreboard at the end of the next reporting period, Day 12.

Sixth, Day 15 is an important synchronization point. Groups play the game at varying paces. If not controlled, this variation can really accumulate by the end of the game. That is unsatisfactory, as a debrief and additional learning material are coming up, and the group finishing the game first has to wait unproductively. To avoid this problem, we target finishing Day 15 by lunchtime. This causes faster groups to slow down a bit and slower groups to speed up a bit. People go to lunch a few minutes earlier or later, but they all come back from lunch synchronized. The variation accumulating from Day 15 to Day 21 (the conclusion of the game) is insignificant, and all groups finish the game at more or less the same time. I can appreciate how Russell Healy, the creator of the GetKanban game, made many incremental improvements over the years to reduce this unproductive variation in playing times.

Seventh, we make one important rule change at the start of the game. The initial replenishment cycle is three days, not daily. The first replenishment occurs at the beginning of Day 10, followed by Days 13, 16, and so on. (Jumping a bit ahead: one of the characters in the game, Margaret, the chief of marketing, eventually realizes this simulated company is in a pretty dynamic business and implements changes in her department that enable daily replenishment starting on Day 16.)

This rule change creates an interesting challenge: finding the optimal launch date for the fixed-date item F2. Day 10 is too early (high opportunity cost), Day 13 is optimal, and Day 16 carries some schedule risk. It also shows how more frequent replenishment creates more scheduling options and leads to greater agility.
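One way to see the shape of that trade-off is a back-of-the-envelope calculation like the sketch below. All the numbers in it (lead-time distribution, due date, costs) are hypothetical, not the actual F2 card values; it only illustrates weighing the opportunity cost of starting early against the schedule risk of starting late.

```python
# Hypothetical illustration of the F2 trade-off. None of these numbers come
# from the actual GetKanban card; they only show the shape of the reasoning.

lead_time_days = {4: 0.2, 5: 0.3, 6: 0.3, 7: 0.15, 8: 0.05}  # assumed distribution
due_day = 20                    # assumed fixed delivery date
late_penalty = 150              # assumed cost of missing the date
opportunity_cost_per_day = 20   # assumed value lost per day F2 occupies the system unnecessarily early

for start_day in (10, 13, 16):
    # Probability the item finishes after the due date if started on start_day.
    p_late = sum(p for lt, p in lead_time_days.items() if start_day + lt > due_day)
    # Days of "extra" earliness beyond what even the worst-case lead time needs.
    slack = due_day - start_day - max(lead_time_days)
    expected_cost = max(slack, 0) * opportunity_cost_per_day + p_late * late_penalty
    print(f"start Day {start_day}: P(late)={p_late:.2f}, expected cost ~{expected_cost:.0f}")
```

With these made-up inputs, Day 10 pays a pure opportunity cost, Day 16 carries a large chance of missing the date, and Day 13 comes out cheapest, which mirrors the reasoning in the paragraph above.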

This rule change also sends important messages. Replenishing a Kanban system isn’t something that happens only within teams during their morning stand-ups (“I feel like starting on this work item today”). It’s an act of commitment: it adds to the inventory of business commitments, and it requires the presence of decision-makers of sufficient seniority, who take responsibility before customers for delivering services, products and projects. The infrequent replenishment at the start of the game simulates a situation common to many companies: such decision-makers don’t meet for this conversation very often.

Eighth, the halftime reflection after Day 15. Day 15 is the natural midpoint of the game. Most playing groups find themselves not on track to meet their goals. I make rough sketches like those shown in the Day 15 row of the scoreboard to highlight some of the gaps, and we review these gaps first thing after the break.

For example, the group playing for the fastest time-to-market is actually slowing down: they started with 6–8 days of lead time and it’s now 6–10. The group playing for the most predictable lead time became less predictable: their spread is now 2 to 10 days. The group playing for the most money is in third place out of four in the money race, but more importantly, their lead time (8 days on average) is too slow to gain market share. The simulated market feedback (thank you, Russell, for tweaking the data in version 5 of the game) makes it apparent that the fitness threshold is 3–4 days.

Such gaps aren’t the fault of the players; they are a natural state. Players get a handle on the game mechanics by Day 11 or 12 (Russell’s refined language on the event cards helps). They think through and make lots of tactical decisions up to Day 15, for example: going around Carlos’ bottleneck, dealing with the dependency on Peter’s shared service, moving dice around (temporarily speeding up some activities at the expense of others), and re-prioritizing work items in queues. But they don’t yet think about strategy. They are, however, getting ready for it. At the start of Day 16, we give them an important challenge: Margaret is changing the replenishment cycle, so how much buffer do you need now? And if they don’t figure it out on Day 16, can the Day 17 replenishment close the feedback loop and lead them to it?

The midpoint reflection is also an opportunity to introduce a Kanban term: service delivery review (SDR). It’s a time and place where we can assess the performance of our service (product pipeline, project) quantitatively and compare it with customer expectations. We don’t elaborate on how to do SDRs, as we don’t want to sidetrack the learning goals of the game; we simply plant the seeds of future learning.

Ninth, the Day 21 lead time histograms are made from filtered data. The players arrive at their redesigned system only close to Day 21, so data from earlier days may not be representative of the current system’s performance. I discarded such old data and made the histograms from the rest. Occasionally, I notice players cherry-picking Day 21 deliveries for shorter lead times. In such cases, I project some deliveries for Days 22+ and add their lead times to the data set to make it more realistic.
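For anyone who wants to reproduce that filtering step with their own game data, here is a minimal sketch; the deliveries and the cutoff day are made up, and matplotlib is assumed only for drawing the histogram.

```python
import matplotlib.pyplot as plt

# Made-up deliveries: (day delivered, lead time in days). Only the shape of
# the filtering matters; these are not actual game results.
deliveries = [(12, 9), (13, 8), (15, 7), (17, 5), (18, 4), (19, 3),
              (20, 4), (21, 3), (21, 4), (21, 5)]

# Keep only deliveries produced by the redesigned system (assumed cutoff: Day 17+),
# since earlier data doesn't represent the system the players ended up with.
cutoff_day = 17
recent_lead_times = [lt for day, lt in deliveries if day >= cutoff_day]

plt.hist(recent_lead_times, bins=range(1, 12), edgecolor="black")
plt.xlabel("Lead time (days)")
plt.ylabel("Items delivered")
plt.title("Lead time histogram, filtered to the current system")
plt.show()
```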

We can see from the final histograms that the first group has achieved their goal. Their average lead time is the shortest. But the “tail” of their lead time distribution is also compact, and that’s how they solved the predictability challenge better than the predictability group (column 4 on the scoreboard). The throughput group (column 2) effectively switched to the money goal and did well, too. They pretty much hit the fitness threshold of 4 days. Their unpredictability (a couple of items with lead times of 10+ days) was mild and, as it turned out, inconsequential.

Tenth, this list is far from everything you can learn by playing GetKanban. A proper debrief (45–60 minutes) will uncover much more. I wrote only about several important things that were apparent to me from the scoreboard picture.
