Changelog for Zombie Scrum Symptoms Checker

Developing awesome products is a process of continuous discovery. In this release log, we’re tracking changes, bugs, mistakes, and insights learned from our work on the Scrum Team Survey.

If you have ideas, bug reports, or other feedback, let us know at We’re happy to hear from you. Our public Product Backlog is accessible here.

September 21

  • MINOR: We added a nicer account popup to the team dashboard. It gathers the commonly used account functions in one place (password reset, change subscription, sign out).
  • MINOR: The team dashboard now shows the badges that the team earned for the most recent snapshot. This was a popular request from many users.
  • MINOR: In the team dashboard, you can now see the progress of participants. This simplifies the process of identifying which participants could be removed (e.g. attempts that were started, but not completed). We show the progress bar until the survey is completed.
  • MINOR: We also show a quick visual summary of the changes between snapshots. The names of the dimensions are visible when you hover over them.

September 15

  • MINOR: The Team Dashboard is now styled similarly to the Scrum Team Survey Report. Over the coming weeks, we will be overhauling individual elements and adding new features based on a design by our designer Wim Wouters.
  • BUG: We fixed a bug that blocked “new participants” emails after the first. The cause of this bug was one of the layers of protection we implemented to prevent duplicate emails from being sent out. This is now resolved.

September 9

  • REFACTOR: We’ve greatly improved the response speed of the Scrum Team Survey. With the substantial growth of our database, we’ve changed how we store and load data from the database.

September 7

  • MAJOR: You can now add more people from your change team (Scrum Masters, team members, coaches, management) to the Team Dashboard. This removes the limitation of having only a single account to access the Team Dashboard. The person holding that account often turns into the “administrator” for the teams, and this is certainly not in line with our principle of giving the teams full autonomy. So it is now possible to share this responsibility. You can add as many accounts as you have teams.
  • MINOR: We now show a message when you’ve reached the maximum number of teams for your subscription, with the option to upgrade. The same goes for when your subscription is about to expire (within 30 days).

August 29

  • BUG: We fixed an issue that caused the menu items in the team report to duplicate.
  • MINOR: We clarified in the survey why it is helpful to still enter an e-mail address, even though it is now fully optional.
  • MINOR: The field for the team name is now at the end of the survey. We noticed that quite a few participants don’t fill in this field right away and then forget to go back and enter it, so we’re experimenting with a version where the name comes at the end.
  • MINOR: We changed the names of the tabs in the survey to correspond to the core factors we also present in the team report.
  • MINOR: We updated the notification that the starter of a snapshot receives for new participants. The URL in the notification now leads to the team report.

August 12

Although I’m on holiday, I still addressed some issues as they were discovered.

  • BUG: We discovered that the activation code was no longer usable after the subscription was changed. The issue was fixed.
  • BUG: We fixed the duplicate “How to improve” link that sometimes showed up in the report.

August 12

  • MINOR: Subscribers can now more easily re-invite teams for new snapshots from the Team Dashboard. Multiple snapshots for a single team can be used for the trend analyses we also offer to subscribers.
  • MINOR: E-mail addresses are no longer required in the surveys. This is part of our push to emphasize anonymity. Even though they are no longer required, we still recommend entering one if you 1) intend to invite other members of your team, 2) want to set a reminder at some point, or 3) want to be able to change your responses at a later date. After all, we need an e-mail address to send you a notification or a link to change a survey you completed.
  • MINOR: We now show all members explicitly which questions were already answered by someone else in the team and don’t need to be asked of everyone again. Team Size and Organization Sector are examples of such questions. We also reuse answers across snapshots for the same team. With this update, we show specifically which answers we reuse and offer the option to change them.
  • MINOR: The Team Dashboard now allows teams to remove any participant from a snapshot.

August 6

  • MINOR: When only 2 participants have participated in a team, we now also hide the scores for badges. Badges are visible for 1 participant, or for 3 or more. This protects anonymity in scenarios where only one other person participated.
  • MINOR: It is now possible to share a team report. Many people asked for a report that can be used in Sprint Retrospectives and doesn’t show the personal scores of the participant who is sharing it. We added a “Share” option in the menubar. Team reports become available when at least 3 members participate, so as to protect anonymity.
  • MINOR: Because the number of customers is growing rapidly, we spent this Sprint mostly on automation of administrative tasks. For example, invoices are now imported into our accounting system automatically, and payments are reconciled automatically as well. This doesn’t immediately add value to our users, but it saves us valuable time (and mistakes) that we can now spend elsewhere.
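
The badge-visibility rule above (visible for 1 participant, hidden for exactly 2, visible for 3 or more) reduces to a single check. A minimal sketch; the function name is illustrative, not the actual code:

```python
def badges_visible(participant_count: int) -> bool:
    """Badges are shown for 1 participant, or for 3 or more.

    With exactly 2 participants they are hidden: either person could
    otherwise infer the scores of the only other participant.
    """
    return participant_count == 1 or participant_count >= 3
```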

August 1

  • MINOR: We changed the names of various areas to be more consistent with our research. “Ship it Fast” is now “Responsiveness”, “Build What Stakeholders Need” is now “Stakeholder Concern”, “Improve Continuously” is now “Continuous Improvement”, “Self-Organize” is now “Team Autonomy” and “Quality” is now “Concern for Quality”. The new names more accurately describe what is measured and align with the naming we use in our scientific publications. This has no impact on the profiles or the scores otherwise.
  • MINOR: We improved the measurement model for the survey we send to stakeholders based on the data from 460 stakeholders we’ve collected to date. We were able to reduce the stakeholder survey to 12 questions (from 17). The areas that we called “Stakeholder Experience: Responsiveness” and “Stakeholder Experience: Engagement” turned out to be mostly the same in the data, so we combined them into “Stakeholder Experience: Responsiveness”.
  • MINOR: We removed the two-question scale “Team Value”. We used this scale to ask teams to rate the value of their own work. Data analysis showed that this scale was heavily biased and not a reliable indicator of actual value delivered to stakeholders. Teams that want to see how much value they are delivering should really ask their stakeholders with the Stakeholder Survey.

July 29

  • MAJOR: We added the ‘Actions’-section to the Scrum Team Survey. Here, teams can keep track of actionable improvements they intend to take to improve the results. We have big plans for this section. We see it as a great way to drive evidence-based improvements and monitor how they actually improve results over time (or not?). However, we’ll first test if teams are actually interested in tracking improvement actions from the Scrum Team Survey. Experiment started!
  • BUG: User account creation failed periodically. This issue happened every 24 hours and immediately pointed to expiring API tokens for Auth0. This baffled us for a while as our platform automatically refreshes tokens. But it turned out that the code responsible for user creation did not use new tokens even when available.
  • MINOR: From now on, we will no longer add version tags. This made sense when our ecosystem primarily consisted of one core service. But now that our ecosystem is spread out over a dozen services — each with its own version — the use of version numbers in this changelog has become less meaningful.
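
The Auth0 token bug described above is a common caching pitfall: a token is fetched once and then reused past its expiry. A minimal sketch of the fix under assumed names (`fetch_token` stands in for whatever call obtains a fresh API token; this is not the actual implementation):

```python
import time

class TokenCache:
    """Cache an API token, but refresh it before use once it has
    (nearly) expired, instead of holding on to the first token fetched."""

    def __init__(self, fetch_token, leeway_seconds=60):
        self._fetch = fetch_token          # returns (token, ttl_seconds)
        self._leeway = leeway_seconds      # refresh slightly before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # The original bug amounted to skipping this expiry check.
        if self._token is None or time.time() >= self._expires_at - self._leeway:
            self._token, ttl = self._fetch()
            self._expires_at = time.time() + ttl
        return self._token
```

The fix boils down to routing every consumer of the token through `get()`, so a refreshed token is always picked up.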

1.0.422-production July 19

  • MINOR: When you complete the survey, you are now automatically redirected to the results. Earlier, we e-mailed the link to the profile separately after completion. We implemented this flow because the generation of a profile (calculations, feedback collection) takes some time. However, e-mails are sometimes bounced by spam filters or corporate mail servers that decline external links altogether. When you complete the survey now, you are redirected to a “please wait” page that shows you the profile when generation is done.
  • MINOR: We added more regions to the question about cultural background.
  • MINOR: Downloadable resources in the feedback can now be downloaded directly. We removed Shopify from the process. Subscribers download all digital downloads for free. Non-subscribers can still access all downloads as well, but the priced downloads (like some of our DIY workshops) point to the Shopify page where you can buy and download it.
  • MINOR: We removed the open questions for country and city. We replaced these questions with a new closed question that only inquires about the region — this is sufficient for our research purposes.

1.0.419-production July 15

The new team profile
  • MAJOR: Today, we launched the new profile for teams. We listened to your feedback and made it simpler, more actionable, and slicker. Wim Wouters from did a fantastic job. At least, we think so. We’re happy to hear whatever feedback you have for us. This new profile, and the associated style, will be applied to other parts of the Scrum Team Survey soon. The new profile also provides a preview of features we’re working on, such as “alerts” and a way to track improvement actions. All teams — including those that participated before — can now view their results in the new profile.
  • MINOR: We’ve lowered the bar for teams to earn certain badges.

1.0.416-production-June 30, 2021

  • BUG: We discovered that in some cases there is no e-mail set to notify when new people participate in a team. Normally, this should be the e-mail address of the first participant for that team. We patched missing values and fixed the bug that caused this.
  • MAJOR: We implemented secure and more flexible user accounts for the Team Portal. Instead of rolling our own solution, we opted to implement Auth0 to offer a secure and well-tested login process. Auth0 also makes it easier to implement single-sign-on in the future and multiple users for larger organizations.

1.0.414-production-June 16, 2021

  • REFACTOR: We improved the way in which data is stored to make the survey remarkably faster and more efficient. Previously, we stored all data about participants, snapshots, and teams in a single table. That worked well for a while, but the size of the database meant that queries started taking longer and longer. The structure in the database now follows the domain more clearly, which also reduces confusion and mistakes.
  • MINOR: We’ve changed the styling of the Liberators Portal and the Team Dashboard to be consistent with the new homepage. The mobile experience is also better (though not perfect yet).
  • ISSUE: We are investigating a potential memory leak in our API.

1.0.408-production-June 12, 2021

  • MAJOR: We launched a new homepage for the Scrum Team Survey. Our new website more clearly explains what the survey is and how it works.
  • MAJOR: We implemented Stripe to offer a more user-friendly way to let people buy a subscription. The previous process relied on our webshop and was — admittedly — clumsy and manual. It did allow us to test whether or not there was sufficient appetite for a subscriber model though.

1.0.398-production-May 18, 2021

  • MINOR: We added a three-item scale for “Product Discovery”. This scale taps into the ability of teams to proactively discover the needs of stakeholders. It falls under the theme “Build What Stakeholders Need”.
  • MINOR: We added a few more questions to existing scales to improve their ability to measure the right variables. The scale for Sprint Review Quality now includes questions about the usefulness of this event (thanks to a suggestion by Dave West from ). The scale for Quality now also connects more strongly to the “Definition of Done”.
  • MINOR: We improved the descriptions for the core factors (e.g. Ship It Fast, Improve Continuously). Aside from explaining why they are important, we now also explain what we measure to determine their scores.
  • MINOR: At the end of the survey, it is now required to accept the terms of use and the privacy statement. Thankfully, both are very short.
  • MINOR: We removed the ‘Short’ version of the survey. Over time, the short and long versions moved closer to each other in terms of the number of questions. We also observed that few teams actually used the ‘Short’ version in the first place. And finally, the “Short” survey doesn’t include questions about valuable outcomes, which are among the most powerful sources of feedback we can give teams.
  • MINOR: When you’re starting a new survey, the platform now checks if you’re part of the team or not. If not, the survey gives you advice on how to invite teams to do it themselves. Over time, we’ve noticed that people outside teams (e.g. coaches, management) start surveys for teams. While this isn’t necessarily an issue, we really want Scrum teams to stay in control over their own continuous improvement. Even when it is done for the best of intentions, starting a survey for another group of people takes away some of their ownership.

1.0.387-production-May 14, 2021

  • MAJOR: Sometimes, we need to make a big change that has zero impact on your use of the Scrum Team Survey. We refactored our codebase this week to reflect the more hierarchical structure of data; from individual participants to snapshots, snapshots to teams, and teams to organizations. Although we used a temporary solution to virtually restructure the data to make it seem hierarchical, that solution both made development confusing and caused issues we wanted to avoid.

1.0.380-production-May 5, 2021

  • MAJOR: We migrated the survey and associated sites from to . URLs still pointing to the old domain will continue to work. This change is in response to feedback from several Scrum teams that worried the “Zombie Scrum” name would be off-putting to their members or stakeholders.

1.0.370-production-April 30, 2021

  • MINOR: Based on popular request, we’ve re-added the markers for the average score of a team, even when you’ve toggled the option to see the range of scores.
  • MINOR: Subscribers now receive an email when their subscription expires within 7 days. This is a good opportunity to renew a subscription. We’ll eventually automate this process with automated renewals, once there are enough subscribers to warrant the (quite high) investment.
  • BUG: The Liberators Portal used to show snapshots with 0 participants every now and then. This is now resolved.
  • BUG: We resolved an issue where some subscribers couldn’t log in to the dashboard since yesterday. It turned out that one of the data-migrations didn’t update one of the columns correctly (it did locally, not on production). Fortunately, we were able to resolve it quickly.
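
The reminder described above is, at its core, a date-window check; a sketch with assumed names:

```python
from datetime import date, timedelta

def expiry_reminder_due(expires_on: date, today: date) -> bool:
    """True when the subscription expires within the next 7 days
    (and has not expired already)."""
    return today <= expires_on <= today + timedelta(days=7)
```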

1.0.367-production-April 29, 2021

  • MAJOR: We launched our subscription model this week. As a result, we changed the language from “boost codes” to “subscriptions” and “activation codes” in the interfaces. You can continue using the boost codes if you purchased those before the switch. All boost codes have been automatically upgraded to the Liberator-tier with a limit of 5 teams.
  • MINOR: Following the change from boost codes to subscriptions, we added logic to our backend to track subscriptions.
  • MINOR: We added a status page to reflect the maturity of the infrastructure and the platform. We did discover that we didn’t have health checks yet for the survey itself and for the portal, so those were activated too.

1.0.338-production-April 21, 2021

  • MAJOR: The portal for Liberators now allows teams to track how their scores change over time, provided they also retake the survey every now and then. We implemented the first version of this feature (in 3 days), and will iterate on it and expand it further in coming Sprints — also based on your feedback. A boost code from our webshop is required to unlock this feature.
  • MINOR: From the Liberators Portal, it is now easier to retake a survey with your team and effectively add a new “snapshot”.
  • MINOR: When you retake the survey with your team, and you already have a boost code for your team, this code is now automatically applied to the new snapshot as well (previously, you had to apply the code to each new snapshot yourself).
  • MINOR: Added a support page to Liberators Portal to make it easier for users to reach us with issues (and a great way for us to learn about errors and inconveniences).
  • TECHNICAL: The Liberators Portal now has built-in “usage counting” to allow us to learn more about how you use our application. We don’t use third-party software for this. The only thing we count is how often a button is clicked (1 time, 2 times, 3 times, etc.) by any user across the application — so it’s not tied to any person, session, or login.
  • MINOR: The process of retaking a survey is now better supported by the Scrum Team Survey. Instead of the regular start page for a new survey, we now offer a special start page for teams that are retaking the survey.
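
The “usage counting” from the TECHNICAL note above can be as small as a named counter with no user data attached. A sketch; class and method names are illustrative:

```python
from collections import Counter

class UsageCounter:
    """Count how often each action (e.g. a button click) occurs,
    without storing anything about the person, session, or login."""

    def __init__(self):
        self._counts = Counter()

    def record(self, action: str) -> None:
        self._counts[action] += 1

    def count(self, action: str) -> int:
        return self._counts[action]
```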

1.0.338-production-April 19, 2021

  • BUG: The portal now displays the correct number of participants per snapshot. It used to include participants that didn’t provide any answers and would’ve been cleaned up after 14 days anyway.
  • BUG: The portal now displays the correct number of snapshots per team. The earlier version included snapshots from un-boosted teams in the calculation. This is a rare condition, but it was an easy fix :)
  • REFACTOR: The codebase for the Liberators Portal now performs all its integration tests (API and UI) on the Alpine-based Docker image that is built as part of the CI/CD flow, which brings the tests as closely as possible to the production environment.
  • REFACTOR: We’ve started refactoring the codebase to an emerging domain-driven model where data is a hierarchy of: organizations > teams > snapshots > participants. Before, we abstracted everything into “Responses”, but that is increasingly causing confusion and potential for bugs. Because refactoring is done with a Strangler-like pattern, we can continue deploying without issue.
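
The hierarchy the refactoring above moves toward (organizations > teams > snapshots > participants) can be pictured as plain nested types. A sketch with illustrative field names, not the actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Participant:
    email: str = ""
    completed: bool = False

@dataclass
class Snapshot:
    participants: List[Participant] = field(default_factory=list)

@dataclass
class Team:
    name: str = ""
    snapshots: List[Snapshot] = field(default_factory=list)

@dataclass
class Organization:
    name: str = ""
    teams: List[Team] = field(default_factory=list)
```

Compared with abstracting everything into flat “Responses”, the nesting makes questions like “how many participants does this snapshot have?” direct lookups instead of filtered queries.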

1.0.334-production-April 9, 2021

  • MAJOR: Today, we released a much-requested feature to “manage” your teams from a single dashboard. We’ve dubbed it the Liberators Portal. It is only accessible for licensed teams that have been boosted with a boost code from our webshop. Your dashboard shows all teams that you used the same boost code for, and allows you to start new snapshots (surveys) for a team, add new teams, or remove responses from surveys (e.g. because they were tests or mistakes). In the coming weeks, we will expand this portal with more features, as well as general improvements to the user experience.
  • BUG: Thanks to a report from a Scrum Master, we discovered that one of the questions in a new version of the “Release Automation” scale was accidentally reverse-scored. We corrected this issue and regenerated all reports.

1.0.329-production-April 1, 2021

  • MINOR: Whenever you boost a team, every member in that team now receives a nice email to explain the various benefits and premium features that are now enabled for them.

1.0.328-production-March 30, 2021

  • MINOR: We’ve added a simple feedback form to collect feedback in a more structured format. The form also includes upcoming ideas for features.
  • MINOR: We noticed that some design elements started going in different directions, and we harmonized those again for consistency.

1.0.317-production-March 30, 2021

  • MINOR: In an effort to reduce the “drop-off” rate for participants, we’ve analyzed at what points in the survey this usually happens. We changed the order of the tabs and removed some unnecessary questions.

1.0.315-production-March 29, 2021

  • MINOR: It is now possible to skip questions in the survey if they are not relevant to your situation. Although this was technically already possible by simply not providing an answer, this wasn’t clear in the interface. So we added an explicit “doesn’t apply” option to questions where it is relevant.
  • MINOR: Streamlined the messages that sometimes appear in the profile to indicate that few people participated, or that you need to invite stakeholders.
  • MINOR: We added a progress bar to the survey, which makes it easier to see how much you have left to enter.
  • BUG: On the homepage, the legend showed the wrong color for teams (pink, where it should’ve been blue).
  • BUG: It is now possible again to resume a survey and see your earlier responses. Thanks to a bug report, we discovered that the reminder email sent people to a fresh survey.

1.0.314-production-March 28, 2021

  • MINOR: In our analyses, we noticed that some participants are inclined to give overly optimistic responses to some questions. This may lead to biased and inflated results for some teams, and may take away a valuable opportunity for reflection. So we implemented a mechanism to reduce this bias in the reporting. For this, we used three items from the SDRS-5 scale for social desirability to calculate regression coefficients for all the individual items with a structural equation model in AMOS. We then use these coefficients to “partial out” the part of a score on each individual item that is linked to the score on social desirability for a respondent. We apply this correction only for participants who score one standard deviation above the population average for social desirability, and the correction strengthens the larger the deviation is (to a maximum of -1.35 points).
In this example, you can see how — after the statistical correction — the scores of one participant who scores very high on social desirability (blue) now lie closer to the team average. We could amplify the correction to bring the blue scores even closer, but we don’t want to assume that this person is very biased rather than simply very positive.
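
Numerically, the correction described above might look like the following sketch. The one-standard-deviation threshold and the -1.35 cap are from the text; the coefficient would come from the structural equation model, and the function shape and names are illustrative, not the production code:

```python
def corrected_item_score(raw_score, coefficient, sds_score, sds_mean, sds_sd,
                         max_correction=1.35):
    """Partial out socially desirable responding from one item score.

    Applied only when the respondent scores more than one standard
    deviation above the population mean on social desirability; the
    correction grows with the deviation, capped at max_correction.
    """
    z = (sds_score - sds_mean) / sds_sd
    if z <= 1.0:
        return raw_score  # within normal range: no correction
    correction = min(coefficient * (z - 1.0), max_correction)
    return raw_score - correction
```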

1.0.310-production-March 25, 2021

  • MAJOR: Teams can now specify the “benchmark” (for lack of a better word) they’d like to be compared with. Normally, teams are compared with a representative sample of other teams. But we’ve now added the option to select more specific ones, like “experienced teams”, “teams that ship fast” and “teams that deliver a lot of value”. This option is available for teams that have a boost code.

1.0.309-production-March 23, 2021

  • MAJOR: Teams can now toggle between the range of scores in their team or the average scores (see below). This is a popular request, as it allows teams to see where they (dis)agree the most. This feature is available — with other benefits — when you purchase a boost code for your team for 1, 6 or 12 months (at €10/month) in our webshop. We hope that features like these can generate some revenue to fund further development of the survey.
  • REFACTOR: Because we’re adding new features, and thus more complexity, we also removed some statistical calculations from the backend that we’re not using anyway.

1.0.297-production-March 18, 2021

  • MINOR: We’ve begun testing a model that might help us generate some revenue to fund the further development of this platform. Teams can now purchase a “Boost Code” to gain free access to otherwise paid content (like some of the DIY workshops that are recommended in the profile) and other features (still in development).

1.0.287-production-March 12, 2021

  • REFACTOR: To optimize for performance, we used to store scores for respondents, samples, and the entire population in the same MySQL database as the survey. However, as the number of teams increased, this made the database swell to hundreds of megabytes, slowed down performance, and made it difficult to back up quickly. We moved this cache to a separate Redis store. An added benefit is that this also fixed a few performance-related bugs that sometimes caused a team to report one fewer respondent than actually participated.
  • MINOR: We’ve changed the range for what constitutes an “average” score to the 15th–85th percentile (it was the 25th–75th). Now that we have a better sense of the spread of scores in the population of Scrum teams, we feel that our initial range too easily resulted in either too positive or too negative feedback. We’ll continue monitoring the range.
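
Classifying a score against a percentile range can be sketched like this (the band labels and function name are illustrative, not the production code):

```python
import bisect

def feedback_band(score, population_scores):
    """Return 'below average', 'average', or 'above average' based on
    where a score falls in the population (15th–85th percentile band)."""
    ordered = sorted(population_scores)
    percentile = bisect.bisect_left(ordered, score) / len(ordered)
    if percentile < 0.15:
        return "below average"
    if percentile > 0.85:
        return "above average"
    return "average"
```

Widening the “average” band from 25–75 to 15–85 means fewer teams land in the “too positive” or “too negative” tails.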

1.0.284-production-March 8, 2021

  • MAJOR: We simplified the profile by combining the tabs for “Recommendations” and “Our feedback” in the profile. Now, our feedback and improvements are bundled for each topic, which is far more user friendly.
  • MINOR: An observant reader noticed that the feedback under “How to Improve” for “Ship It Fast” and “Build What Stakeholders Need” was the same. We fixed this by adding the correct copy.
  • BUG: If you didn’t invite any stakeholders, the profile would still give you feedback based on the topics we measure for stakeholders. With a score of 0 for those scales, this feedback was rather pointless, so it is no longer shown.
  • MINOR: Under “How to improve”, the resources that we now offer are all based on actionable content (mostly DIY workshops).
  • MINOR: Because the survey now recommends some paid Do-It-Yourself Workshops, we made it clear which content is free and for which we ask a small price.

1.0.278-production-January 24, 2021

  • MINOR: We improved the error handling in the survey. In some rare situations, people got stuck when they didn’t enter a country and skipped all the way to the end of the survey. The validation message was only shown on the first page. Now it is shown on the final page too.
  • MINOR: We improved the profile by showing an informative icon when fewer than 3 stakeholders participate. In this case, no averages are shown for stakeholders. This was a bit confusing for some, so we added an extra icon and message to explain why.
  • MINOR: Based on feedback, we added the invitation link for stakeholders to the emails as well.

1.0.274-production-January 20, 2021

  • MINOR: We added more feedback and recommendations where possible. We also adjusted two badges to include scores from stakeholders. First, there is “Unleash The Stakeholders”, which is acquired when many stakeholders participate in your team. Then there’s “Customer Love”, which you receive when stakeholders are happy with your work — let’s see if you can get them :)
  • MINOR: A loading icon is now shown in the profile when it is still being generated.

1.0.273-production-January 19, 2021

  • MINOR: In your profile, you can now hide your personal scores. This was a recurring request from people who’d like to share the profile with their team on a shared screen without disclosing their personal answers.
  • MINOR: The topic “Sprint Goals” is now under “Build What Stakeholders Need”, where it also loads most strongly in our statistical model.
  • BUG: We discovered that “Psychological Safety” and “Cross-functionality” were accidentally hidden in the profile. Both are visible again.
  • MINOR: We frequently regenerate all profiles based on new insights and an updated measurement model. In some cases, we include new topics that were not measured before. So we now add a message to your profile when we regenerate it and when it is older than 6 months to warn you that you may not be fully benefitting from updates to our model.
  • MINOR: The homepage now shows the scores of stakeholders and teams. At the moment of writing, there are no stakeholders in the database yet. But we hope this will soon change.

1.0.268-production-January 18, 2021

  • MAJOR: Today marks the release of a feature that has long been on our minds. In an effort to start more powerful conversations about things that matter, teams can now invite stakeholders to offer their perspective on what the team generates for them. This is a great way to validate the perspective from the team against those of actual stakeholders, like users and customers. Stakeholders participate with a shorter survey, and the results are compiled into the team’s profile (provided at least 2 stakeholders participate to protect anonymity). Because we want Scrum Teams to remain in control over who receives a profile, stakeholders do not receive a profile after completion. It is up to teams to decide what they want to share.
  • MINOR: In your profile, you can now see how many people entered answers for each topic (people might skip some). Note that we only show the aggregated scores of others if more than 2 people participated.
  • MINOR: Based on usage metrics, we’ve optimized the profile. Results are now shown first. Then, the team and stakeholders can be invited.
  • MINOR: Several topics have been moved to other domains based on their factor loadings from our ongoing psychometric validation. For example, “Quality” was moved from the domain “Ship It Fast” to “Continuous Improvement”. The reason for this is that Quality correlates much more strongly with topics under “Continuous Improvement” than with topics under “Ship It Fast”. The net result of this change is that the average scores for the various domains are now even more accurate.

1.0.239-production-January 6, 2021

  • MAJOR: Based on the data we’ve collected to date, we’ve re-analyzed and improved the survey to make it more reliable, shorter, and more accurate. We were able to remove 43% of the questions without affecting reliability. The fit of our measurement model has improved significantly, and well beyond required thresholds (CFI = .967, RMSEA = .030, GFI = .932). More importantly, we were now able to test our proposed four-factor model, which seems to fit the data really well (and better than simpler or more complex models) — although more thorough analyses are necessary to validate these preliminary findings.

1.0.227-production-December 16, 2020

  • IMPROVEMENT: We improved the user experience of the profile by hiding additional resources behind a click. Some users found the sheer number of helpful resources overwhelming.
  • IMPROVEMENT: We implemented a very simple usage counter to count how many times certain features are used. This allows us to A/B-test new features and improvements. We don’t store any information about the visitor who used the action (like IP, user-agent).
  • TECHNICAL: We moved all logic related to content (blogposts, podcasts) and their recommendations to a separate microservice that is easier to deploy, easier to test, and simpler to modify. This is part of an ongoing process to refactor a somewhat monolithic back-end API into smaller services that communicate through a message queue.

1.0.205-production-December 9, 2020

  • IMPROVEMENT: We updated the homepage to make it more accessible to Scrum Teams from all sorts of organizations. From feedback, we learned that the “Zombie Scrum” metaphor can be misunderstood or make people wonder how objective the survey itself is (it is). Because we feel more strongly about helping Scrum Teams improve than about the merits of the metaphor itself, we decided to de-emphasize “Zombie Scrum”.

1.0.201-production-December 4, 2020

  • IMPROVEMENT: We vastly increased the relevance and usefulness of the resources that are shown in your profile. Instead of a manual selection — which always becomes stale quickly — we now draw from our growing catalogue of content.

1.0.194-production — November 26, 2020

  • MAJOR: In the profile for teams, we split the recommendations we make across the “So What” and “Now What”-tabs. We hope this makes it easier to purposefully first make sense of the results, and then consider actionable improvements. While working on this, we also updated the texts to match the Scrum Guide 2020 (for the profile) and added new resources.
  • BUG: Fixed a minor incorrect link for the various user groups we mention under ‘Find Help’. We want to make sure you find the right group :)

1.0.188-production — November 25, 2020

  • MAJOR: With the Zombie Scrum Survey, we want to support teams all over the world in their continuous improvement loop. In this intermediate release, we took the first step towards this by greatly extending the profile you receive with additional recommendations and suggestions for how to make sense of the results with your team, identify next steps and evaluate your progress.
  • IMPROVEMENT: We added some of the artwork that Thea Schukken created for our book to the website.
  • BUG: Fixed a bug that caused the scale markers to sometimes pop over the dialog boxes.

1.0.182-production — November 24, 2020

  • MINOR: In your profile, you can now set a reminder to retake the survey in the future. You can select how far into the future you want to receive this reminder. This is a great way to include the survey in your continuous improvement loop and determine to what extent your adaptations actually worked. At some point in the near future, we want to implement a feature to actually compare your scores over time. For now, you can do this manually by printing the profile for each run you do with your team.
  • IMPROVEMENT: We renamed the ‘master’-branch to ‘production’.

1.0.170-master — November 13, 2020

  • MINOR: Since we have published our book now, we updated the homepage to reflect this.
  • IMPROVEMENT: We added significantly more feedback to the profiles, including more links to relevant materials. We also included references to specific experiments from our book, so that the survey now helps you to find the experiments that are likely to help you.

1.0.144-master — June 2, 2020

  • MAJOR: We significantly overhauled the survey to improve the quality based on the data we’ve collected to date. This included the removal of items that (statistically) didn’t seem to matter to the bigger picture. We also included new items and scales that make sense from scientific literature. We have retroactively updated all previously generated profiles. Because profiles generated before the date of release don’t include the new items, some scales are missing in old reports. You can simply do a new run to get a complete profile based on the new version.
  • IMPROVEMENT: We’ve added more new feedback rules that align with the findings of our research. So the new rules are less based on our opinion and experience, and more on the data we’ve collected to date.
  • IMPROVEMENT: In what was a mix of a bug and an improvement, we made it easier for teams to get badges. In the previous iteration, a badge would only be awarded to a team if every participant got that badge. But that put the threshold way too high. So now, the majority of participants have to achieve a badge in order for the team to earn one. We’ve updated all profiles.
  • IMPROVEMENT: We’ve added the option to donate to some of the e-mails and when completing a survey. Developing this free tool costs us about €30 a month for hosting and roughly €1,500 a month to maintain and develop. So we hope you’re willing to support us.
  • IMPROVEMENT: Uncompleted surveys are now removed if they remain untouched for 14 days after the last change. This saves storage.
  • TECHNICAL: We’ve optimized performance of the application in various locations, mostly due to the large dataset that we now have.
  • TECHNICAL: We’ve upgraded to .NET Core 3.0 and added health checks to the site and the services behind it.
  • TECHNICAL: We migrated e-mail templates to SendGrid.
  • TECHNICAL: Automated integration tests are now run on AppVeyor instead of through a complicated two-step process on Octopus Deploy and AppVeyor. The new process is much safer, more reliable and faster.
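The stale-survey cleanup described above (removing uncompleted surveys untouched for 14 days) boils down to a simple filter on the last-changed timestamp. A minimal sketch, assuming a hypothetical record layout — the real schema is not part of this changelog:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=14)  # matches the 14-day rule above

def find_stale_surveys(surveys, now):
    """Return ids of uncompleted surveys untouched for 14+ days.

    `surveys` is a list of dicts with illustrative fields
    (`id`, `completed`, `last_changed`).
    """
    return [
        s["id"]
        for s in surveys
        if not s["completed"] and now - s["last_changed"] >= STALE_AFTER
    ]

surveys = [
    {"id": 1, "completed": False, "last_changed": datetime(2020, 5, 1)},
    {"id": 2, "completed": True,  "last_changed": datetime(2020, 5, 1)},
    {"id": 3, "completed": False, "last_changed": datetime(2020, 5, 30)},
]
print(find_stale_surveys(surveys, datetime(2020, 6, 2)))  # [1]
```

Completed surveys are never touched; only abandoned attempts past the cutoff are candidates for deletion.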

1.0.125-master — May 19, 2020

  • IMPROVEMENT: We added a question at the start of the survey to check how a respondent will be participating. We’ve noticed that many give the survey a try with fake data first, before participating again with real data. When we know if this is the case, we can filter out these responses when doing analyses or when calculating population averages.
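Filtering out these try-out responses when computing population averages can be sketched as follows. The field names (`intent`, `score`) are hypothetical stand-ins for the actual data model:

```python
def population_average(responses):
    """Average score over real responses only.

    Each response is a dict with an illustrative `intent` field
    ("real" or "test") set by the question described above.
    """
    real = [r["score"] for r in responses if r["intent"] == "real"]
    return sum(real) / len(real) if real else None

responses = [
    {"intent": "test", "score": 1.0},  # a try-out with fake data
    {"intent": "real", "score": 4.0},
    {"intent": "real", "score": 5.0},
]
print(population_average(responses))  # 4.5
```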

1.0.123-master — January 17, 2019

  • IMPROVEMENT: In order to further protect the privacy of participants, we removed Google Analytics from the website, as well as any third-party code that sets cookies or tracks users. Our website now sets one functional cookie when you start a survey, and only to allow you to continue the survey later should you close your browser;

1.0.121-master — September 30, 2019

  • BUG: A person who started a survey for a team discovered that notifications about new participants were not sent to him, but to the new participants instead. We fixed this bug;
  • DISCOVERY: A new — and in hindsight obvious — feature was discovered when a user noted it would be awesome if the team report showed only the scores of the team and the total population, not also their own individual score. When discussing the results with the team, this keeps the focus on the team’s scores rather than on the results of the person who printed their profile for the team;

1.0.119 — September 27, 2019

  • BUG: When more people from your team participated, the breakdown in your profile would sometimes show the wrong scores for the team (essentially repeating the same set of 5 scores);
  • BUG: The profile would even show dimensions that you (or other members) did not answer any questions for, like ‘Valuable Outcomes’ which is only measured in the extended survey. Dimensions without any scores for you and your team are now hidden;
  • BUG: The profile would fail to show any results for users who did not enter any responses on the survey themselves, but did invite a team to do so. We fixed this bug so that the profile now shows team scores;
  • DISCOVERY: We discovered a use case where users start surveys for teams without filling in the initial survey. To support this use case, we would have to implement a feature where people can start surveys and invite people without actually going through the survey. Although this use case is understandable, we are hesitant to support it as we want to encourage teams to use this survey themselves, not because others want them to;
  • IMPROVEMENT: Scores for teams are now based on medians instead of averages. Although averages are fine for larger groups, medians are more stable in smaller groups and/or with extreme scores, as averages are more sensitive to individuals with extreme scores. All existing profiles have been updated;
  • IMPROVEMENT: We now show team scores when 3 or more people have participated. The initial threshold of 5 was a good start, but too high. We feel that anonymity is still protected when scores are only shown for 3 or more participants;
  • TECHNICAL: We use a message queue for notifying services when new surveys are completed. One of these services accepted messages even though it failed to process them correctly. That service now throws an exception, leaving the message in the queue for later pickup;
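The median-based team scores and the 3-participant anonymity threshold described above can be sketched in a few lines. This is an illustrative sketch, not the actual implementation; it uses Python's standard `statistics` module:

```python
from statistics import mean, median

MIN_PARTICIPANTS = 3  # below this, team scores stay hidden for anonymity

def team_score(scores):
    """Median team score, or None below the anonymity threshold."""
    if len(scores) < MIN_PARTICIPANTS:
        return None
    return median(scores)

scores = [4.0, 4.5, 5.0, 1.0]  # one participant with an extreme score
print(mean(scores))        # 3.625 -- the average is pulled down by the outlier
print(team_score(scores))  # 4.25 -- the median stays close to the group
print(team_score([4.0, 5.0]))  # None -- too few participants to show a score
```

The example shows why medians suit small teams: a single extreme respondent shifts the average substantially but barely moves the median.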

The Liberators: Unleash The Superpowers Of Your Team



Written by Christiaan Verwijs

I liberate teams & organizations from de-humanizing, ineffective ways of organizing work. Passionate developer, organizational psychologist, and Scrum Master.
