Release Log for The Scrum Team Survey
Developing awesome products is a process of continuous discovery. In this release log, we track changes, bugs, mistakes, and insights learned from our work on the Scrum Team Survey.
We’ll be on a summer break until August. Development efforts will be minimal until then. Bugs and other issues will of course be addressed. Our summer break allows us to record podcasts, write blogposts and create other content around the Scrum Team Survey. See you back after summer!
July 1: More Getting Started Guides
Last week, we added a helpful “Getting Started” guide to the Team Dashboard. This week we continued our effort to make it easier to get started. We added four more guides to the Team Dashboard:
- How to understand our model for effective teams
- How to use evidence-based feedback
- How to keep track of actions
- How to group teams
Each guide is accompanied by helpful screenshots, tips, and a step-by-step approach. We also used this Sprint to improve the user experience of starting and completing a guide.
Next week signals the start of our summer break. This is where we record new podcasts, write blog posts and create helpful content around the Scrum Team Survey. And somewhere, Barry and I will also be taking our holidays with our families. The development efforts on the Scrum Team Survey will be minimal during that time.
June 22: Getting Started Guides for easy onboarding
Our users often tell us how much they love all the features that are offered in the Team Dashboard. At the same time, this can also be overwhelming to new users. So this Sprint we’re adding Getting Started Guides to simplify the process of setting up the Team Dashboard.
Upon first entry, you will now be greeted with a Getting Started Guide that offers a few concrete steps to discover and set up your dashboard. Each step provides details on where to click and is accompanied by an animated use case. You can close the guide at any time and return to it by clicking “How To Guides” in the header. Or just reload the page. However, if you’re not interested in the guide you can also click “don’t show again” to hide it completely. You can always re-activate the guide later.
We also made some other changes this week:
- Based on feedback from Applaudo, we modified the export of the team list to also include the team type, a running snapshot counter, and the factor type (core or sub).
- User Russell suggested different wording for “Snapshots”, and we received similar feedback from others. So we now use “Survey” where it makes sense.
Now that we have this infrastructure, we will be adding more guides to cover more advanced features in the next Sprint.
June 18: An improved subscription flow, and the possibility to pay with SEPA debits
This Sprint we implemented a new design for our subscription process. Although the initial flow worked, it was limited in payment options (only credit card) and didn’t fit the overall style of our app. So this week we launched a new subscription flow. We made the following improvements:
- You can now pay with a credit card or with a SEPA debit. For the latter, you need to provide a valid IBAN (European bank account). SEPA Debits are often easier for European-based customers.
- The new design offers a much clearer breakdown of the costs, including taxes and discounts.
- We more clearly explain how the subscription flow is secure, and what we don’t do with your data.
We will be revisiting the subscription flow in a few Sprints from now. Our aim is to allow users to transition straight to the Team Dashboard after their subscription is created. But before we start work on that, we first want to improve some of the onboarding that happens as users first access the Team Dashboard — this will be the focus of our next Sprint.
June 10: Switch to a yearly subscription to reduce invoices & simplified subscription management
We are using this and coming Sprints to improve the process of subscribing to the Scrum Team Survey and getting started with it, particularly in large organizations. Although much of this Sprint was spent on the conceptual design of a new subscription and onboarding process, we also delivered two frequently-requested improvements in this area:
- New subscribers can now choose an annual subscription instead of a monthly subscription. This effectively reduces the number of invoices that need to be booked and accounted for, and the number of credit card charges, from 12 per year to 1. If you are already subscribed, you can also upgrade to the annual model directly from the Team Dashboard now (see below). Although you can always flexibly add more teams, the annual subscription is non-refundable. So pick it if you’re sure you’ll be using it for at least a year.
- It is now possible to manage your subscription directly from the team dashboard. Before, you had to go through a somewhat unwieldy process of requesting a secure link which we then emailed to the billing email address. Although this process still exists, users who have “admin” rights in the Team Dashboard can now also access it directly.
- We added a friendlier message when you try to log in to the Team Dashboard with an expired subscription, along with the option to renew your subscription.
- We no longer send any email that your invoice for a new period is ready. Stripe also sends these, and we didn’t want to duplicate the emails.
- We no longer automatically cancel subscriptions where the payment failed consecutively for some reason. When payments fail, we always reach out to resolve the issue.
In the coming weeks, we’ll implement a new design for the subscription process. Our designer created these mockups this week with us:
We are also going to add a number of features to make it simpler for customers to get started with the Scrum Team Survey, particularly if you have many teams. This includes:
- More guidance on how to get started with the tool, with clear pointers and helpful hints, but also with videos and a Wiki.
- More guidance on how to use the Scrum Team Survey with many teams.
- A bug tracker and wiki to collect feedback and to share experiences and knowledge from us and participating teams.
- A mail flow to support new subscribers by providing them with helpful hints and tips at the right time (and not all at once).
Let us know if you have more ideas or feedback. We’d love to hear it!
June 3: Export team results & performance boost
This week, we added a button to download (most of) the data we present in the team list as a .csv file. You can import this file into your own analytical software (like PowerBI) or go nuts with Excel.
The export contains the following columns:
- Team name: The name of the team that the data in the row belongs to.
- Snapshot: The date of the snapshot of this team. If a team has multiple snapshots, this column shows different dates. The remaining columns contain data for this snapshot of this team.
- Factor: The name of the factor the results are for. These correspond to the core factors we also show in the results, like “Continuous Improvement” and “Responsiveness”.
- Score: The score for this factor for the snapshot of the team. Scores are normalized on a scale from 0 to 100.
- Delta: The change in the score for this factor since the previous snapshot for this team. If no previous snapshot for the team exists, the result is 0.
- Participants: The number of people that completed or started their participation for this snapshot. This value is the same for all rows of a snapshot.
- Participant Rate: The progress of the number of participants for the snapshot compared to the “limit” that is set for the snapshot. If no “limit” was set, we can’t calculate the progress and thus show a text warning. This value is the same for all rows of a snapshot.
- Actions (Done, Open, Impeded, Canceled): The number of actions, broken down by status, for this team at the time of export. We only show these values for the most recent snapshot of a team.
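To illustrate how the Delta column relates to Score, here is a small Python sketch that recomputes deltas from an export. The header names mirror the list above, but the exact names in the real .csv file may differ, and the sample rows are made up for illustration:

```python
import csv
import io

# Hypothetical sample mirroring the export described above; the real
# file's headers and values may differ.
sample = """Team name,Snapshot,Factor,Score,Delta
Apollo,2022-05-01,Responsiveness,62,0
Apollo,2022-06-01,Responsiveness,70,8
Borealis,2022-06-01,Responsiveness,55,0
"""

rows = list(csv.DictReader(io.StringIO(sample)))

def recompute_deltas(rows):
    """Recompute each row's Delta: score minus the previous snapshot's
    score for the same team and factor (no previous snapshot -> 0)."""
    last = {}  # (team, factor) -> previous score
    deltas = []
    for r in sorted(rows, key=lambda r: (r["Team name"], r["Snapshot"])):
        key = (r["Team name"], r["Factor"])
        score = float(r["Score"])
        deltas.append(score - last.get(key, score))
        last[key] = score
    return deltas

print(recompute_deltas(rows))  # [0.0, 8.0, 0.0]
```

Sorting by the Snapshot string works here only because the sample dates are in ISO format; a real export may need explicit date parsing.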
Performance boost for large environments
We also used this week to improve the performance of the Team Dashboard for larger customers (20+ teams). The initial load time of the dashboard often came in at 12–15 seconds. This is always a nice opportunity to take a sip of coffee, but it's also a bit annoying if you’re visiting the dashboard frequently. So we streamlined the code and cut the load time to roughly a third: if it took 15 seconds before, it now takes 5 seconds.
May 27: More Insights Into “Actions” and Participation Rate
Our goal for this Sprint was to address several high-value feature requests from one of our most loyal customers, Applaudo Studios. They’ve been successfully deploying the survey with a large number of teams, and their feedback has been helpful. In particular, they are helping us understand which features are particularly useful to coordinate the survey across many teams. So this week we added:
Actions can now be marked as “canceled”
It is now possible to mark “Actions” as canceled or obsolete. This addresses scenarios where improvement actions that were identified by teams are no longer relevant due to changes that happened in the meantime. Although it was already possible to remove such improvement actions altogether, this also means you lose track of them. So we added the option to mark actions as “canceled” so that you can still keep track of these actions. While we were at it, we also made some small improvements to how actions can be managed.
Track participation rate & actioning from the overview
We also added more actionable insights to the overview in the Team Dashboard. It now shows both the number of open and completed actions. We also added a widget with the participation rate of teams. This widget is useful to track which teams are participating, and where a reminder or a nudge might be useful.
The participation rate in this widget is a weighted average of the participation rates of the most recent snapshot for each team. Please note that we only calculate the participation rate for snapshots where the total number of expected participants (the “limit”) is known to us.
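As a rough sketch of this calculation (the exact weighting is our assumption; here each team's most recent snapshot is weighted by its expected number of participants, the “limit”):

```python
def weighted_participation_rate(snapshots):
    """Weighted average participation rate over each team's most recent
    snapshot, given as (participants, limit) pairs. Snapshots without a
    known limit are excluded, as described above."""
    counted = [(p, limit) for p, limit in snapshots if limit]
    if not counted:
        return None  # no snapshot has a known limit
    total_participants = sum(p for p, _ in counted)
    total_limit = sum(limit for _, limit in counted)
    return total_participants / total_limit

# Two teams with known limits (8 of 10, 3 of 5) and one without:
# the rate is 11/15, roughly 73%. The third team is ignored.
print(round(weighted_participation_rate([(8, 10), (3, 5), (4, None)]) * 100))
```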
Track action status straight from the Team Dashboard
We also added more details about the actions that teams are undertaking to the Team Dashboard. For each team, we now show the number of actions identified by them as well as a grouping by status (open, completed, impeded, or canceled). Teams with at least 1 impeded action are now marked with the flame icon so that help can be provided to them.
- We resolved a bug that broke the “filter by status” dropdown under “Actions”.
- We refactored the code that generates the overview of teams, snapshots, and participants in the Team Dashboard. The code is now much faster.
- Next Sprint, we will add the option to export most of the data we present under “Teams” in the Team Dashboard as a CSV. This allows for more customized analyses.
- We’ll be using June primarily to redesign and improve the onboarding and subscription process. This is in response to a growing number of users, support requests and subscription-related inquiries (such as annual and enterprise licenses).
May 20: Many small improvements
We used this Sprint to address several small, but mostly unrelated, improvements that you suggested, along with a few non-critical bugs. Here are the changes we made:
- We made it simpler to re-take the questionnaire with your team at a later moment, and see the improvements. The navigation bar for the Team Report now features an item called “Re-take”. Here, you can either set a reminder or re-take the questionnaire right away.
- The “Tips” in the Team Dashboard and the Team Report now use better language for tips that we expect to have a small impact on team effectiveness. After consultation with users, we clarified that these tips have a small impact when nothing else is improved. Thanks to Tim for bringing this up!
- Several people asked us to add an “External URL” field to the “Actions” that are created from the Team Report. This allows teams to connect actions to administration in other tools, like JIRA or Trello. This is now possible.
- The Team Report no longer crashes when there are no participants yet. This was only possible directly from the Team Dashboard, but good to fix nonetheless. Thanks to the teams from Applaudo for helping with this!
- Some of our regular training sessions and workshops accidentally showed up under “Community” now and then. We removed those altogether.
- We fixed several bugs in the process whereby people can be invited to snapshots from the Team Dashboard. The first bug caused the entire list of invitees to disappear when you deleted one. The second bug caused “invited” to show prematurely next to invitees in the pending list of invitations.
- It is now possible to set (or change) the email address for a team in the Team Dashboard. We send notifications to this address in case of new participants or reminders. A consequence of this is that there is no longer a need to provide an e-mail address for each new snapshot for the same team. We now just use the e-mail address that is set at the team level.
- We squashed a small bug in the model visualization in the team report. It incorrectly warned that there was no snapshot to compare the results with. This message now no longer shows.
- We added more guidance on what to do with participants that respond far too fast to be realistic. Most importantly, if you notice this for more people in your team it might indicate low trust or safety, and you should discuss openly with the team whether they want to use the survey at all.
- When a team is activated with an activation code, we now send the email with the code to two e-mail addresses: the email address associated with the subscription and the email address of the team.
- We upgraded our server environment. Performance is much better!
May 13: Make Continuous Improvement More Tangible And Fun With Streaks And Badges
We created the Scrum Team Survey to drive continuous improvement. As part of this, we added a badge system to our platform early on that gives teams “badges” for outstanding achievements. With these light gamification elements, we want to help teams set tangible goals and inspire moments for celebration in an otherwise complex and messy environment.
However, you gave us a lot of useful feedback. The most important point (e.g. by David Hoodspith) was that the original badge system did not tell teams what they needed to do to achieve a badge. So getting one was more of a surprise than an intentional goal. Second, our badges did not reward improvements over time even though that is arguably the most important outcome of using the Scrum Team Survey.
So we redesigned our badge system over this and the previous Sprint. This new system replaces any existing badges. We checked with a sample of paying customers to make sure this was okay, and nobody objected.
Here are the highlights of our new badge system:
- Teams now earn streaks when they are able to sustain a great result over multiple consecutive snapshots. For example, a team scores a streak when it manages to improve “Team Morale” three times in a row. However, the streak is broken if the score goes down in between. It will be challenging to maintain long streaks, but very rewarding too.
- There are two types of streaks. The first (red) is awarded when teams are able to improve their score on important factors between consecutive snapshots. The second (blue) is awarded when teams score above the benchmark on important factors between consecutive snapshots. Teams can also get both if they do really well.
- Teams now earn badges for actions that encourage improvement and transparency. For example, we offer a badge when teams invite a certain number of stakeholders to join their snapshot. We also offer a badge when a certain number of actions is marked as “completed”. Badges have five levels (bronze, silver, gold, platinum, and diamond), each with a clear threshold.
- You can now click on any streak or badge to see what your team needs to accomplish to get it. For badges, you can also see what is needed to go to the next level.
- We redesigned the badges and streaks to make them more playful and inviting. Our designer Wim Wouters did a great job here!
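The red improvement streak described above can be sketched as a count of consecutive score increases. The exact rules are our assumption, not the actual implementation; for instance, we assume an unchanged score also breaks the streak:

```python
def current_streak(scores):
    """Count consecutive snapshot-to-snapshot improvements in a factor's
    score history, resetting whenever the score does not improve."""
    streak = 0
    for prev, curr in zip(scores, scores[1:]):
        if curr > prev:
            streak += 1
        else:
            streak = 0  # a drop (or, we assume, no change) breaks the streak
    return streak

print(current_streak([60, 65, 70, 72]))  # 3: three improvements in a row
print(current_streak([60, 65, 63, 70]))  # 1: the drop reset the streak
```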
The team report now also tells you when your team has earned badges or managed to gain streaks:
The badges and streaks are also visible for teams in the Team Dashboard:
We are stoked about this new badge system. It is much easier to understand and is far more rewarding for continuous improvement as it happens. We hope that it gives your team more opportunities to celebrate, and also more clear challenges to pursue in your journey towards increased Agility. Please let us know what you think at email@example.com.
April 29: Feedback for Agile Teams
In this Sprint, we completed rewriting all feedback texts to also suit Agile teams. We now offer detailed feedback, scientific background, and tips on how to improve in over 26 different areas. In addition, we:
- Updated the do-it-yourself workshop to diagnose your team with the Scrum Team Survey to reflect recent updates;
- Completed and submitted the third iteration of the academic paper “A Theory of Scrum Team Effectiveness” that we wrote with Professor Daniel Russo. The paper has been extensively expanded based on feedback from academic peers;
- Added over 20 Quick Tips specifically for Agile Teams.
Next Sprint, we aim to redesign the way the badges work in the Scrum Team Survey. We want to make it more rewarding for teams to improve continuously, and we want to design the badges accordingly.
April 22: Actionable feedback, also for Agile teams
This Sprint, we continued our efforts to offer more support for Agile teams (not just Scrum). We are currently rewriting the feedback to make it more relevant also to other kinds of Agile teams. This also allowed us to make other improvements:
- Under “tips”, we now offer a succinct summary for each tip.
- Each tip is now accompanied by scientific support where it is available. Interested readers can read the academic studies we’re basing the feedback on, or use it to start conversations in their organization.
- The “How to improve” section for tips is being rewritten in a “First …, Second …, Third …” format. This offers broad direction on how to improve in an area. We also offer many quick tips to get started right away, as well as do-it-yourself workshops to dig deeper into areas.
- We’ve also begun to add quick tips for Agile teams.
April 15: Support for other kinds of Agile teams
Today we release the first increment of the Scrum Team Survey that can be used by Agile teams that don’t use Scrum. For example, XP teams, Kanban teams, or teams that developed their own Agile approach.
We present Agile teams with a questionnaire that is very similar to the one we already use for Scrum teams. Many areas, like psychological safety, team morale, and value focus, are important regardless of what type of Agile team you have. In fact, 95% of the questions are the same. There are three areas that we changed for Agile teams, as compared to Scrum teams:
- Agile teams don’t get questions about “Sprint Review Quality”. Instead, we more broadly ask if teams actively seek feedback during iterations. We call this area “Feedback Gathering” (although this may change).
- Agile teams don’t get questions about “Sprint Retrospective Quality”. Instead, we ask more broadly about retrospectives and how they use them. We call this area “Retrospective Quality”.
- Agile teams don’t get questions about “Sprint Goals”. Instead, we ask questions about the presence and quality of shared goals. We call this area “Shared Goals”.
These changes are also reflected in the report that your teams receive:
We assume that Scrum and Agile teams are actually very similar in the scientific model that underlies the Scrum Team Survey. Until the data tells us otherwise, we aggregate both Scrum and other Agile teams in organization-wide reporting. However, if you like, you can filter on team type by setting a team filter in the team dashboard:
We want to thank our friends from Applaudo for their help in identifying the questions that are not suited to Kanban teams. We used their input to tailor the questionnaire for Kanban- and other types of Agile teams.
How about Agile teams without iterations?
Sharp readers will notice that we still assume an iteration-based cadence for Agile teams. This may not be the case for pure Kanban teams. However, we’ve made a pragmatic trade-off. Teams that use a pure flow-based approach without iterations are probably so different that they may struggle with other questions as well. Our feedback may also be less applicable to them. Plus, we know that most Kanban teams still use iterations of some kind. Still, we expect that 95% of the questions apply to even the purest-of-pure Kanban teams. Questions that are truly not applicable can be skipped easily. Again, we’ll let the data and your feedback help us understand where we have to ramp up support for pure Kanban teams.
Experimental feature: some limitations apply
Our support for other kinds of Agile teams is brand-new. We will use the coming Sprints to incrementally tailor support. This means that some limitations currently still apply:
- The feedback and terminology may still reflect the Scrum framework. We will use a coming Sprint to tailor feedback to Agile teams, generally.
- The do-it-yourself workshops and quick tips still use Scrum terminology here and there. However, we expect the gist of the content to remain the same. We will update this more fully in the coming Sprints.
- The questionnaire for Agile teams is almost identical to the one we use for Scrum teams. However, we will need to re-validate the questionnaire fully for Agile teams. This requires a lot of data (at least 500 teams), so until then we can’t promise that the Agile team questionnaire is as accurate as the Scrum team questionnaire is. We expect it to be similar, but we need data to confirm that.
- Obviously, we will be changing the name of our platform in the coming period to reflect the broader support for Agile teams.
Other changes this Sprint
We also used this Sprint to launch an increment that resulted from a significant refactor. We won’t bore you with all the technical details, but we removed two layers of legacy code. Aside from a big performance boost in the questionnaire and the reports, it is now also easier for us to make many of the changes we intend to make now that the questionnaire is becoming more flexible.
April 8: Support for Scrum teams that don’t develop software products
“How can I use the Scrum Team Survey with a team that uses Scrum, but doesn’t develop software?”. This week we updated the Scrum Team Survey to address this question. Teams can now specify if they are a software team or not. We adjust the questionnaire accordingly.
Although 95% of the questionnaire remains the same, we remove questions about release automation for non-software teams. We also adjust four other questions so they make sense to non-software teams as well (e.g. “release to production” is changed to “release to stakeholders”).
The type of team can be changed in the Team Dashboard (available to subscribers). We also add a tag to each team to indicate their type. This makes it super simple to filter your results and reports for only “software Scrum teams” or “non-software Scrum teams”.
The changes we made to our platform also allow us to support other Agile teams in the near future. So if your team uses XP, Kanban, or another Agile methodology, it will soon be possible to use our platform to help your teams improve also. And we obviously need to change the name of the platform by then :D
We also used this Sprint to:
- Add over 6 new Quickstart Workshops to the feedback sections of the Scrum Team Survey.
- Resolve a bug in the anonymization procedure that incorrectly hid the results for teams where a sufficient number of team members, but only 2 stakeholders, participated. Thanks to Applaudo for pointing this out.
April 1: Small teams and teams with only a few stakeholders can now also see detailed results
Hi there! This week's update is no joke — fortunately. We improved our anonymization procedures to accommodate smaller teams or teams that are unable to invite 3 or more stakeholders. Up until now, we would suppress the results for groups with fewer than 3 participants (i.e. team members, stakeholders, or supporters). With so few participants, it probably wouldn’t be hard to “reverse-engineer” who gave which answer.
However, many teams pointed out that there is no realistic way for them to invite 3 or more stakeholders, supporters, or even other members of their team. So a lot of potentially useful data would be lost to them, even when everyone was fully on board with sharing their results.
So we changed our platform to ask for an explicit opt-in from participants to share their scores — but not their identity. This allows us to share the results for groups smaller than 3 participants, provided that everyone opted-in to this. Beyond 3 participants, we apply our regular anonymization procedure.
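The resulting disclosure rule can be summarized in a few lines of Python. This is our reading of the rule, not the actual implementation: groups of 3 or more always fall under the regular anonymization procedure, while smaller groups require a unanimous opt-in.

```python
def results_visible(num_participants, opt_ins):
    """Sketch of the disclosure rule: opt_ins is one boolean per
    participant indicating whether they agreed to share their scores."""
    # 3 or more participants: regular anonymization applies, results shown
    if num_participants >= 3:
        return True
    # Fewer than 3: show results only if every participant opted in
    return num_participants > 0 and all(opt_ins)

print(results_visible(2, [True, True]))   # True: everyone opted in
print(results_visible(2, [True, False]))  # False: one opt-out suppresses results
```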
We also addressed several other points this week:
- When you subscribe to the Scrum Team Survey with a discount code that gives a 100% discount — like our patrons — you don’t have to go through the payment screen, and the payment confirmation anymore. So after entering your details and the coupon, your subscription is created immediately. We noticed that our initial solution was a bit unclear.
- We updated ScrumTeamSurvey.org to show some of the logos of the customers we’re supporting. If you work for a company that also uses the Scrum Team Survey, we’d be honored to show your logo too.
- A lot of our Sprint was spent on administrative changes. We’ve moved the operation of the Scrum Team Survey into a separate legal entity. This simplifies taxes and liability and gives us more flexibility. The only thing you’ll notice is a slightly different name on the invoices.
March 25: See where your team is improving from earlier snapshots directly in the team report
This week we’re bringing a lot of cool improvements to the Team Report. This is the report you receive after participating in the questionnaire.
1. See how your team improved since a previous snapshot
We now show the improvements of your team since a previous snapshot directly in the team report (e.g. behind “Stakeholder Concern”). We also show it under “Model”. You can benefit from this information by re-taking the questionnaire every now and then with your team by setting a reminder from the report. When the right time has come, we send you a link to re-take the questionnaire and invite your team to a new snapshot. Subscribers can do this more easily and quickly through their Team Dashboards.
2. Track open, closed, and impeded actions more easily
We also clarified how many actions your team is undertaking to improve. Per factor, we now show the number of open, closed, or impeded actions directly under “Model”. You can also filter on the action status.
3. More guidance on what to focus on in your improvements
We also improved how we present feedback. This was a bit overwhelming earlier. Similar to the Team Dashboard, we now show the three most important “Focus Areas”. From there, you can click on the details of how you can improve. The feedback is also presented in a more structured manner now, with additional workshops and quick tips. Teams without a subscription get actionable feedback on the 3 most important areas, whereas teams with a subscription can see all feedback.
4. See the results by segment (team members, stakeholders, supporters) in the Team Dashboard
Although most of our focus this week is on the team report, we also added a small detail to the Team Dashboard. It is now possible to see the results for a factor by the group of participants; like stakeholders, team members, and supporters. This provides more actionable information as to where gaps exist.
We also addressed other issues:
- For new subscribers, the team dashboard now defaults to showing the 5 core factors in the summary and not all 20+. Thanks, Gowen, for reporting this.
- In the Team Dashboard, when you select “Don’t compare”, we now show the results for the past 12 months without comparing it to a previous time period. Thanks to the team from Applaudo for helping us improve this.
March 18: Evidence-based feedback to build more support in your organization
This week, Barry Overeem and I continued our effort to make it easier for teams to marshal organizational support. This fits very well with the short survey that we released last week, which allows teams to invite their supporters — managers, coaches, and leaders — to evaluate their level of support for a team. Any mismatches between the support that teams experience and what is given to them are great starting points for conversations.
At the same time, we also know how hard it can be to marshal help. So we added evidence-based feedback for six areas: 1) supportive leadership, 2) support for continuous improvement, 3) support for team autonomy, 4) support for responsiveness, 5) support for stakeholder concern, and 6) (understanding of the) benefits of Agile. For each area, we offer our feedback, several do-it-yourself workshops, and a number of “quicktips” to spur your imagination.
We also addressed three bugs that were reported by users:
- The menu bar on the left no longer overlaps at lower vertical resolutions. Thanks to the teams at Omio for reporting this.
- The number of open interventions we report in the team dashboard now respects the team filter when one is set. Thanks to the teams at Applaudo for reporting this.
- We fixed the pagination on the “Actions” page in the team dashboard. It always showed the total number of actions and didn’t respect the filter (e.g. “only open”). Thanks to the teams at Applaudo for reporting this.
We also used this week to lay some groundwork for next week. We’ve vastly improved the reporting functionalities during our work on the Team Dashboard. We now want to update the Team Reports (for the free edition) to also benefit from some of these improvements. This will also make the reporting more consistent.
As always, please let us know what you think. If you run into issues, we’re happy to hear about those and address them in an update.
March 11: Ask managers, leaders, and other supporters to evaluate their support for your team, and discover gaps
We know that support from management is essential to allow Scrum teams to work effectively. We have learned this from our experience with and as part of Scrum teams. And we know this from our on-going research.
So this week we’re launching a feature that allows teams to invite their supporters — leaders, managers, coaches, and other supporters — to evaluate if there is a gap between the support that teams need and what they get.
This new feature works in the same way as with stakeholders: teams can invite as many supporters as they want. When at least 3 participate, we show the average results in the Team Report and the Team Dashboard. We also added a few extra questions to the Team Questionnaire to measure “support received”, so we can offset this with the “support given” as reported by the supporters.
We’ve long been working towards this feature. It's also an experiment. Contrary to platforms with a similar purpose as the Scrum Team Survey, we purposefully don’t ask the supporters to evaluate the effectiveness of teams. This is entirely up to the stakeholders. Instead, we ask supporters to honestly reflect on how and where they support teams. We feel that the conversations that result from gaps between what teams need and what they are given are far more powerful. Obviously, it is also possible for one person to participate both as a stakeholder and a supporter — this is up to teams.
Let us know what you think of our new Supporter Questionnaire. Try it out with your team. You can use it for any team that has participated in the Scrum Team Survey, now and in the past. We show the detailed results both in the Team Report (for all users) and the Team Dashboard (subscribers).
Next week, we will also add detailed evidence-based feedback for the results from the Supporter Questionnaire. We will add do-it-yourself workshops to express needs more clearly and offer quick tips to get started right now.
March 4: Export team-level results for additional analyses
This week, we launched a feature to download team-level results from the Team Dashboard. You can find “Export” under the “Insights”-tab (see below). The export returns a CSV file that contains team-level scores for all teams together, and for individual teams, in a given date range. You can export up to 5 years of data, grouped by 1, 3, 6, or 12 months. We hope that such an export makes it possible to perform additional analyses not provided by our platform, or to import the data into your own analytical tools like PowerBI or Tableau.
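As a sketch of what you could do with such an export outside our platform: the snippet below averages one score column per team with only the Python standard library. The column names (`team`, `responsiveness`, …) and the sample rows are invented for illustration; the actual CSV layout of the export may differ.

```python
import csv
import io
from collections import defaultdict

# Invented sample rows — the real export's column names may differ.
sample = """team,period,responsiveness,stakeholder_concern
Team A,2022-01,72,65
Team A,2022-02,75,68
Team B,2022-01,60,58
"""

def average_per_team(csv_text, score_column):
    """Average one score column per team across all exported periods."""
    scores = defaultdict(list)
    for row in csv.DictReader(io.StringIO(sample if csv_text is None else csv_text)):
        scores[row["team"]].append(float(row[score_column]))
    return {team: sum(vals) / len(vals) for team, vals in scores.items()}

print(average_per_team(sample, "responsiveness"))
# → {'Team A': 73.5, 'Team B': 60.0}
```

The same pattern works for any of the exported score columns, or as a starting point before loading the file into PowerBI or Tableau.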
In addition to this, we also made some minor improvements based on feedback:
- We clarified how we display date periods for data selection. Instead of “Dec, 2022”, we now show the full date (e.g. “December 2”). The year is only visible when the date range falls on a year boundary.
- There was a bug that made all data in the model disappear when you selected “Don’t compare with a previous period”. It will now show the data for the past 30 days.
February 25: A more user-friendly way to invite your team and its stakeholders
This week, we improved the flow to invite your team and its stakeholders. Before, the list of participants for a snapshot did not include the invitations you’d sent out. This made it difficult to follow up with your team.
The reason for this admittedly awkward flow is the mandatory European privacy regulation (GDPR) that we have to (and want to) comply with. It blocks us from saving e-mail addresses and other personal information until we have explicit approval from the person in question. So while it is possible to send an invitation email to an email address, we are not allowed to store that email address.
However, we found a way to still show the open and completed invitations. So the Team Dashboard now shows which invitations you’ve sent as well as their progress:
Note that we show only a strongly obfuscated e-mail address for each invitation, just like for participants who provided their e-mail address. This should provide you with just enough to keep track of who participated and who didn’t, without requiring us to store sensitive personal information — like an e-mail address. In fact, we only store this obfuscated version and nothing else.
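To illustrate the general idea of storing only an obfuscated form, here is a minimal sketch of one way such obfuscation could work. This is NOT our actual scheme — the masking format, the salt, and the truncated hash are all assumptions purely for illustration; the point is that the stored value is recognisable to a human but cannot be reversed into the original address.

```python
import hashlib

def obfuscate_email(address: str) -> str:
    """Illustrative only — not the Scrum Team Survey's real algorithm.
    Keeps the first letter of the local part and the top-level domain
    for recognisability, plus a short salted one-way hash so two
    different addresses rarely collide."""
    local, _, domain = address.partition("@")
    digest = hashlib.sha256(("fixed-salt:" + address.lower()).encode()).hexdigest()[:6]
    tld = domain[domain.rfind("."):]  # e.g. ".com"
    return f"{local[:1]}***@***{tld} ({digest})"

print(obfuscate_email("jane.doe@example.com"))
# e.g. "j***@***.com (a1b2c3)" — the hash suffix depends on the salt
```

Because only this derived string is stored, the original address never needs to be persisted, which is what makes the approach compatible with GDPR's restrictions.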
“But why can’t I just enter the names of the people I invite?” is an obvious question. We considered this. But unfortunately, it would also violate the GDPR guidelines.
In addition to inviting your team and its stakeholders by e-mail, you can also (still) share the invitation link that you find in your team report and in the team dashboard. You can also both send invitations and share the invitation link.
In addition to this, we also addressed the most critical points from the Sprint Review we organized last week (recording available here):
- In the team dashboard, it is now also possible to not compare the results from the current period with any other period.
- In the team dashboard, we now show the names of the teams that are included in the current and the previous period. This gives you a better sense of where the data is originating from.
- In the team dashboard, we now show the “tags” you are filtering on in the header. We also renamed the filter button to “Team filter”.
- In the team dashboard, the totals we report for actions (open, completed, impeded) are now based on all actions. Before, we only calculated these numbers for the current period. But users found this confusing.
- In the team dashboard, we now mark participants in the Snapshot who completed the questionnaire far more quickly than average (below the 5th percentile), since rushing through the questions may lead to inaccurate results.
- The “Community”-tab from the Team Report is now also available from the Team Dashboard. In addition, there is also a link to join our Discord server from there. Feel free to use this yourself, or with your team.
February 17: Group teams in the team dashboard, and filter the results by group
With the Team Dashboard, organizations can support the continuous improvement loop of many Scrum Teams. Until this Sprint, we aggregated the results for all teams under a subscription. But several customers requested the option to also see the results for particular groups of teams — for example, all teams in a value stream, business unit, or by type. This makes the results more meaningful when you have very different groups of Scrum Teams in your organization.
So starting this Sprint, you can assign one or more tags to teams:
The results can then be filtered by selecting one or more tags. This affects everything you see in the dashboard and under the various tabs (e.g. “Tips”, “Trends”, “Insights”).
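Conceptually, this kind of tag-based filtering boils down to a set intersection between a team's tags and the selected tags. The sketch below shows the idea with invented team names and tags (the data shapes are assumptions, not our actual implementation):

```python
# Invented example teams — names and tags are purely illustrative.
teams = [
    {"name": "Apollo",   "tags": {"payments", "value-stream-1"}},
    {"name": "Borealis", "tags": {"value-stream-2"}},
    {"name": "Cassini",  "tags": {"payments"}},
]

def filter_by_tags(teams, selected_tags):
    """Keep teams that carry at least one of the selected tags."""
    return [t for t in teams if t["tags"] & set(selected_tags)]

names = [t["name"] for t in filter_by_tags(teams, {"payments"})]
print(names)  # → ['Apollo', 'Cassini']
```

Because a team can carry multiple tags, the same team can appear in several groups (e.g. both a value stream and a team type) without duplicating any data.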
We also addressed several other issues:
- We no longer automatically clean up Snapshots when nobody participates within a few days.
- We discovered that we accidentally removed the feedback for teams with below-average results on “Refinement”. This is back now.
Coming next Sprints …
- Next Sprint, we will implement a better way to invite members of your team from the Team Dashboard. The current way is somewhat clumsy because the GDPR does not allow us to store e-mail addresses when people haven’t opted in.
- The Sprint after that, we will implement a way to export the snapshot-level results for one or more teams to CSV.
- The Sprint after that is still up in the air, but we will either add a questionnaire that is aimed at management and those who support Scrum Teams, or we will implement a PDF report that can be exported for one or more teams from the Team Dashboard.
February 11: View evidence-based feedback for multiple teams in the Team Dashboard
We are continuing towards our goal of expanding the Team Dashboard and making it the hub for ongoing continuous improvement. This week, we added evidence-based feedback to the Team Dashboard (under “Tips”).
The feedback is similar to what we already presented in the report for individual teams, but now aggregated for all teams under your subscription:
We also took this opportunity to simplify the feedback. Instead of overwhelming you with dozens of tips, we now start with the 3 that we expect to be the most fruitful to invest in. We also more clearly connected the feedback to the statistical model behind the Scrum Team Survey (as seen under “Insights”).
Finally, we also redesigned how we present suggestions for how to improve. Below is an example for the factor “Team Autonomy”. Under “How to improve”, we show our suggestions, a number of quick actions, and a selection of relevant do-it-yourself workshops. We now also offer more detailed suggestions on which other areas to invest in based on the analyses we’ve done on our dataset. Because every team and organization is different, you should always treat these as suggestions — pick what makes the most sense to you. Our hope is that this feedback provides some direction in the complexity of real-life Scrum.
February 4: View improvement actions from multiple teams in the Team Dashboard
Our dream for the Scrum Team Survey is that it acts as the central hub for ongoing and continuous improvements. This is why we encourage teams to define “improvement actions” after they’ve taken the Scrum Team Survey and analyzed the results. Until now, those improvement actions were only accessible from the team reports.
Starting today, you can now also access all actions from all teams under your subscription from the Team Dashboard:
By having all actions in a single place, it is now easier to identify patterns, bring support to teams who need it, and improve coordination between teams.
You can now also jump straight to relevant actions from “Organization Insights”. So if you see that 4 actions have been identified by teams to improve “Stakeholder Collaboration”, you can now jump straight to those actions.
Teams can now also mark actions as “impeded”, which indicates that they need help from outside the team (e.g. coaches and management) to make progress. Impeded actions are highlighted in the overview of the Team Dashboard and under “Insights”:
Coming next Sprints …
- We will extend the Team Dashboard to also provide evidence-based feedback based on the results from all participating teams. This makes it easier to identify broad patterns (e.g. stakeholder collaboration is difficult). We will also offer the most relevant do-it-yourself workshops as recommendations, which can then be added as “actions” again.
January 25: Benchmarks, comparison periods, and more filter options
In the team dashboard for subscribers, we now also show a selected benchmark with a yellow dot. Benchmarks are real-time samples of 100 top teams on a certain characteristic (e.g. experience, effectiveness, responsiveness). You can select a different one under “Settings”. Although we don’t recommend placing too much emphasis on benchmarks — every organization is different after all — they may help in the identification of areas that are in need of support.
If you use the Scrum Team Survey periodically with your teams, you can now also set a different comparison period. For example, you can compare the results of the past 4 months with the 4 months before that. Or 6 months, or 12 months. You can change this with “Settings”.
Under “Filters”, you can now also select whether you want to see all the actions per factor, just the open or completed ones, or none at all. We will soon also add this to the team reports.
With “Settings”, you can also change which factors you want to see in the summary overview of the team dashboard. The default is the six core factors, but you can select all 25+ factors or just a single one, based on where you want to focus.
We also addressed several smaller issues that were brought to us by users:
- The model visualization now hides all results from stakeholders when fewer than 3 have participated, even when more than 3 people have participated in the snapshot.
- The “How to improve” section for “Stakeholder Happiness” was empty when your team scored lower than other teams. But no more!
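The anonymity rule behind the first fix above — stakeholder results stay hidden until at least 3 stakeholders have participated — can be sketched as a simple guard. The function name and data shape are assumptions for illustration, not our actual code:

```python
# Stakeholder averages are only revealed once at least 3 stakeholders
# have responded, so no individual answer can be singled out.
MIN_STAKEHOLDERS = 3

def stakeholder_average(scores):
    """Return the average score, or None while fewer than 3 responses exist."""
    if len(scores) < MIN_STAKEHOLDERS:
        return None  # hide the result to protect individual anonymity
    return sum(scores) / len(scores)

print(stakeholder_average([70, 80]))      # → None (too few responses)
print(stakeholder_average([70, 80, 90]))  # → 80.0
```

The fix was that this check now applies to the stakeholder subset specifically, rather than to the total number of snapshot participants.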
January 21: Discover where teams are improving
We always intended the Scrum Team Survey as something that is part of the continuous improvement process of teams and organizations. Today marks the release of an important feature for that purpose.
In the Team Dashboard — available to subscribers — you can now see how multiple teams are improving on our evidence-based model for Scrum Team effectiveness. This model is very useful when you regularly use the Scrum Team Survey with one or more teams.
From the model, you can tell:
- Which factors are improving, declining, or remaining the same.
- Which teams are more in need of help and support than others, through a breakdown by team.
- How many actions the teams have identified to improve each area. This essentially provides a visualization of the “adaptations” that are underway.
We hope that this visualization makes it easier to see — on the organizational level — patterns in the results. We also hope it makes it easier to both identify improvements and determine if they actually worked. Continuous and evidence-based improvement!
This Sprint, we also updated how we present a summary of results in the Team Dashboard’s overview. Wim Wouters worked with us to come up with a better way to provide a simple overview while also respecting the complexity of the data underneath. What was extra fun is that we shared four different designs with users this week. We picked the favorite and tweaked it a bit further into this:
We also used this week to squash an annoying bug that broke some of the downloads for do-it-yourself workshops. A friendly user pointed this out.
Coming next Sprints …
Since we deliver a new increment every week, this increment does not yet contain all the features and possibilities we intend. So in the coming weeks, we aim to release:
- From the team dashboard, you will be able to browse all the various improvement actions from all teams under your subscription.
- From the team dashboard, you will be able to see broad recommendations for your organization based on the results from all the teams.
- In the model visualization that we delivered this Sprint, you will be able to customize the data selection. You can select different date ranges, selections of factors, show only open/closed actions or select sets of teams.
- In the team dashboard, you can create groupings for teams. So it will be possible to create groups for different value streams, products, or departments. This is particularly helpful when you have many teams in your subscription.
If you have additional ideas or feedback on what we delivered this Sprint, please let us know at firstname.lastname@example.org.
January 14: Organization Insights
Yay! This week, we’re launching our first increment of Organization Insights. You can find it in the Dashboard that is available for subscribers. With Organization Insights, we make it easier to get a sense of how multiple teams are doing. This is great news for our customers with multiple teams. And a good reason to subscribe if you are currently using the free version with multiple teams.
The dashboard now shows the average scores of all teams under your subscription. This is currently limited to the core factors of our Scrum Team Effectiveness model. You’ll be able to add more factors in upcoming increments. The average is based on the past 3 months.
We also want to emphasize that numbers are nice, but there is a whole story behind them. So below the scores you also find the spread of scores. Each dot represents a team. This allows you to support teams that would otherwise be missed. For example, the 68 for Management Support is pretty good. But there is one team that is clearly struggling to find support. The dots allow you to discover those teams.
Coming next Sprints …
This is only the first Sprint that we’ve spent on Organization Insights. We will use the coming Sprints to expand and evolve the following features:
- More options to filter, select and add which scores you want to highlight in the dashboard.
- See the results for all teams, or a selection, projected onto the Scrum Team Effectiveness model we also show in the team reports.
- See patterns in the improvement actions that teams are undertaking, so as to identify where more support is useful.
- Download a PDF with the results for easy sharing across teams.
- Management can also participate in a short questionnaire that aims to measure how well they think they are supporting various teams in each area of our model. We want to highlight where gaps exist between the (perceived) provided and received support.
January 10: Changes to our questionnaire
We’re back from our Christmas and New Year break! To start off the new year, we made a few minor changes to our questionnaire:
- In total, we removed 9 questions that didn’t statistically contribute much to the information already available from other questions.
- We removed the scale “(Lack of) Task Conflict”. Our statistical analyses based on 680 teams showed that this scale was statistically indistinguishable from the scale “(Lack of) Relational Conflict”. This means that when teams score high on one, they score similarly high on the other and vice versa. This is consistent with recent studies. So there’s no value in keeping both around. We renamed “(Lack of) Relational Conflict” to “(Lack of) Team Conflict”.
- We removed the scale “Product Discovery”. Statistically, this scale was so strongly correlated with our existing scale “Stakeholder Collaboration” that there was no value in keeping both. We decided to move two questions from the “Product Discovery” scale to our existing “Stakeholder Collaboration” scale, and removed the third question altogether. Effectively, this means that our scale for “Stakeholder Collaboration” now measures more aspects of how teams collaborate with their stakeholders.
These changes make our questionnaire more precise and accurate. We are currently regenerating all profiles to benefit from this increased accuracy. Because we only remove questions that minimally impact the results, any changes to your team scores are minimal.
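The "statistically indistinguishable" reasoning above rests on correlation: when two scales correlate almost perfectly across teams, one of them adds no information. Below is a minimal sketch of that check using the Pearson correlation coefficient. The team-level scores are invented for illustration and are not our real dataset of 680 teams:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two lists of scale scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented team-level scores, purely for illustration.
task_conflict = [60, 72, 55, 80, 68]
relational_conflict = [62, 70, 57, 78, 69]

print(round(pearson(task_conflict, relational_conflict), 2))  # → 0.99
```

A correlation this close to 1.0 would mean that knowing a team's score on one scale almost fully determines its score on the other, which is the rationale for merging the scales.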
December 24: Introducing our Community
What do you do as a Scrum team when you are utterly and completely stuck? We believe in the power of the community to overcome hard challenges, find inspiration and support.
So this week we launched our “Community”-section to make it easier for Scrum Teams to find help, from us and our broader community. Our aim with the Community section is to:
- Make it easier for all team members to engage with our broader community of Scrum and Agile enthusiasts, and peer team members.
- Enlist our help in case of persistent challenges or tough impediments.
- Connect with local user groups to find further inspiration and motivation. We have 12 at the time of writing, in places such as Spain, Turkey and Mexico. More user groups are starting up at this moment.
- Join community meetups that are aimed to inspire and support, and as a place where you can meet peers.
This is admittedly our first iteration; we have many more ideas and concepts we’d like to develop further. But let’s start small :)
Another milestone we achieved this week is that the new Questionnaire module is now in active use. The decision to switch from the old to the new version for all users was made during a Sprint Review we hosted this week:
Finally, we performed maintenance on our servers and upgraded all remaining services to .NET 5.
Happy Holidays, And See You In The New Year!
Thank you for your continued support in 2021. We had a blast developing the Scrum Team Survey with your feedback. We’ll make it even better next year!
Coming Next Sprint …
We will be taking a two-week break to recharge. Building the Scrum Team Survey is super awesome. And it is also good to recharge after many long weeks. We’ll be back in the second week of January. Our aim then is to start exploring and implementing features that support multiple teams in the same organization. This allows us to start work on a new feature while learning more about what else is valuable for the Community section. So if you have ideas, we’d be happy to hear them.
Christiaan & Barry
December 19: Initial concept for “Community”-page
We used this week to develop the concept for a new feature of the Scrum Team Survey: Community. We strongly believe in the power of communities to get help, find inspiration and offer support. So we want to make it easier for Scrum teams to access resources in a broader community. This may also help Scrum teams that are completely stuck and don’t know how to improve. Our designer, Wim Wouters, developed the first concept this Sprint:
The initial concept evolved during the Sprint, and we will use the next Sprint to actually implement and deliver a working version of it.
Because much of the work this Sprint involved product discovery of what exactly the “Community”-feature should become, we used the time we had left on small nuts and bolts, and some refactoring work:
- We updated all core services from .NET 3.1 to .NET 5.0.
- We greatly improved the testing suite that is responsible for automatically testing the various services that make up the Scrum Team Survey.
- We updated the “research”-section on our website to update you on the most recent iteration of our research project.
- Our research efforts also benefitted from some extra time. We calculated a Gini-Simpson Index to assess the diversity of teams on a number of characteristics. In coming weeks, we will use these indices to test whether or not diverse teams are more effective, and which areas are most important.
- We suffered a 35-minute outage on December 17th, at 03:15 AM (CEST). This was caused by an issue with the storage services at our hosting provider TransIP. Fortunately, the issue was resolved quickly.
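The Gini-Simpson index mentioned in the research bullet above is defined as 1 − Σ pᵢ², where pᵢ is the proportion of each category in a team; 0 means a fully homogeneous team, and values approaching 1 mean high diversity. A minimal sketch (the example team composition is invented for illustration):

```python
from collections import Counter

def gini_simpson(values):
    """Gini-Simpson diversity index: 1 - sum(p_i^2), where p_i is the
    proportion of category i. 0 = homogeneous; closer to 1 = diverse."""
    counts = Counter(values)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Invented example: role diversity within a single team.
roles = ["dev", "dev", "dev", "tester", "designer"]
print(round(gini_simpson(roles), 2))  # → 0.56
```

Equivalently, the index is the probability that two members drawn at random (with replacement) belong to different categories, which is what makes it a natural measure of team diversity.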
Coming Next Sprint …
The goal for next Sprint is to implement the conceptual design for the Community-feature in a first working version. In addition to this goal, we also hope to turn on the new Questionnaire module for all visitors. Whether or not we do this depends on the outcome of our Sprint Review with users. Please send an email to email@example.com if you’d also like to join.
December 11: Alerts and 5 new Do-it-Yourself Workshops
This week, we are releasing “Alerts”. Our aim with this feature is to inform you of important events and reminders without the need for e-mails. Examples of such alerts are new participants, expiring subscriptions or reminders that you’ve set. We currently show relevant Alerts in the reports, and will extend this to the Team Dashboard shortly. Most alerts can be dismissed, although they remain available in the alert archive for a few months. We may also issue Alerts to inform you about platform maintenance, new updates, upcoming Sprint Reviews and community events.
We also created five new do-it-yourself workshops centered around the theme of “Product Discovery”. We know from the data that many teams struggle here, so each workshop provides a tangible way to engage in this important process. Two workshops are free for all visitors, whereas the other three are free for patrons and subscribers, or available for a small price to regular visitors:
- Workshop: Interview Your Stakeholders And Learn What Matters Most
- Workshop: Discover The Needs of Your Stakeholders With UX Fishbowl
- Workshop: Use Ecocycle Planning To Make The State Of Your Product Transparent
- Experiment: Generate Insights And Ideas For Your Product With “Design The Box” (free)
- Experiment: Create An Empathy Map To Articulate Your Customer’s Understanding Of Your Product (free)
If you’re a subscriber to the Scrum Team Survey, you can find the new workshops in any new report (under “Tips”).
Coming Next Sprint …
The coming two Sprints, we will be implementing a range of features to make it easier for teams to find help in the community. This includes the option to join inspirational meetups that are focused on areas of the Scrum Team Survey, the option to engage with experienced mentors and coaches, and curated reading material.
From this week on, it is possible to change your answers to prior questionnaires you participated in. A request can be made through https://questionnaire.scrumteamsurvey.org, and soon also via the report page. This is only possible for participants who entered their e-mail address. The reason is that the e-mail address is the only thing we have to identify who a set of answers belongs to.
The new questionnaire module is still running in parallel with the earlier version. We aim to switch over entirely next week.
Here are some of the other updates:
- We also added the “add as action”-button to the do-it-yourself workshops that are suggested in the team reports.
- The option to unsubscribe from certain notifications is now more fine-grained. We noticed that some people opted out of notifications, and then contacted support because they didn’t get the report by e-mail.
- We are in the process of updating all do-it-yourself workshops to fit the theme and style of the Scrum Team Survey. Earlier, we used the Zombie Scrum theme as this made sense with earlier iterations of our platform.
Today, we released a new and improved interface for our questionnaire. Wim Wouters from Fonkel.io created a new design and user experience for us. Our goal was to make the experience smoother, simpler and friendlier.
We integrated common feedback on the entry process. There is much less text. The interface is more mobile-friendly. It is easier to restart, go back or try a demo questionnaire. The new interface also provides definitions for terminology that isn’t always 100% clear to participants, like ‘Sprint Review’ and ‘Stakeholder’. We also show how many questions are (accidentally?) unanswered.
The new interface is available as a fully-functional beta, alongside the current version. You can pick the one you prefer. You can try the new version here. We will switch over a few weeks from now, provided we don’t run into issues and the feedback is positive. So we’d love to hear what you think.
In addition to the visual changes, we also made changes to our back-end services. Most importantly, our questionnaire module is now far more flexible in the number and kind of questions that are sent to participants. This paves the way to provide shorter versions of the Scrum Team Survey questionnaire. We can also hide certain questions or present entirely different questionnaires (e.g. Agile Team Survey, Kanban Team Survey, and so on).
- MAJOR: In an effort to make the Scrum Team Survey results more actionable, we added “Quick tips” to (for now) the six core factors of our model. This is particularly helpful for teams that need some inspiration on how and where to start and find the do-it-yourself workshops too daunting. If there’s an action you like, you can add it to your “team actions” with the “add as action” link straight away. We will be adding more quick tips soon with help from our community. This feature is part of the free tier for now, but we will eventually limit it to 1 tip as part of the free tier.
- MINOR: We now show the number of uncompleted actions with a red icon in the sidebar.
- REFACTOR: We refactored the report website so that all write operations happen asynchronously. This reduces the load on our API and makes the experience in the report smoother. This also allows us to offload certain operations to specialized microservices.
- MAJOR: We released an experimental new feature today that should answer a common question: “How do all the results tie together?”. We now effectively offer a visual representation of the results as projected on the theoretical model we’ve developed for effective Scrum teams. This research allows us to show how factors are connected in many teams. This information can make it easier to pinpoint where to improve most urgently for your team. We also recognize that any model is limited — including ours — so it should be considered with considerable nuance. We will use the following weeks to work with the community to see if this model helps and what we can do to prevent incorrect use.
- MINOR: The “Support” and “Feedback” buttons in the side navigation are now merged.
- MINOR: All reports now include a “Team Effectiveness Score”. This score represents the degree to which a team is capable of delivering value to stakeholders while maintaining its own team morale (as “internal” stakeholders). We feel that if teams and organizations want to focus on one score, this is the most useful one, as it puts both value and morale at the center of attention. Our model also emphasizes that this score is always the result of a larger system, and thus needs to be considered in that context. This replaces the earlier factor called “Valuable Outcomes”. The difference is that “Team Effectiveness” is a composite score of Team Morale, Stakeholder Happiness as perceived by the team, and Stakeholder Happiness as reported by stakeholders themselves.
- MINOR: It is now possible to invite participants for snapshots straight from the Team Dashboard:
- MINOR: It is now possible to remove snapshots in the Scrum Team Dashboard. This permanently removes the snapshot and all surveys associated with that snapshot — so use with caution.
- MINOR: You can now add a personal note to the start of a survey. The note only shows for team members; not stakeholders. This feature is part of the Scrum Team Dashboard:
- MINOR: We revamped the page in the Scrum Team Dashboard where you can start a new snapshot for a team. You can now invite both team members and stakeholders right away. Or you can do it manually.
- MINOR: It is now possible in the Scrum Team Dashboard to track how many people have completed the survey compared to the total number of participants you expected. This was a common request from users.
- REFACTOR: We implemented a test suite to test the integration of the backend services that make up the Scrum Team Survey.
- BUG: We fixed an issue that sometimes caused notification e-mails to stop for snapshots that were started from the Scrum Team Dashboard. We set the notification e-mail address to the e-mail address of the first participant of a snapshot. In the Scrum Team Dashboard, you can set an e-mail address even before anyone participates. The issue was that this e-mail address was still overwritten by the one provided by the first participant.
- BUG: We fixed an issue where new subscriptions sometimes weren’t correctly registered and started. Stripe — our subscription platform — often “creates” new subscriptions that aren’t active yet (i.e. not paid right away). The Scrum Team Survey refused these new subscriptions because they weren’t “active” in Stripe yet and then refused subsequent updates for that subscription — such as “this existing subscription is now paid”.
- MINOR: You can add a new team straight from the team dashboard without having to go through the entire survey:
- MINOR: Teams can now also be changed from the team dashboard. It is currently possible to change the name and the color, but we will add more properties in the future.
- MINOR: We added a better header to the Team Dashboard. It tells you about recent updates, the number of teams used (and still available in your subscription), and important notifications.
- MINOR: We added a nicer account popup to the team dashboard. This popup has all the commonly used functions available in one popup (password reset, change subscription, sign out).
- MINOR: The team dashboard now shows the badges that the team earned for the most recent snapshot. This was a popular request by many users.
- MINOR: In the team dashboard, you can now see the progress of participants. This simplifies the process of identifying which participants could be removed (e.g. attempts that were started, but not completed). We show the progress bar until the survey is completed:
- MINOR: We also show a quick visual summary of the changes between snapshots. The names of the dimensions are visible when you hover over them.
- MINOR: The Team Dashboard is now styled similarly to the Scrum Team Survey Report. Over the coming weeks, we will be overhauling individual elements and adding new features based on a design by our designer Wim Wouters.
- BUG: We fixed a bug that blocked “new participants” emails after the first. The cause of this bug was one of the layers of protection we implemented to prevent duplicate emails from being sent out. This is now resolved.
- REFACTOR: We’ve greatly improved the response speed of the Scrum Team Survey. With the substantial growth of our database, we’ve changed how we store and load data from the database.
- MAJOR: You can now add more people from your change team (Scrum Masters, team members, coaches, management) to the Team Dashboard. This removes the limitation of having only a single account to access the Team Dashboard. The person holding that account often turns into the “administrator” for the teams, and this is certainly not in line with our principle of giving the teams full autonomy. So it is now possible to share this responsibility. You can add as many accounts as you have teams.
- MINOR: We now show a message when you’ve reached the maximum number of teams for your subscription, with the option to upgrade. The same goes for when your subscription is about to expire (within 30 days).
- BUG: We fixed an issue that caused the menu items in the team report to duplicate.
- MINOR: We clarified in the survey why it is helpful to still enter an e-mail address, even though it is now fully optional.
- MINOR: The field for the team name is now at the end of the survey. We noticed that quite a few participants don’t fill in this field right away and then forget to go back and enter it, so we’re experimenting with this new placement.
- MINOR: We changed the names of the tabs in the survey to correspond to the core factors we also present in the team report.
- MINOR: We updated the notification that the starter of a snapshot receives for new participants. The URL in the notification now leads to the team report.
Although I’m on holiday, I still addressed some issues as they were discovered:
- BUG: We discovered that the activation code was no longer usable after the subscription was changed. The issue was fixed.
- BUG: We fixed the duplicate “How to improve” link that sometimes showed up in the report.
- MINOR: Subscribers can now more easily re-invite teams for new snapshots from the Team Dashboard. Multiple snapshots for a single team can be used for the trend analyses we also offer to subscribers.
- MINOR: E-mail addresses are no longer required in the surveys. This is part of our push to emphasize anonymity. Even though they are no longer required, we still recommend entering one if you 1) intend to invite other members of your team, 2) want to set a reminder at some point, or 3) want to be able to change your responses at a later date. After all, we need an e-mail address to send you a notification or a link to change a survey you completed.
- MINOR: We now show explicitly to all members which questions were already answered by someone else in the team and don’t need to be asked of everyone again. Team Size and Organization Sector are examples of such questions. We also reuse answers across snapshots for the same team. With this update, we show specifically which answers we re-use and offer the option to change them.
- MINOR: The Team Dashboard now allows teams to remove any participant from a snapshot.
- MINOR: When exactly 2 participants have participated in a team, we now also hide the scores for badges. Badges are visible for 1 participant or for 3 or more. This protects anonymity in scenarios where only one other person participated.
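As a rough sketch, the anonymity rule above boils down to a single check (the function name is ours, not from the actual codebase):

```python
def badge_scores_visible(participant_count: int) -> bool:
    """Hypothetical sketch of the badge-visibility rule described above.

    Scores are shown for a single participant (they only see their own
    data) and for three or more, but hidden for exactly two, where one
    person could otherwise deduce the other's answers.
    """
    return participant_count == 1 or participant_count >= 3
```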
- MINOR: It is now possible to share a team report. Many people asked for a report that can be used in Sprint Retrospectives and doesn’t show the personal scores of the participant who is sharing it. We added a “Share” option in the menu bar. Team reports become available when at least 3 members participate, so as to protect anonymity.
- MINOR: Because the number of customers is growing rapidly, we spent this Sprint mostly on automation of administrative tasks. For example, invoices are now imported into our accounting system automatically, and payments are reconciled automatically as well. This doesn’t immediately add value to our users, but it saves us valuable time (and mistakes) that we can now spend elsewhere.
- MINOR: We changed the names of various areas to be more consistent with our research. “Ship it Fast” is now “Responsiveness”, “Build What Stakeholders Need” is now “Stakeholder Concern”, “Improve Continuously” is now “Continuous Improvement”, “Self-Organize” is now “Team Autonomy” and “Quality” is now “Concern for Quality”. The new names more accurately describe what is measured and align with the naming we use in our scientific publications. This has no impact on the profiles or the scores otherwise.
- MINOR: We improved the measurement model for the survey we send to stakeholders based on the data from 460 stakeholders we’ve collected to date. We were able to reduce the stakeholder survey to 12 questions (from 17). The areas that we called “Stakeholder Experience: Responsiveness” and “Stakeholder Experience: Engagement” turned out to be mostly the same in the data, so we combined them into “Stakeholder Experience: Responsiveness”.
- MINOR: We removed the two-question scale “Team Value”. We used this scale to ask teams to rate the value of their own work. Data analysis showed that this scale was heavily biased and not a reliable indicator of actual value delivered to stakeholders. Teams that want to see how much value they are delivering should really ask their stakeholders with the Stakeholder Survey.
- MAJOR: We added the ‘Actions’-section to the Scrum Team Survey. Here, teams can keep track of actionable improvements they intend to take to improve the results. We have big plans for this section. We see it as a great way to drive evidence-based improvements and monitor how they actually improve results over time (or not?). However, we’ll first test if teams are actually interested in tracking improvement actions from the Scrum Team Survey. Experiment started!
- BUG: User account creation failed periodically. This issue happened every 24 hours and immediately pointed to expiring API tokens for Auth0. This baffled us for a while as our platform automatically refreshes tokens. But it turned out that the code responsible for user creation did not use new tokens even when available.
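For the curious, this class of bug is common enough to sketch. A minimal illustration in Python (all names are hypothetical; the actual fix lives in our own services):

```python
import time

class TokenClient:
    """Illustrative sketch of the token bug described above.

    The broken code fetched an API token once and kept returning it, so
    calls began to fail as soon as the token expired (every 24 hours).
    The fix is to check the expiry on every use and refresh when needed.
    """

    def __init__(self, fetch_token, ttl_seconds=24 * 3600):
        self._fetch_token = fetch_token  # e.g. a call to the token endpoint
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        # The buggy version effectively skipped this expiry check and
        # returned the cached token unconditionally after the first fetch.
        if self._token is None or time.time() >= self._expires_at:
            self._token = self._fetch_token()
            self._expires_at = time.time() + self._ttl
        return self._token
```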
- MINOR: From now on, we will no longer add version tags. This made sense when our ecosystem primarily consisted of one core service. But now that our ecosystem is spread out over a dozen services — each with its own version — the use of version numbers in this changelog has become less meaningful.
1.0.422-production-July 19, 2021
- MINOR: When you complete the survey, you are now automatically redirected to the results. Earlier, we e-mailed the link to the profile separately after completion. We implemented this flow because the generation of a profile (calculations, feedback collection) takes some time. However, e-mails are sometimes bounced by spam filters or corporate mail servers that decline external links altogether. When you complete the survey now, you are redirected to a “please wait” page that shows you the profile when generation is done.
- MINOR: We added more regions to the question about cultural background.
- MINOR: Downloadable resources in the feedback can now be downloaded directly. We removed Shopify from the process. Subscribers get all digital downloads for free. Non-subscribers can still access all downloads as well, but the priced downloads (like some of our DIY workshops) point to the Shopify page where you can buy and download them.
- MINOR: We removed the open questions for country and city. We replaced these questions with a new closed question that only inquires about the region — this is sufficient for our research purposes.
1.0.419-production-July 15, 2021
- MAJOR: Today, we launched the new profile for teams. We listened to your feedback and made it simpler, more actionable, and slicker. Wim Wouters from Fonkel.io did a fantastic job. At least, we think so. We’re happy to hear whatever feedback you have for us. This new profile, and the associated style, will be applied to other parts of the Scrum Team Survey soon. The new profile also provides a preview of features we’re working on, such as “alerts” and a way to track improvement actions. All teams — including those that participated before — can now view their results in the new profile.
- MINOR: We’ve lowered the bar for teams to earn certain badges.
1.0.416-production-June 30, 2021
- BUG: We discovered that in some cases there is no e-mail set to notify when new people participate in a team. Normally, this should be the e-mail address of the first participant for that team. We patched missing values and fixed the bug that caused this.
- MAJOR: We implemented secure and more flexible user accounts for the Team Portal. Instead of rolling our own solution, we opted to implement Auth0 to offer a secure and well-tested login process. Auth0 also makes it easier to implement single sign-on and multiple users for larger organizations in the future.
1.0.414-production-June 16, 2021
- REFACTOR: We improved the way in which data is stored to make the survey remarkably faster and more efficient. Previously, we stored all data about participants, snapshots, and teams in a single table. That worked well for a while, but the size of the database meant that queries started taking longer and longer. The structure in the database now follows the domain more clearly, which also reduces confusion and mistakes.
- MINOR: We’ve changed the styling of the Liberators Portal and the Team Dashboard to be consistent with the new homepage. The mobile experience is also better (though not perfect yet).
- ISSUE: We are investigating a potential memory leak in our API.
1.0.408-production-June 12, 2021
- MAJOR: We launched a new homepage for the Scrum Team Survey. Our new website more clearly explains what the survey is and how it works.
- MAJOR: We implemented Stripe to offer a more user-friendly way to let people buy a subscription. The previous process relied on our webshop and was — admittedly — clumsy and manual. It did allow us to test whether or not there was sufficient appetite for a subscriber model though.
1.0.398-production-May 18, 2021
- MINOR: We added a three-item scale for “Product Discovery”. This scale taps into the ability of teams to proactively discover the needs of stakeholders. It falls under the theme “Build What Stakeholders Need”.
- MINOR: We added a few more questions to existing scales to improve their ability to measure the right variables. The scale for Sprint Review Quality now includes questions about the usefulness of this event (thanks to a suggestion by Dave West from Scrum.org). The scale for Quality now also connects more strongly to the “Definition of Done”.
- MINOR: We improved the descriptions for the core factors (e.g. Ship It Fast, Improve Continuously). Aside from explaining why they are important, we now also explain what we measure to determine their scores.
- MINOR: We removed the ‘Short’ version of the survey. Over time, the short and long versions moved closer to each other in terms of the number of questions. We also observed that few teams actually used the ‘Short’ version in the first place. And finally, the ‘Short’ survey doesn’t include questions about valuable outcomes, which is one of the most powerful sources of feedback we can give teams.
- MINOR: When you’re starting a new survey, the platform now checks if you’re part of the team or not. If not, the survey gives you advice on how to invite teams to do it themselves. Over time, we’ve noticed that people outside teams (e.g. coaches, management) start surveys for teams. While this isn’t necessarily an issue, we really want Scrum teams to stay in control over their own continuous improvement. Even when it is done for the best of intentions, starting a survey for another group of people takes away some of their ownership.
1.0.387-production-May 14, 2021
- MAJOR: Sometimes, we need to make a big change that has zero impact on your use of the Scrum Team Survey. We refactored our codebase this week to reflect the hierarchical structure of our data: individual participants belong to snapshots, snapshots to teams, and teams to organizations. We had been using a temporary solution to virtually restructure the data to make it seem hierarchical, but that solution was confusing development and causing issues we wanted to avoid.
1.0.380-production-May 5, 2021
- MAJOR: We migrated the survey and associated sites from survey.zombiescrum.org to scrumteamsurvey.org. URLs still pointing to the old domain will continue to work. This change is in response to feedback from several Scrum teams who worried that “Zombie Scrum” would be off-putting to their members or stakeholders.
1.0.370-production-April 30, 2021
- MINOR: Based on popular request, we’ve re-added the markers for the average score of a team, even when you’ve toggled the option to see the range of scores.
- MINOR: Subscribers now receive an email 7 days before their subscription expires. This is a good opportunity to renew a subscription. We’ll automate this process eventually, with automated renewals, when there are enough subscribers to warrant the investment (which is quite high).
- BUG: The Liberator Portal used to show snapshots with 0 participants every now and then. This was resolved.
- BUG: We resolved an issue where some subscribers couldn’t log in to the dashboard since yesterday. It turned out that one of the data-migrations didn’t update one of the columns correctly (it did locally, not on production). Fortunately, we were able to resolve it quickly.
1.0.367-production-April 29, 2021
- MAJOR: We launched our subscription model this week. As a result, we changed the language from “boost codes” to “subscriptions” and “activation codes” in the interfaces. You can continue using the boost codes if you purchased those before the switch. All boost codes have been automatically upgraded to the Liberator-tier with a limit of 5 teams.
- MINOR: Following the change from boost codes to subscriptions, we added logic to our backend to track subscriptions.
- MINOR: We added a status page to reflect the maturity of the infrastructure and the platform. We did discover that we didn’t have health checks yet for the survey itself and for the portal, so those were activated too.
1.0.338-production-April 21, 2021
- MAJOR: The portal for Liberators now allows teams to track how their scores change over time, provided that they also take the survey every now and then. We implemented the first version of this feature (in 3 days) and will iterate on it and expand it further in coming Sprints — also based on your feedback. A boost code from our webshop is required to unlock this feature.
- MINOR: From the Liberator Portal, it is now easier to retake a survey with your team and effectively add a new “snapshot”.
- MINOR: When you retake the survey with your team, and you already have a boost code for your team, this code is now automatically applied to the new snapshot as well (previously, the new snapshot had to be boosted separately).
- MINOR: We added a support page to the Liberators Portal to make it easier for users to reach us with issues (and a great way for us to learn about errors and inconveniences).
- TECHNICAL: The Liberators Portal now has built-in “usage counting” to allow us to learn more about how you use our application. We don’t use third-party software for this. The only thing we count is how often a button is clicked (once, twice, three times, etc.) by any user across the application — so it’s not tied to any person, session or login.
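Conceptually, the counting is no more than an anonymous tally keyed by a button identifier. A minimal sketch (class and method names are our own, not the actual implementation):

```python
from collections import Counter

class UsageCounter:
    """Sketch of the anonymous usage counting described above.

    We only record *that* a button was clicked, never *who* clicked it:
    the counter is keyed by a button identifier alone, with no user,
    session, or login attached.
    """

    def __init__(self):
        self._clicks = Counter()

    def record_click(self, button_id: str) -> None:
        self._clicks[button_id] += 1

    def count(self, button_id: str) -> int:
        return self._clicks[button_id]
```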
- MINOR: The process of retaking a survey is now better supported by the Scrum Team Survey. Instead of the regular start page for a new survey, we now offer a special start page for teams that are retaking the survey.
1.0.338-production-April 19, 2021
- BUG: The portal now displays the correct number of participants per snapshot. It used to include participants that didn’t provide any answers and would have been cleaned up after 14 days anyway.
- BUG: The portal now displays the correct number of snapshots per team. The earlier version included snapshots from un-boosted teams in the calculation. This is a rare condition, but it was an easy fix :)
- REFACTOR: The codebase for the Liberators Portal now performs all its integration tests (API and UI) on the Alpine-based Docker image that is built as part of the CI/CD flow, which brings the tests as closely as possible to the production environment.
- REFACTOR: We’ve started refactoring the codebase to an emerging domain-driven model where data is a hierarchy of organizations > teams > snapshots > participants. Before, we abstracted everything into “Responses”, but that was increasingly causing confusion and potential for bugs. Because refactoring is done with a Strangler-like pattern, we can continue deploying without issue.
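To illustrate the target model (field names are hypothetical, not our actual schema), the hierarchy can be sketched with a few plain data classes:

```python
from dataclasses import dataclass, field

# Minimal sketch of the domain hierarchy described above: organizations
# contain teams, teams contain snapshots, and snapshots contain the
# individual participants. Previously all of this was flattened into
# generic "Responses".

@dataclass
class Participant:
    email: str
    completed: bool = False

@dataclass
class Snapshot:
    started_on: str
    participants: list = field(default_factory=list)

@dataclass
class Team:
    name: str
    snapshots: list = field(default_factory=list)

@dataclass
class Organization:
    name: str
    teams: list = field(default_factory=list)
```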
1.0.334-production-April 9, 2021
- MAJOR: Today, we released a much-requested feature to “manage” your teams from a single dashboard, which we’ve dubbed the Liberators Portal. It is only accessible to licensed teams that have been boosted with a boost code from our webshop. Your dashboard shows all teams that used the same boost code and allows you to start new snapshots (surveys) for a team, add new teams, or remove responses from surveys (e.g. because they were tests or mistakes). In the coming weeks, we will expand this portal with more features, as well as general improvements to the user experience.
- BUG: Thanks to a report from a Scrum Master, we discovered that one of the questions in a new version of the “Release Automation” scale was accidentally reverse-scored. We corrected this issue and regenerated all reports.
1.0.329-production-April 1, 2021
- MINOR: Whenever you boost a team, every member in that team now receives a nice email to explain the various benefits and premium features that are now enabled for them.
1.0.328-production-March 30, 2021
- MINOR: We’ve added a simple feedback form to collect feedback in a more structured format. The form also includes upcoming ideas for features.
- MINOR: We noticed that some design elements started going in different directions, and we harmonized those again for consistency.
1.0.317-production-March 30, 2021
- MINOR: In an effort to reduce the “drop-off” rate for participants, we’ve analyzed at what points in the survey this usually happens. We changed the order of the tabs and removed some unnecessary questions.
1.0.315-production-March 29, 2021
- MINOR: It is now possible to skip questions in the survey if they are not relevant to your situation. Although this was technically already possible by simply not providing an answer, this wasn’t clear in the interface. So we added an explicit “doesn’t apply” option to questions where it is relevant.
- MINOR: Streamlined the messages that sometimes appear in the profile to indicate that few people participated, or that you need to invite stakeholders.
- MINOR: We added a progress bar to the survey, which makes it easier to see how much you have left to enter.
- BUG: On the homepage, the legend showed the wrong color for teams (pink) where it should have been blue.
- BUG: It is now possible again to resume a survey and see your earlier responses. Thanks to a bug report, we discovered that the reminder email sent people to a fresh survey.
1.0.314-production-March 28, 2021
- MINOR: In our analyses, we noticed that some participants are inclined to give overly optimistic responses to some questions. This can bias and inflate the results for some teams and take away a valuable opportunity for reflection. So we implemented a mechanism to reduce this bias in the reporting. For this, we used three items from the SDRS-5 scale for social desirability to calculate regression coefficients for all the individual items with a structural equation model in AMOS. We then use these coefficients to “partial out” the part of a score on each individual item that is linked to a respondent’s score on social desirability. We apply this correction only to participants who score at least one standard deviation above the population average for social desirability, and the correction strengthens the larger the deviation is (to a maximum of -1.35 points).
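For readers who want the mechanics, here is a simplified sketch of the correction. The function name, parameters, and exact arithmetic are our illustrative reading of the entry above, not the production formula:

```python
def corrected_score(raw_score, coefficient, respondent_sd,
                    population_mean_sd, population_stdev_sd,
                    max_correction=1.35):
    """Illustrative sketch of the social-desirability correction.

    For respondents scoring more than one standard deviation above the
    population average on social desirability, we partial out the item's
    regression-estimated share of that excess, capped at 1.35 points.
    All names and the exact arithmetic are hypothetical.
    """
    threshold = population_mean_sd + population_stdev_sd
    if respondent_sd <= threshold:
        return raw_score  # within one standard deviation: no correction
    excess = respondent_sd - threshold
    correction = min(coefficient * excess, max_correction)
    return raw_score - correction
```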
1.0.310-production-March 25, 2021
- MAJOR: Teams can now specify the “benchmark” (for lack of a better word) they’d like to be compared with. Normally, teams are compared with a representative sample of other teams. But we’ve now added the option to select more specific ones, like “experienced teams”, “teams that ship fast” and “teams that deliver a lot of value”. This option is available for teams that have a boost code.
1.0.309-production-March 23, 2021
- MAJOR: Teams can now toggle between the range of scores in their team or the average scores (see below). This is a popular request, as it allows teams to see where they (dis)agree the most. This feature is available — with other benefits — when you purchase a boost code for your team for 1, 6 or 12 months (at €10/month) in our webshop. We hope that features like these can generate some revenue to fund further development of the survey.
- REFACTOR: Because we’re adding new features, and thus more complexity, we also removed some statistical calculations from the backend that we weren’t using anyway.
1.0.297-production-March 18, 2021
- MINOR: We’ve begun testing a model that might help us generate some revenue to fund further development of this platform. Teams can now purchase a “Boost Code” to gain free access to otherwise paid content (like some of the DIY workshops that are recommended in the profile) and other features (still in development).
1.0.287-production-March 12, 2021
- REFACTOR: In order to optimize for performance, we used to store scores for respondents, samples and the entire population in the same MySQL database as the survey. However, as the number of teams increased, this made the database swell to hundreds of megabytes, slowed down performance and made it difficult to back up quickly. We moved this cache to a separate Redis store. An added benefit is that this also fixed a few performance-related bugs that sometimes caused a team to report one fewer respondent than actually participated.
- MINOR: We’ve changed the range for what constitutes an “average” score to the 15th–85th percentile (previously the 25th–75th). Now that we have a better sense of the spread of scores in the population of Scrum teams, we feel that our initial range too easily resulted in feedback that was either too positive or too negative. We’ll continue monitoring the range.
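To make the banding concrete, here is a small sketch (our own helper, using a nearest-rank percentile; the production system may compute percentiles differently):

```python
def score_band(score, population_scores, low_pct=15, high_pct=85):
    """Sketch of the feedback banding described above.

    Scores between the 15th and 85th percentile of the population count
    as "average"; anything outside those bounds is flagged as below or
    above average. The previous bounds were the 25th and 75th percentile.
    """
    ranked = sorted(population_scores)

    def percentile(p):
        # Nearest-rank lookup; a real implementation may interpolate.
        index = max(0, min(len(ranked) - 1, round(p / 100 * (len(ranked) - 1))))
        return ranked[index]

    if score < percentile(low_pct):
        return "below average"
    if score > percentile(high_pct):
        return "above average"
    return "average"
```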
1.0.284-production-March 8, 2021
- MAJOR: We simplified the profile by combining the tabs for “Recommendations” and “Our feedback” in the profile. Now, our feedback and improvements are bundled for each topic, which is far more user friendly.
- MINOR: An observant reader noticed that the feedback under “How to Improve” for “Ship It Fast” and “Build What Stakeholders Need” was the same. We fixed this by adding the correct copy.
- BUG: If you didn’t invite any stakeholders, the profile would still give you feedback based on the topics we measure for stakeholders. With a score of 0 for those scales, this was rather pointless, so this feedback is no longer shown in that case.
- MINOR: Under “How to improve”, the resources that we now offer are all based on actionable content (mostly DIY workshops).
- MINOR: Because the survey now recommends some paid Do-It-Yourself Workshops, we made it clear which content is free and for which we ask a small price.
1.0.278-production-January 24, 2021
- MINOR: We improved the error handling in the survey. In some rare situations, people got stuck when they didn’t enter a country and skipped all the way to the end of the survey. The validation message was only shown on the first page. Now it is shown on the final page too.
- MINOR: We improved the profile by showing an informative icon when fewer than 3 stakeholders participate. In this case, no averages are shown for stakeholders. This was a bit confusing for some, so we added an extra icon and message to explain why.
- MINOR: Based on feedback, we added the invitation link for stakeholders to the emails as well.
1.0.274-production-January 20, 2021
- MINOR: We added more feedback and recommendations where possible. We also adjusted two badges to include scores from stakeholders. First, there is “Unleash The Stakeholders”, which is acquired when many stakeholders participate in your team. Then there’s “Customer Love”, which you receive when stakeholders are happy with your work — let’s see if you can get them :)
- MINOR: A loading icon is now shown in the profile when it is still being generated.
1.0.273-production-January 19, 2021
- MINOR: In your profile, you can now hide your personal scores. This was a recurring request from people who’d like to share the profile with their team on a shared screen without disclosing their personal answers.
- MINOR: The topic “Sprint Goals” is now under “Build What Stakeholders Need”, where it also loads most strongly in our statistical model.
- BUG: We discovered that “Psychological Safety” and “Cross-functionality” were accidentally hidden in the profile. Both are visible again.
- MINOR: We frequently regenerate all profiles based on new insights and an updated measurement model. In some cases, we include new topics that were not measured before. So we now add a message to your profile when we regenerate it and when it is older than 6 months to warn you that you may not be fully benefitting from updates to our model.
- MINOR: The homepage now shows the scores of stakeholders and teams. At the moment of writing, there are no stakeholders in the database yet. But we hope this will soon change.
1.0.268-production-January 18, 2021
- MAJOR: Today marks the release of a feature that has long been on our minds. In an effort to start more powerful conversations about things that matter, teams can now invite stakeholders to offer their perspective on what the team generates for them. This is a great way to validate the perspective from the team against those of actual stakeholders, like users and customers. Stakeholders participate with a shorter survey, and the results are compiled into the team’s profile (provided at least 2 stakeholders participate to protect anonymity). Because we want Scrum Teams to remain in control over who receives a profile, stakeholders do not receive a profile after completion. It is up to teams to decide what they want to share.
- MINOR: In your profile, you can now see how many people entered answers for each topic (people might skip some). Note that we only show the aggregated scores of others if more than 2 people participated.
- MINOR: Based on usage metrics, we’ve optimized the profile. Results are now shown first. Then, the team and stakeholders can be invited.
- MINOR: Several topics have been moved to other domains based on their factorial loadings from our ongoing psychometric validation. For example, “Quality” was moved from the domain “Ship It Fast” to “Continuous Improvement”. The reason for this is that Quality correlates much more strongly with topics under “Continuous Improvement” than with topics under “Ship It Fast”. The net result of this change is that the average scores for the various domains are now even more accurate.
1.0.239-production-January 6, 2021
- MAJOR: Based on the data we’ve collected to date, we’ve re-analyzed and improved the survey to make it more reliable, shorter and more accurate. We were able to remove 43% of the questions without affecting reliability. The fit of our measurement model has improved significantly, and well beyond required thresholds (CFI = .967, RMSEA = .030, GFI = .932). More importantly, we were now able to test our proposed four-factor model, which seems to fit the data really well (and better than simpler or more complex models) — although more thorough analyses are necessary to validate these preliminary findings.
1.0.227-production-December 16, 2020
- IMPROVEMENT: We improved the user experience of the profile by hiding additional resources behind a click. Some users found the sheer number of helpful resources overwhelming.
- IMPROVEMENT: We implemented a very simple usage counter to count how many times certain features are used. This allows us to A/B-test new features and improvements. We don’t store any information about the visitor who used the action (like IP, user-agent).
- TECHNICAL: We moved all logic related to content (blogposts, podcasts) and their recommendations to a separate microservice that is easier to deploy, easier to test and simpler to modify. This is part of an on-going process to refactor a somewhat monolithic back-end API into smaller services that communicate through a message queue.
1.0.205-production-December 9, 2020
- IMPROVEMENT: We updated the homepage to make it more accessible to Scrum Teams from all sorts of organizations. From feedback, we learned that the “Zombie Scrum”-metaphor can be misunderstood or make people wonder about how objective the survey itself is (it is). Because we feel more strongly about helping Scrum Teams improve than the merits of the metaphor itself, we decided to de-emphasize “Zombie Scrum”.
1.0.201-production-December 4, 2020
- IMPROVEMENT: We vastly increased the relevance and usefulness of the resources that are shown in your profile. Instead of a manual selection — which always becomes stale quickly — we now draw from our growing catalogue of content.
1.0.194-production — November 26, 2020
- MAJOR: In the profile for teams, we split the recommendations we make across the “So What” and “Now What”-tabs. We hope this makes it easier to purposefully first make sense of the results, and then consider actionable improvements. While working on this, we also updated the texts to match the Scrum Guide 2020 (for the profile) and added new resources.
- BUG: Fixed a minor incorrect link for the various user groups we mention under ‘Find Help’. We want to make sure you find the right group :)
1.0.188-production — November 25, 2020
- MAJOR: With the Zombie Scrum Survey, we want to support teams all over the world in their continuous improvement loop. In this intermediate release, we took the first step towards this by greatly extending the profile you receive with additional recommendations and suggestions for how to make sense of the results with your team, identify next steps and evaluate your progress.
- IMPROVEMENT: We added some of the artwork that Thea Schukken created for our book to the website.
- BUG: Fixed a bug that caused the scale markers to sometimes pop over the dialog boxes.
1.0.182-production — November 24, 2020
- MINOR: In your profile, you can now set a reminder to retake the survey in the future. You can select how far into the future you want to receive this reminder. This is a great way to include the survey in your continuous improvement loop and determine to what extent your adaptations actually worked. At some point in the near future, we want to implement a feature to actually compare your scores over time. For now, you can do this manually by printing the profile for each run you do with your team.
- IMPROVEMENT: We renamed the ‘master’-branch to ‘production’.
1.0.170-master — November 13, 2020
- MINOR: Now that we have published our book, we updated the homepage to reflect this.
- IMPROVEMENT: We added significantly more feedback to the profiles, including more links to relevant materials. We also included references to specific experiments from our book, so that the survey now helps you to find the experiments that are likely to help you.
1.0.144-master — June 2, 2020
- MAJOR: We significantly overhauled the survey to improve the quality based on the data we’ve collected to date. This included the removal of items that (statistically) didn’t seem to matter to the bigger picture. We also included new items and scales that make sense from scientific literature. We have retroactively updated all previously generated profiles. Because profiles generated before the date of release don’t include the new items, some scales are missing in old reports. You can simply do a new run to get a complete profile based on the new version.
- IMPROVEMENT: We’ve added more new feedback rules that align with the findings of our research. So the new rules are less based on our opinion and experience, and more on the data we’ve collected to date.
- IMPROVEMENT: In what was a mix of a bug and an improvement, we made it easier for teams to earn badges. In the previous iteration, a badge was only awarded to a team if every participant earned it, which put the threshold way too high. Now, a majority of participants has to earn a badge for the team to receive it. We’ve updated all profiles.
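The new rule is simple enough to sketch (the function name and data shape are ours, not the actual code):

```python
def team_earns_badge(individual_results):
    """Sketch of the relaxed badge rule described above.

    `individual_results` is a list of booleans, one per participant,
    indicating whether that participant earned the badge. Previously
    every participant had to earn it; now a strict majority suffices.
    """
    if not individual_results:
        return False
    return sum(individual_results) > len(individual_results) / 2
```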
- IMPROVEMENT: We’ve added a donation option to some of the e-mails and when completing a survey. Developing this free tool costs us about 30 euro a month for hosting and roughly 1,500 euro a month to maintain and develop. So we hope you’re willing to support us.
- IMPROVEMENT: Incomplete surveys are now removed after remaining untouched for 14 days. This saves storage.
- TECHNICAL: We’ve optimized performance of the application in various locations, mostly due to the large dataset that we now have.
- TECHNICAL: We’ve upgraded to .NET Core 3.0 and added health checks to the site and the services behind it.
- TECHNICAL: We migrated e-mail templates to SendGrid.
- TECHNICAL: Automated integration tests are now run on AppVeyor instead of through a complicated two-step process on Octopus Deploy and AppVeyor. The new process is much safer, more reliable and faster.
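The majority rule for team badges described above can be sketched as follows. This is an illustrative sketch, not the actual implementation; the function name and data shapes are made up for the example:

```python
from collections import Counter

def team_badges(participant_badges):
    """Award a badge to the team when a strict majority (> 50%)
    of participants earned it individually (illustrative sketch)."""
    counts = Counter(b for badges in participant_badges for b in set(badges))
    threshold = len(participant_badges) / 2
    return {badge for badge, n in counts.items() if n > threshold}

# 2 of 3 participants earned "Responsive"; only 1 earned "Concluding".
print(team_badges([{"Responsive"}, {"Responsive", "Concluding"}, set()]))
# → {'Responsive'}
```

Under the old all-or-nothing rule, a single participant missing a badge would block it for the whole team; a majority threshold avoids that.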
1.0.125-master — May 19, 2020
- IMPROVEMENT: We added a question at the start of the survey to check how a respondent will be participating. We’ve noticed that many give the survey a try with fake data first, before participating again with real data. When we know whether this is the case, we can filter out these responses when doing analyses or when calculating population averages.
1.0.123-master — January 17, 2020
- IMPROVEMENT: In order to further protect the privacy of participants, we removed Google Analytics from the website, as well as any third-party code that sets cookies or tracks users. Our website now sets a single functional cookie when you start a survey, and only to allow you to continue the survey later should you close your browser;
1.0.121-master — September 30, 2019
- BUG: A person who started a survey for a team discovered that notifications about new participants were not sent to him, but to the new participants instead. We fixed this bug;
- DISCOVERY: A new — and in hindsight obvious — feature emerged when a user noted that the team report would be more useful if it only showed the scores of the team and the total population, not their own individual score. When discussing the results with the team, this keeps the focus on the team’s scores rather than on the results of the person who printed their profile for the team;
1.0.119 — September 27, 2019
- BUG: In your profile, when more people from your team participated, the breakdown would sometimes show the wrong scores for the team (essentially repeating the same set of 5 scores);
- BUG: The profile would show dimensions that you (or other members) did not answer any questions for, such as ‘Valuable Outcomes’, which is only measured in the extended survey. Dimensions without any scores for you and your team are now hidden;
- BUG: The profile would fail to show any results for users who did not enter any responses on the survey themselves, but did invite a team to do so. We fixed this bug so that the profile now shows team scores;
- DISCOVERY: We discovered a use case where users start surveys for teams without filling in the initial survey. To support this use case, we would have to implement a feature where people can start surveys and invite people without actually going through the survey. Although this use case is understandable, we are hesitant to support it as we want to encourage teams to use this survey themselves, not because others want them to;
- IMPROVEMENT: Scores for teams are now based on medians instead of averages. Averages work fine for larger groups, but they are sensitive to individuals with extreme scores; medians are more stable in smaller groups and in the presence of extreme scores. All existing profiles have been updated;
- IMPROVEMENT: We now show team scores when 3 or more people have participated. The initial threshold of 5 was a good start, but too high. We feel that anonymity is still protected when scores are only shown for 3 or more participants;
- TECHNICAL: We use a message queue for notifying services when new surveys are completed. One of these services accepted messages even though it failed to process them correctly. That service now throws an exception, leaving the message in the queue for later pickup;
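The switch from averages to medians is easy to illustrate: a single participant with an extreme score drags the mean noticeably, while the median barely moves. A minimal sketch with made-up scores on a 1–7 scale:

```python
from statistics import mean, median

# Four closely aligned scores and one extreme outlier (illustrative data).
scores = [5.2, 5.4, 5.6, 5.8, 1.0]

print(round(mean(scores), 2))   # 4.6 — pulled down by the outlier
print(median(scores))           # 5.4 — a more stable team score
```

With small teams of 3–5 participants, a single outlier like this is common, which is why the median gives a fairer picture of the team as a whole.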
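The message-queue fix can be sketched generically: a consumer that swallows exceptions effectively acknowledges messages it failed to process, whereas re-raising leaves the message in the queue for redelivery. The handler below is a hypothetical stand-in, not the actual service code:

```python
class ProcessingError(Exception):
    """Raised so the broker does not acknowledge the message."""

def handle_survey_completed(message, process):
    # Before the fix: exceptions were caught and the message was
    # acknowledged anyway, silently dropping it.
    # After the fix: the exception propagates, so the message stays
    # in the queue for a later pickup.
    try:
        process(message)
    except Exception as exc:
        raise ProcessingError(f"failed to process {message!r}") from exc
    return "ack"
```

Most brokers follow this pattern: a message is only removed from the queue once the consumer acknowledges it, so failing loudly is what keeps the message safe.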