Digital Service Units Maturity Model: Facilitating Learning Across and Within Teams

Lauren Lombardo
10 min read · Aug 11, 2020


By David Eaves with Lauren Lombardo

This is a cross-posting of an article that I co-wrote for the Digital HKS 2019 State of Digital Transformation report.

In 2018, working with Ben McGuire, a student here at Harvard Kennedy School, we released a maturity model designed to contextualize the progress of public-sector digital service units. The model, based on feedback from participants at our annual digital services convening and conversations with practitioners from around the world, attempts to enable people to assess a government’s capability to deliver digital services by looking at progress across six capacities: political environment, institutional capacity, delivery capability, skills and hiring, user-centered design, and cross-government platforms.

Two exciting things have happened since the model was released. First, we’ve learned a lot about what makes the maturity model interesting and useful to practitioners and students. Our goal in creating the model was to help digital-services teams talk to and learn from one another. So we sought to spot patterns in the capabilities that digital-services teams doing similar work have had to develop over time. This has been useful for several reasons. First, it provides an imperfect but helpful roadmap teams can use to see how their work will evolve. Second, it has proven helpful in getting teams to debate strategy and theory of change. Digital-services teams around the world have different goals and divergent paths for achieving them. This tool gives us a window into their actions and lets us infer their strategies. Finally, and perhaps most important, it provides a shared vocabulary that helps spark helpful conversations within and across teams.

The second exciting outcome has been that individuals from digital-services teams around the world completed our online assessment tool, providing us with self-reported benchmark data. Throughout the year, when speaking about the model at conferences or using it in my degree program and executive education courses, I’ve been able to gather even more data from representatives of the digital-services teams in attendance. As a result, we have a healthy amount of self-assessed benchmark data from digital-services teams around the world.

In this chapter we share this data and some hypotheses we’ve generated about what it tells us, and also outline how the maturity model has been used to elicit conversation within and across digital-services teams in ways that they have found useful.

Understanding the Data

Self-Reported Strengths and Weaknesses of Digital-Services Teams, Reported on a Scale of 0 to 3, with 3 Indicating the Most Robust Development. Image by Ben McGuire.

The data show that on average, digital-services teams have spent most of their time building political capital, developing executive sponsors, and building out their mission statement. Strikingly, almost half of the teams reported a “high” or “future state” ability in these three areas.

This isn’t to say that teams aren’t focusing on other goals as well. Respondents reported progress across the board, and the individual data we collected suggest a diversity of approaches. In eleven other areas, more than 20 percent of teams report “high” or “future state” abilities. These results span a wide range of topics and include creating public registries of data and building product-management competencies.

While we see that teams have made progress elsewhere, their accomplishments in other categories don’t compare to the levels reported for building political capital, developing executive sponsors, and building out a mission statement.

It is encouraging that divergent approaches are forming, but regardless of where else teams have prioritized, almost everyone has included these three areas in their action plan. This doesn’t come as much of a surprise. As more and more digital-services teams are created, a plausible but somewhat optimistic hypothesis is that they are now growing the way normal government organizations do: by building out the fundamentals.

The data suggest that digital-services teams have prioritized creating a solid foundation for the work ahead. And by focusing on these three areas they have cultivated the political and structural capital they’ll need to support future projects. This foundation is important, and it was undoubtedly very difficult to gather the support needed to report this level of success. By any measure, this work should be considered an accomplishment.

However, the emphasis on building capacity also shows that for many teams, the most difficult work by far is yet to be done. Some of the most basic goals, such as prioritizing user needs and improving the user-feedback cycle, sit relatively low, with 14 percent in the “high” range and 17 percent in the “future state” range. This is to say nothing of more ambitious and demanding benchmarks, such as creating shared platforms (and their governance), which has the lowest share of respondents, just 8 percent, showing significant progress.

It’s clear from reviewing the categories with lower scores that the next battles will be hard-won. There are also risks; for example, the teams with the strongest foundations and most robust political support may be pressured to show value quickly. The data show that on the whole, teams have yet to spend the same amount of energy on figuring out how they will move digital-government initiatives forward. And so the next few years will be pivotal to gaining credibility and trust. Either way, political sponsors will expect to see high returns on their investment, such as tangibly improved online services rather than simply better access to tools, and high-level guidelines for other departments to follow.

Theory of Change

Again, though the aggregated data shows this trend, it doesn’t mean that teams aren’t developing their own theories of change. We don’t see the same number of teams reporting “future state” capabilities in some of the more challenging areas, but they are starting to make the investment. So while it might be concerning that every team seems to be focused on the same three areas even though it’s unlikely they all need to be, these concerns are offset by the fact that teams are also focusing on a fourth or fifth area outside the general consensus.

This is encouraging. We are still early enough in the game of digital transformation in government to say that there is no clear and dominant strategy, so experimentation and divergence are probably our allies at the moment. And while political and structural capital may be a necessary precondition for all other activities, it is not an end in itself.

By focusing only on building capacity, teams would be forgoing approaches we have seen in other jurisdictions, including starting with existing platforms, as India and Bangladesh have done, and prioritizing user perspective and design patterns, as in the U.K. And while starting with the fundamentals might be the best approach for some teams (Italy), there are many ways to build a digital-services team, and each team’s theory of change should be entirely based on its own government’s needs and structure.

Many budding digital-services teams were inspired by the success of the U.K.’s Government Digital Service (GDS). And while many governments may be hoping to emulate the results the GDS has achieved, the goal must remain digital transformation and not building a team. Often these are confused. GDS’s success was built on its ability to understand how to have an impact within its environment by assessing and developing an effective theory of change.

What’s more, no team, no matter how mature, has built deep expertise across the board. Attempting to do so may be a trap: every team has limited capacity and resources. Figuring out how to concentrate and grow those resources in a way that maximizes impact within your environment, the way GDS did in its early days, is the key. For example, when I teach the maturity model at the Harvard Kennedy School, I use a case study about GDS. Students are given a summary of key facts about where GDS was right after the release of GOV.UK and asked both to assess GDS against the maturity model and to strategize about what GDS’s next steps should be. They are frequently surprised by how low the U.K. team ranked in many of the maturity-model categories while they were working on GOV.UK. But the GDS team’s success didn’t come from scoring high across the board or laying the best foundation; it came from understanding where they could solve a real problem in a way that showed how valuable GDS would be.

As teams continue to grow, it’s crucial that the lessons they take away from GDS aren’t about how to build a team, but about how to reflect on their individual priorities and figure out how to make an impact within their organization’s current state. Teams must take the time to figure out their own theory of change and resist the urge to pattern-match. And while we see that many more teams have focused on political capital, it’s encouraging that a number of teams are attempting to chart their own path and figure out which of the many diverse approaches will be right for them.

Want to see where your team lands in the maturity model or do a team-wide self-assessment exercise? Check out the exercises below or fill out the Maturity Diagnostic online.

Digital Maturity Self Assessment Exercise

This tool is designed to provoke conversations about capabilities and theories of change among digital-services team members. Below are instructions for an exercise we’ve conducted with about 30 digital-services teams around the world.

The What and Why of the Digital-Services Maturity Model

In 2018 we released a maturity model designed to help public-sector digital-services units benchmark their capabilities. The model is designed to help digital-services teams talk to and learn from one another. Outlined below is a simple exercise I’ve used with dozens of digital-services teams around the world. It has led to helpful and important conversations. I’ve designed the exercises so that you can do them independently, but I’d be glad to provide support if you need it. The exercises are meant to enable teams to:

  • Align on current capabilities. All too often, teams aren’t aware of their own capabilities and resources. This can be caused by team growth and specialization, and teams that underestimate their capabilities can be overcautious, while those that overestimate them may overreach and put themselves at risk.
  • Align on strategy and theory of change. Another risk is that teams won’t be aligned on their goals and thus on the capabilities they need to develop to make progress. Teams will confront myriad problems and it’s tempting to engage on each one, but that can pull the team in multiple directions. Having the organization aligned around the goal, theory of change, and strategy is critical given how poorly resourced most digital-services teams are. It’s crucial to make sure everyone is rowing in the same direction.
  • Facilitate the sharing of lessons learned. By providing a common language and framework, this exercise helps teams identify capabilities that they have not developed but other teams have, assess whether those capabilities are useful, and inquire about how to build those capabilities.

Exercises

Before getting started, print out the maturity model for everyone on your team. You can find a copy at http://bit.ly/DGMModel.


Exercise 1: Gaining Alignment around Capabilities

  1. Determine where your team falls within each area by asking each member to circle one box in each row, as shown, corresponding to where they think the team is.
Circle what level of capability you believe your team has

2. Review everyone’s responses as a group to help align your team. Later, you can aggregate this information to get a sense of where the majority of people think you are performing high or low.
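For teams that record their answers digitally, the aggregation in step 2 can be sketched in a few lines of Python. This is a minimal sketch, not part of the official diagnostic; the capability names and scores below are hypothetical placeholders, using the model’s 0-to-3 scale:

```python
from collections import Counter
from statistics import mean

# Each team member circles one level per row (0 = least developed, 3 = robust).
# The capability names and responses below are illustrative placeholders.
responses = {
    "Political environment": [3, 3, 2, 3],
    "Institutional capacity": [2, 1, 2, 2],
    "User-centered design": [1, 0, 1, 2],
}

for capability, scores in responses.items():
    level, votes = Counter(scores).most_common(1)[0]  # modal (majority) answer
    print(f"{capability}: mode={level} ({votes}/{len(scores)} members), "
          f"mean={mean(scores):.2f}")
```

Reviewing the mode alongside the mean is a quick way to see both where the majority sits and how far individual members diverge from it.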

Exercise 2: Gaining Alignment around Theory of Change

1. Tell participants that you are giving them 10 prioritization points. I sometimes refer to these points as “Mike Bracken points,” in honor of the U.K.’s former chief digital officer and co-convener of our event.

  • These points represent where participants think the organization as a whole should concentrate its efforts in building new capabilities. They represent investment, or where team leaders should allocate their attention.
  • For the sake of clarity I sometimes ask that participants think about how they would prioritize their efforts over the next 6 months, 1 year, or 2 years. The time period is at your discretion.

2. Ask participants to assign these points to where they believe the organization should invest more time building out the capabilities of that row.

  • Points must be assigned to a row, as shown. Do not assign points to a specific box.
  • Points are assigned in whole-number increments and can be distributed in any configuration, as long as each participant’s total across all rows does not exceed 10. Thus, for example, one could assign 2 or 5 or all 10 points to any given row.
Write in the points (up to 10) to the left of each row.

3. Review how everyone assigned their points and assess the areas the team thinks are most important to prioritize.

Note for the facilitator: During the debriefing, many participants will spread their points across 7 to 10 rows with 1 or 2 points each. But we highly recommend that 3 or 4 points each be assigned instead to the 2 or 3 rows that will provide the most leverage. Assigning your points across more rows increases the likelihood that you’ll spread your energy too thin and do everything poorly.
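If allocations are collected digitally, the facilitator’s tally in step 3 can be sketched with a short script. Again a hedged sketch rather than part of the official exercise; the row names and allocations below are hypothetical:

```python
from collections import Counter

# Each participant distributes up to 10 "Mike Bracken points" across rows.
# Row names and allocations are illustrative placeholders.
allocations = [
    {"Shared platforms": 5, "User research": 3, "Hiring": 2},
    {"Shared platforms": 4, "User research": 4, "Hiring": 2},
    {"User research": 6, "Political capital": 4},
]

totals = Counter()
for person in allocations:
    # Enforce the 10-point budget per participant.
    assert sum(person.values()) <= 10, "each participant gets at most 10 points"
    totals.update(person)

# Rows with the most points are the team's consensus priorities.
for row, points in totals.most_common():
    print(f"{row}: {points}")
```

Sorting the totals makes the debrief concrete: the top two or three rows are where the group, in aggregate, believes leadership should invest next.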
