Part 2a: Maturity Model Update: Facilitating Learning Across and Within Teams (2019)

David Eaves
Project on Digital Era Government
Jun 8, 2020

By David Eaves with Lauren Lombardo

In 2018, working with Ben McGuire, a student here at Harvard Kennedy School, we released a maturity model designed to contextualize the progress of public-sector digital service units. The model, based on feedback from participants at our annual digital services convening and on conversations with practitioners from around the world, is meant to help people assess a government’s capability to deliver digital services by looking at progress across six capacities: political environment, institutional capacity, delivery capability, skills and hiring, user-centered design, and cross-government platforms.

Two exciting things have happened since the model was released. First, we’ve learned a lot about what makes the maturity model interesting and useful to practitioners and students. Our goal in creating the model was to help digital-services teams talk to and learn from one another, so we sought to spot the patterns of capabilities that teams doing similar work have had to develop over time. This has been useful for several reasons. First, it provides an imperfect but helpful roadmap teams can use to see how their work is likely to evolve. Second, it has proven helpful in getting teams to debate strategy and theory of change: digital-services teams around the world have different goals and divergent paths for achieving them, and the model gives us a window into their actions from which we can infer strategy. Finally, and perhaps most important, it provides a shared vocabulary that helps spark productive conversations within and across teams.

The second exciting outcome has been that individuals from digital-services teams around the world completed our online assessment tool, providing us with self-reported benchmark data. Throughout the year, when speaking about the model at conferences or using it in my degree program and executive education courses, I’ve been able to gather even more data from representatives of the digital-services teams in attendance. As a result we have a healthy amount of self-assessed benchmark data from digital service teams around the world.

In this chapter we share this data and some hypotheses we’ve generated about what it tells us, and also outline how the maturity model has been used to elicit conversation within and across digital-services teams in ways that they have found useful.

Understanding the Data

Self-Reported Strengths and Weaknesses of Digital-Services Teams, Reported on a Scale of 0 to 3, with 3 Being the Most Robust Level of Development. Image by Ben McGuire.

The data show that on average, digital-services teams have spent most of their time building political capital, developing executive sponsors, and building out their mission statement. Strikingly, almost half of the teams reported a “high” or “future state” ability in these three areas.

This isn’t to say that teams aren’t focusing on other goals as well. Respondents reported progress across the board, and the individual data we collected suggest a diversity of approaches. In eleven other areas, more than 20 percent of teams report “high” or “future state” abilities. These results span a wide range of topics and include creating public registers of data and building product-management competencies.
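To make the scale concrete, here is a minimal sketch, in Python, of how self-assessments like these could be tallied. It is not the actual diagnostic or its data: the benchmark names, team responses, and tally function below are hypothetical, and only the 0-to-3 scale and the “high” and “future state” labels come from the model itself.

# A minimal sketch, not the actual diagnostic: tallying self-assessed scores on the
# 0-to-3 scale described above, where (per the post) 2 maps to "high" and 3 to
# "future state". Benchmark names and responses are hypothetical, for illustration only.
from collections import defaultdict

# Each dict is one team's self-assessment against a few illustrative benchmarks.
responses = [
    {"political capital": 3, "executive sponsors": 2, "shared platforms": 1},
    {"political capital": 2, "executive sponsors": 3, "shared platforms": 0},
    {"political capital": 3, "executive sponsors": 1, "shared platforms": 1},
]

def share_at_or_above(responses, threshold=2):
    """Per benchmark, the share of teams reporting a score at or above the threshold."""
    counts, totals = defaultdict(int), defaultdict(int)
    for team in responses:
        for benchmark, score in team.items():
            totals[benchmark] += 1
            if score >= threshold:
                counts[benchmark] += 1
    return {b: counts[b] / totals[b] for b in totals}

for benchmark, share in share_at_or_above(responses).items():
    print(f"{benchmark}: {share:.0%} of teams at 'high' or above")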

While we see that teams have made progress elsewhere, their accomplishments in other categories don’t compare to the levels reported for building political capital, developing executive sponsors, and building out a mission statement.

It is nice that divergent approaches are forming, but regardless of what else teams have prioritized, almost everyone has included these three areas in their action plan. This doesn’t come as much of a surprise. As more and more digital-services teams are created, a plausible but somewhat optimistic hypothesis is that they are now growing the way normal government organizations do: by building out the fundamentals.

The data suggests that digital-services teams have prioritized creating a solid foundation for the work ahead, and by focusing on these three areas they have cultivated the political and structural capital they’ll need to support future projects. This foundation is important, and the support behind it was undoubtedly very difficult to build. By any measure, this work should be considered an accomplishment.

However, the emphasis on building capacity also shows that for many teams, the most difficult work by far is yet to be done. Some of the most basic goals, such as prioritizing user needs and improving the user-feedback cycle, sit relatively low, with 14 percent of teams in the “high” range and 17 percent in the “future state” range. This is to say nothing of more ambitious and demanding benchmarks, such as creating shared platforms (and their governance), where just 8 percent of respondents report significant progress, the lowest share of any benchmark.

It’s clear from reviewing the lower-scoring categories that the next battles will be hard won. There are also risks; for example, the teams with the strongest foundations and most robust political support may be pressured to show value quickly. The data shows that, on the whole, teams have yet to spend the same energy on figuring out how they will move digital-government initiatives forward, so the next few years will be pivotal for gaining credibility and trust. Either way, political sponsors will expect to see high returns on their investment: tangibly improved online services, say, rather than simply better access to tools and high-level guidelines for other departments to follow.

Theory of Change

Again, though the aggregated data shows this trend, it doesn’t mean that teams aren’t developing their own theories of change. We don’t see the same number of teams reporting “future state” capabilities in the more challenging areas, but teams are starting to make those investments. So while it might be concerning that every team seems to be focused on the same three areas, even though it’s unlikely they all need to be, that concern is offset by the fact that teams are also focusing on a fourth or fifth area outside the general consensus.

This is encouraging. We are still early enough in the game of digital transformation in government to say that there is no clear and dominant strategy, so experimentation and divergence are probably our allies at the moment. And while political and structural capital may be a necessary precondition for all other activities, it is not an end in itself.

By focusing only on building capacity, teams would be forgoing approaches we have seen in other jurisdictions, including starting with existing platforms, as India and Bangladesh have done, and prioritizing user perspective and design patterns, as in the U.K. And while starting with the fundamentals might be the best approach for some teams (Italy), there are many ways to build a digital-services team, and each team’s theory of change should be entirely based on its own government’s needs and structure.

Many budding digital-services teams were inspired by the success of the U.K.’s Government Digital Service (GDS). And while many governments may be hoping to emulate the results the GDS has achieved, the goal must remain digital transformation and not building a team. Often these are confused. GDS’s success was built on its ability to understand how to have an impact within its environment by assessing and developing an effective theory of change.

What’s more, no team, no matter how mature, has built deep expertise across the board. Attempting to do so may be a trap, since every team has limited capacity and resources. Figuring out how to concentrate and grow those in a way that maximizes impact within your environment, as GDS did in its early days, is the key. For example, when I teach the maturity model at the Harvard Kennedy School, I use a case study about GDS. Students are given a summary of key facts about where GDS was right after the release of GOV.UK and asked both to assess GDS against the maturity model and to strategize about what GDS’s next steps should be. They are frequently surprised by how low the U.K. team ranked in many of the maturity-model categories while working on GOV.UK. But the GDS team’s success didn’t come from scoring high across the board or laying the best foundation; it came from understanding where they could solve a real problem in a way that showed how valuable GDS would be.

As teams continue to grow, it’s crucial that the lessons they take away from GDS aren’t about how to build a team, but about how to reflect on their individual priorities and figure out how to make an impact within their organization’s current state. Teams must take the time to figure out their own theory of change and resist the urge to pattern-match. And while we see that many more teams have focused on political capital, it’s encouraging that a number of teams are attempting to chart their own path and figure out which of the many diverse approaches will be right for them.

Want to see where your team lands in the maturity model or do a team-wide self-assessment exercise? Check out the exercises below or fill out the Maturity Diagnostic online. You can download a PDF version of the maturity model here.
