We need to talk about…Service Assessments

@jukesie
4 min read · Dec 17, 2015


Pretty much everybody involved in UK [central] Government digital work these days knows about the GDS Service Assessments. Created to provide a level of peer review and assurance around compliance with the Digital by Default Service Standard, they were envisioned as a way of building trust in something previously utterly untrusted: Government technology project delivery. Almost a ‘kitemark’ for digital transformation.

Given the scale of the transformation that has been undertaken, and the widely held initial lack of confidence in the Civil Service's ability to make good on the mission, the Assessments have played an important role in raising standards and capability. Not to put too fine a point on it, they have been a necessary evil.

There are three things though that concern me a little about the Assessments as they stand.

1.

It feels like they have been endowed with so much additional significance that, rather than being a forum to discuss the finer points of digital delivery, they have instead become a hotbed of crippling anxiety and second-guessing.

The problem is the manner in which passing the assessments has essentially replaced (or been added to) milestones like Gateway reviews, which are tied to things like the release of funding or permission to continue (or launch). This means the pressure to ‘pass’ increases hugely, and it could lead to a situation where being ready for the Assessment becomes the end game rather than following the spirit of the Standard to build a better service.

I don’t believe this is really the fault of the format (though it could be a little less adversarial, I think), and it is absolutely not the fault of the assessment teams: on both occasions I’ve been involved at GDS they have been brilliant, helpful and open, probing intelligently into our work beyond the script. It is simply that the need for formal points of assurance at the governance level doesn’t always chime with the best way of identifying whether a team is performing well.

2.

The way the Assessments are designed at the moment puts an enormous amount of pressure and responsibility into the role of the Service Manager. While it is recommended that on the day the Service Manager is backed up by at least a Tech Lead and a User Researcher, the vast majority of the questions are at least initially directed at them.

Given the scope of some projects and the complexity of the modern web, I wonder if this is really realistic. I’m pretty experienced and have quite a wide breadth of digital interests (plus I am a bit of a control freak), but I really struggle to keep a full perspective on everything I manage without stepping so far back that I lose all the useful detail.

It is a bit of a Catch-22: I do agree on the importance of these (relatively new) roles, but my feeling is that they create a single point of failure. One person having a bad day could cause problems for a massive project.

3.

The Standard, and thus the Assessments, haven’t really evolved beyond their initial mission of assessing the big, exemplar transaction services. These are still clearly the priority and will be for some time yet, but it does feel like we need a slightly more open, modular approach. Not everything that needs assessing fits into that original, fully end-to-end transaction model.

Given all the discussion of Government as a Platform, maybe we need a way to assess emerging platforms rather than services. And what about all the digital products we are building that contribute to those services and platforms?

I’m pretty sure this is on the to-do list for somebody at Aviation House, but if the situation outlined in (1) persists we need to make sure that we give everyone a fair shot rather than trying to crowbar their service/platform/product into a shape that fits the current questions.

Now, I acted as Service Manager yesterday at our Beta → Live assessment, and at the moment I don’t know whether we passed or failed. I don’t want this to be seen as sour grapes or a reflection on that experience: the assessment team were fab, we had a lot of useful conversations, I answered some tough questions, and all in all I was happy with the day. It is just that I have been thinking a LOT about the process these last few weeks, and this window between doing and knowing seemed a good time to publish.


Applying the culture, practices, processes & technologies of the Internet-era to respond to people’s raised expectations…as a service :) notbinary.co.uk