Trust but verify: reimagining service assessments

@jukesie · Published in Notbinary
4 min read · Apr 17, 2019

I find myself in an interesting position at the moment. A client has just significantly rebooted their portfolio of DDaT (digital, data and technology) activity, leading to a big shake-up of their governance. That opened up an interesting opportunity: they needed an approach to strengthen the Verify aspect of Trust/Verify for digital deliveries, and I saw a chance to reimagine the idea of service assessments locally.

This client is Government-adjacent and has close ties to the wider digital gov community, but it is exempt from GDS controls; as such, it has somewhat gone its own way in its evolution towards becoming an internet-era organisation. While this does create challenges at times, it also opens up golden opportunities like this one.

I am on the record as believing that GDS Service Standard Assessments are a ‘necessary evil’ — in fact I was quoted in this brilliant presentation by Sam, Steve and Clara at Service Design in Government.

Despite some recent struggles with the process, that point of view remains solid: assessments, alongside spend controls, the service manual and the design principles, are the most important legacy of GDS.

That doesn’t mean I wouldn’t do it differently though :)

There are lots of things I love about assessments but let me list some of the things that have always bothered me — the things I’d like to avoid in any kind of reimagining:

  • They reinforce the unfortunate waterfall-esque linear nature of the Discovery > Alpha > Beta > Live model (hell, they formalise the addition of Private & Public Beta), with the reality that funding decisions rest on results…which leads to…
  • Despite all the efforts to avoid it, they are just too confrontational. It can feel a bit like being joint defendants at a trial in some US legal show, with a team of prosecutors trying to trip you up (I don't think anybody intends it to be like this; it is just the reality!). Because…
  • People. There is a LOT of room for personal interpretation in the process, and people have their own bugbears and agendas; that is just human nature. It is unreasonable to expect that not to be a factor.
  • The whole approach favours storytellers. The assessments are a performance, rehearsals and props included, and the better the MC, the better the chance of success.

For balance, here are some things I love (and yes, some of these seem to contradict what I wrote above; I am a complicated man):

  • The Service Standard — a common, sensible, evidenced, open set of guidelines that just make sense for the delivery of better services. The fact that this artefact underpins every assessment is brilliant.
  • The fact that there are consequences: assessments have teeth. The stick is what makes people take the Service Standard seriously, by verifying that teams have actually followed the approach (spend controls enforce doing the right thing; assessments enforce doing the thing right).
  • As I said above, people can be a problem, but peer review is incredibly powerful, and independent peer review even more so. There is so much opportunity for learning when you can have an open and frank conversation with people who have the knowledge to ask hard, but sensible, questions.
  • The very existence of the assessments is a lever teams can use to push for the space and the support to do things right. It short-circuits a certain amount of internal politics and pressure plays from senior people (not all of it, but it helps!).

So what would something that built on the strengths but avoided the weaknesses look like for me?

  • Less rigid milestones, maybe? I think the reality is that continuous assessment would be best, but it is simply not sustainable, and God knows the Ofsted model of drop-in inspections isn't the way forward! Maybe a more ad hoc, lighter-touch set of meetings throughout the lifecycle? With a single go/no-go at a midpoint agreed between all parties, and then another before anything goes Live?

As a side note, I think the Discovery > Alpha > Beta > Live model was incredibly important when we started using it, but the language and the interpretation feel like a bit of a millstone these days. It was never conceived as such a linear flow, but that is what it has become.

  • They still need plenty of stick to go with whatever carrots are built in. Assessments should be able to send teams back to the drawing board and, in extreme cases, just stop them dead. That means assessors need to be trusted and empowered by the powers that be.
  • As I said earlier, I think independent peer review is something special. I'm pondering how you do that, though, if you are not part of a national network of teams following the same standards, coordinated by a central body. Tricky, but I think it is worth investigating. It might be a case of a quid pro quo arrangement with other similar organisations.
  • Which would mean that whatever local standards emerge (and there will 100% need to be agreed, well-articulated local standards) would need to map to either the Service Standard itself or a.n.other broadly understood standard; otherwise, how can anyone from outside the host organisation be expected to really assess the work?

Anyway, it is an interesting thought experiment, and as usual this blogpost is mainly me making a bit of sense of all the things rattling around in my head!

To reiterate: I believe Service Standard Assessments are a GOOD THING.

I am just wondering if, working at a smaller scale, there is more room to iterate.

It’s Notbinary.

:)
