Are we really this stupid?
For the past year, GDS has been under serious attack. Others have written what a folly it would be to break apart or diminish the role of GDS. I agree. Government would be poorer, users would lose their champion and the corporatist vendor-bureaucrat axis would win.
And now we have the prospect of spend controls — an essential tool in getting out of the mess we were in — being watered down.
Let me explain.
Any business embarking on digital transformation should study the two sides of digital government in the last Parliament.
The talented digital teams now redesigning the welfare state and the justice system for the Internet age wouldn’t have got anywhere without Liam and his team creating the right environment for this kind of transformation to succeed.
This meant getting a proper grip on government IT for the first time to stop the bleeding. It worked.
We started to fix legacy systems rather than buying more band-aids.
We started saying ‘no’ to the small oligopoly of big systems integrators that dominated government IT.
We made it easier for a new breed of supplier to work with us.
We stopped pretending government was special and started to use open standards, open source and cloud computing like everyone else.
We tackled the friction in things like procurement, information security and funding.
Because the truth is you can’t do proper transformation without also doing all of this. If you try to, you’ll just end up with some shiny apps on top of a mountain of expensive, messy crap.
You’ll have made things more digital, but probably not any better.
In praise of… spend controls
I spent 18 months at GDS helping with some of this stuff.
You’ve probably never heard of the team I worked in. The public face of GDS was GOV.UK, exemplars, G-Cloud and government-as-a-platform. Delivering services, making things better for users and fixing government from the inside. Big, important stuff.
Within departments, however, GDS is known for another thing — spend control.
The spend control has been around since 2010. Officially, it’s there to ensure value for money — a delegated authority from the Treasury to stop government overspending on IT. The spend control team reviews all spend over a certain threshold to ensure it is in line with the technology code of practice.
In practice, my job was to stop really bad things from happening.
The spend control is a blunt tool, but it is effective. It has saved taxpayers hundreds of millions of pounds that would otherwise have been wasted on moribund projects, badly designed contracts or services that don’t meet user needs.
That small team of around eight were the first responders of GDS, sent out to sniff out things going badly and help put them right.
We weren’t always popular — saying ‘no’ usually isn’t. We weren’t always right either — working across government requires pragmatism and understanding, not paternalism.
Nor do we always have to say ‘no’. Nowadays, it’s the exception rather than the norm. The worst practices of old Big IT are no longer accepted in polite society (in central government at least). There are now lots of really good things happening and some really talented digital people across government.
We’re still bleeding
But despite the progress over the past five years — whisper this quietly — there are still quite a lot of bad things going on.
You’d be forgiven for thinking otherwise, but it’s not all service design, user needs and bunting in the world of digital government.
The idea that spend controls can now be relaxed is worrying. To anyone who takes a passing glance at headlines in the tech press, it’s laughable to think that departments — even the big ones — are now capable of avoiding the mistakes of the past.
Big departments are sprawling empires of agencies, intra-departmental turf battles and odd teams in obscure buildings that you forget exist until they sign a 10 year extension deal with their incumbent supplier.
Even for a super C/T/D/IO with a brilliant team supporting them, turning round the oil tanker whilst designing new digital services without getting shafted by suppliers or the mandarins is a mammoth task.
Government isn’t there yet. It may never get there. No matter how well intentioned, when it comes to IT government has an awful habit of messing things up.
Liam had a word for these bad things — he called it self-harm. It’s a neat term to describe the self-inflicted digital mess government tends to get itself into. Simon Wardley talks about inertia and bias. Same idea.
Whatever you call it, in my time in government (and healthcare) I’ve seen an awful lot of it. This stuff isn’t limited to government, either — not by any stretch. Big private sector organisations are riddled with these problems, too.
Read between the lines of official government digital standards and policies, and you may be able to make out the horrors that led to their creation. But it’s rare that the acts of self-harm themselves are called out explicitly.
So, I’ve got a little list.
Consider this the opposite of a playbook. These are the things you really should avoid doing. Anti-patterns, if you will.
This stuff is still going on, every day, across government.
I’ve not written this to call anyone out. There are many smart, dedicated people doing these things, often because there’s no alternative within the constraints they’re working in.
But it’s 2016. We really need to stop doing this to ourselves…
Hat tips are due to: Simon Wardley, Leisa Reichelt, Mark Thompson, Liam Maxwell, Jerry Fishenden, Tom Loosemore, Andrew Greenway, Paul Downey, Martin Fowler and lots of awesome government digital people for many of the hard-learned lessons below.
Have I missed any obvious anti-patterns? Have I made any clangers? Please let me know! @sheldonline
The Playbook: A to Z
Agile · Backwards causality · Big Bang · Business · Buy vs. build · Cloudwash · Core competency · COTS · DevOps · Enterprise · Enterprise architecture · Exit · Extension · Lift-and-shift · Market · Nominative determinism · One-throat-to-choke · Platforms · Portals · Roadmap · Rules engine · Steady state · Target Operating Model · Towers · Turn-key solution · User research
Agile
As in: Agile with a capital A
As in: PRINCE2 Agile (actually a thing!)
As in: Enterprise Agile
We’ve all seen it. You can smell it when the first question is ‘which Agile methodology do you use — Scrum or DSDM?’.
It’s the kind of Agile that focuses on the ceremonies like standups, but misses the point entirely. There’s lots of post-its and Things on Walls, but it doesn’t look like anyone’s actually used it to manage their work. Is it mostly for show?
We cringe. It’s like dad dancing.
But it’s worse than that. This is the kind of Agile that gives agile a bad name.
It’s the kind of Agile that just slices up a Gantt chart into fortnights and calls them sprints.
It’s the kind of Agile that gives lip service to the principle of responding to change over following a plan, but forgets to mention that you can’t change the project scope, schedule or budget. Sorry about that.
It’s easy to mock. But this kind of Agile is a well-intended — but flawed — attempt to bridge the gap between the way we want to deliver and the way we are forced to in the waterfall reality of government.
There’s a serious challenge in here somewhere: how can we deliver in an agile way within institutions that are most definitely not agile?
Government has a rhythm of Parliamentary terms, spending rounds and news cycles. Ministers like to announce things, and increasingly these things rely on digital to make them work.
Appeals to trade-offs in scope, schedule or budget don’t usually carry much weight in the face of The Grid and the wishes of the minister. That’s democracy; no use arguing against it.
The policy class also has a strong attachment to certainty. The Right Answer is determined, policies are written and then implemented. “We don’t know” or “we changed our minds” are signs of weakness.
Embarking on a project without a pre-determined solution — or being willing to change course when something isn’t working — is something agile is well-suited to and government is usually very bad at.
There’s also a need to account up-front for how public money is spent. Government does this through the Green Book business case process. This process is designed to de-risk. To ensure government makes the right investment decisions. It seems right and proper to do this, and there’s decades of experience baked into the way government does business cases.
However, the rules on business cases — and particularly the way those rules are implemented by departments — can kill an agile project before it has started.
Firstly, the business case process is designed to reduce risk by up-front assessment of the merits of an investment. But despite the apparent sophistication, this is all just guesswork. As Andrew wrote:
at their heart, business cases are crystal ball gazing with Excel tables. Experience has shown that there is no better way to guarantee poor predictions than to use individual experts to make assumptions about the future, and sprinkle on a little cognitive bias. Yet this is exactly how we still cost up all the biggest public sector investments. We are trapped in what Nassim Nicholas Taleb calls an “Intellectual Yet Idiot”-led process.
Furthermore, the traditional SOC/OBC/FBC process is designed around big, traditional OJEU procurement processes. We write down our requirements up front, run a tender and sign a waterfall contract. Discovery and iteration — if it happens at all — is usually confined to the ‘SOC’ stage.
And given how long this process usually takes — along with all the other slow-moving procurement, governance and approvals processes — who the hell would want to do this more than once? There’s a perverse incentive built into the system that forces projects to go for one, big bite of the cherry, which in turn has a habit of creating big, waterfall programmes.
In short, the way government funds projects too often forces us to make big decisions at the point at which we know least.
Cause for optimism: things are changing. HMT is coming to terms with agile. Policy teams are testing what works. Officials are learning to make lots of small, less risky, decisions through gated funding approvals rather than one big up-front one.
Footnote: agile isn’t right for everything. Go read Simon Wardley.
Backwards causality
As in: “they [successful organisation] are doing this, so we should too”
As in: “let’s adopt the Spotify model!”
This is the difference between having a strategy and having a thing called a Strategy.
You can now auto-generate your own digital strategy based on the same set of buzzwords everyone thinks they should use. It probably won’t look too different to the strategies of most organisations.
If you are dependent on copying — wholesale — the latest digital-this or data-that other organisations are promoting; if you follow the latest trends sheep-like with no consideration of needs or your operating environment then you are doomed.
Pity the poor digital bods in government who have to cater to the latest whims of their leadership, who’ve read that such-and-such are doing this Big Data thing and want to know what we’re doing about it.
Discoveries are commissioned. Strategies are written. Things are bought. Money wasted. Meanwhile, things that actually merit funding and focus — things that would actually make things better for users — remain at the back of the queue.
Big Bang
As in: “No sorry, we have no ability to roll back to the old system”
As in: “Beta? What’s a beta?”
Yeah, this is still going on. One day everyone will be using one system, the next day it will be replaced in its entirety with something new. No phased adoption, no parallel running. Everything has changed.
No prizes for guessing what happens next.
But why are we still doing this?
The worst reason for choosing a big bang approach is financial.
“We can’t afford to keep up two systems at the same time”
You really can’t afford not to.
Another bad reason is commercial.
“Our [super expensive] contract has already been extended multiple times… we need to switch off the legacy as soon as the new system is ready”
Well, you should have started earlier then!
There are more understandable reasons for a big bang approach. Running two systems in parallel comes with complexities: keeping data in sync, juggling two systems. Switching overnight can seem appealing.
When a system is embedded across a business, it can become even more complex. It is quite typical to see huge, hospital-wide EHR systems switched overnight. Tempting as it might be to transition gradually, department-by-department, the pain of having different bits of the organisation on different systems is seen as too high a price.
Even harder still is when your system has had hundreds of holes punched in it over the years to integrate with lots of other systems. The thought — and cost — of managing all those dependencies across two different systems at the same time can make it seem like there is no alternative to a big bang approach.
But there’s always an alternative. Find a way. Evolve your architecture instead of replacing it all at once. Your proprietary technology stack and rigid contracts will make this hard, but find a way. The alternative is far worse.
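One well-known way to evolve rather than replace is the ‘strangler fig’ pattern: route a small but growing slice of traffic to the new system while the old one keeps running. Here’s a minimal sketch in Python — the `legacy_lookup` and `new_lookup` handlers are hypothetical stand-ins for the two systems:

```python
import hashlib

# Hypothetical handlers standing in for the old and new systems.
def legacy_lookup(record_id):
    return {"id": record_id, "source": "legacy"}

def new_lookup(record_id):
    return {"id": record_id, "source": "new"}

def route(record_id, rollout_percent):
    """Send a stable slice of traffic to the new system."""
    # Hash the ID so a given record always lands in the same bucket,
    # and therefore always hits the same system.
    bucket = int(hashlib.sha256(str(record_id).encode()).hexdigest(), 16) % 100
    if bucket < rollout_percent:
        return new_lookup(record_id)
    return legacy_lookup(record_id)

# Start small, widen the slice as confidence grows.
print(route("case-42", rollout_percent=10)["source"])
```

Because routing is keyed on a hash rather than a random draw, behaviour is deterministic per record — and rolling back is as simple as setting the percentage to zero.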
Business
As in: ‘The Business’ vs ‘IT’
As in: “Let’s ask The Business what their Requirements are”
This is an oldie that refuses to die.
It’s a world of mutual distrust, where IT are seen as the basement boys who inflict pain and disruption on staff, and ‘The Business’ is regarded as clueless and run by idiots.
It’s a world where The Business sees IT merely as a support function, there to do what The Business tells it to and IT sit back, resigned to playing a bit-part in the future of the organisation.
It’s a world where IT are at least one removed from understanding the needs of the most important people — the end users/customers/patients/taxpayers/residents (delete as appropriate).
There is an alternative:
IT delivery should be far more distributed across organisations. A larger number of smaller, more embedded and more focused business IT teams offers many advantages.
Embedded, multi-discipline teams for the win!
Buy vs. build
As in: “Why would we build software when we can buy off the shelf”
Buy advocates accuse builders of ‘Not invented here’ syndrome, or of reinventing the wheel.
Build advocates accuse buyers of overlooking the hidden costs of buying software.
Truth is, buyers are often people who like to buy things, and usually don’t know enough about building and managing software to consider the alternatives.
And builders like building stuff. Hammers, nails, go figure.
It’s a false choice.
Firstly, it’s the wrong question. A better question would be grow your own vs. re-use, but that’s not as catchy. Because in all of this the licensing model doesn’t really matter.
The need for ‘enterprise level support’ isn’t an excuse to pick proprietary over open source anymore. Canonical, Pivotal and co. are not bedroom enthusiasts, they’re serious players.
Second, nobody actually builds from scratch nowadays. Lots of problems in software engineering have already been solved, packaged up and put on GitHub for everyone to steal.
From database tools, to operating systems, application frameworks and development toolchains — making software is often as much package management as coding nowadays.
As the recent NPM drama showed, developers are now relying on others to provide even the simplest of software components and cloud services. The art is in bringing all of these smaller components together with your own code to design coherent services.
Finally, nothing actually works ‘off-the-shelf’. Unless the product you’ve picked is simple and does one thing well, or your organisation works exactly how the vendors imagined, or you are willing to inflict something rubbish on users, you will end up spending significant sums on customisation and integration.
All of that costs money, and creates some degree of lock-in to the product you’ve chosen. Hell, for some products you’ll probably burn wads of cash just working out what licenses you need.
You should worry about this when:
- You have teams of people who have convinced themselves they need a [fill in particular category of software product here] without a proper understanding of user needs or design of the overall service. Dealing with content? You’ll need a CMS for that, guv. Dealing with customers? Quick, implement a CRM! Too often: sledgehammer, nut.
- You have teams of people advocating buying a software package because it meets “90% of our business requirements”. This usually means they’ve asked people (staff) to guess what they want out of an abstract piece of software and picked the one that seems about right (or the one they had a bias towards from the start). This, like most waterfall software projects, is unlikely to leave your users delighted.
- You have teams of people who have drunk the Kool-Aid and believe this single software product can do EVERYTHING you need, and more. Yes, software can do pretty much anything you want it to with enough effort. But it won’t be pretty, and it will be expensive.
These scenarios usually play out when you have teams who don’t — or don’t want to — understand the value chain of a service and instead reach for magic ‘black box’ solutions.
You need to understand the components that will make up your service, and then understand whether each component can be simply consumed as a commodity, adapted from an existing software package or needs to be built.
The answer will be different for each component. Can you use S3 to store and serve out your images in your new digital service? Probably, yes.
Can you simply buy an all-singing-all-dancing CRM-cum-CMS, configure some variables and use it to run your whole service? Let’s speak in a few years time and see how that’s working out for you.
Golden rule: the more needs you hope a ready-made software component will meet, the less likely it is that it will work.
Break it down, keep it simple.
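The component-by-component thinking above can be made concrete. A toy sketch — with entirely hypothetical component names — of mapping each part of a service to its own sourcing decision, rather than making one sweeping buy-vs-build call:

```python
# Hypothetical breakdown of a digital service into components, each
# with its own sourcing decision: consume a commodity, adapt an
# existing package, or build it yourself.
COMPONENTS = {
    "image storage":     "consume",  # commodity, e.g. object storage
    "email sending":     "consume",  # commodity API
    "case management":   "adapt",    # existing package plus configuration
    "eligibility rules": "build",    # unique to this service
}

def sourcing_plan(components):
    """Group components by sourcing decision."""
    plan = {"consume": [], "adapt": [], "build": []}
    for name, decision in components.items():
        plan[decision].append(name)
    return plan

plan = sourcing_plan(COMPONENTS)
print(f"build only {len(plan['build'])} of {len(COMPONENTS)} components")
```

The point of the exercise is the shape of the answer: most components turn out to be commodities, and the bespoke effort shrinks to the genuinely unique parts.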
Cloudwash
As in: “we’ve built a private cloud”
As in: “Yes, our service is definitely cloud. Please sign this 10 year contract.”
As in: “we bought it off G-Cloud so it must be cloud, right?”
In the tech world, once a word gains enough traction, there will be no shortage of companies trying to use it to sell you stuff.
Usually the stuff they try to sell you bears no resemblance to the original meaning of the word.
Unfortunately in government, sometimes the stars align in a harmful pattern. In this case, when you combine buyers who don’t really know what they’re doing with suppliers who are happy to rebadge their offerings to let the buyers say they’re adopting the latest strategy/policy/standard, you get a real mess.
So, you have suppliers who will sell you special government cloud services. They will even sell you a government-as-a-platform. Brilliant. We might as well just buy that and all take the day off, then.
Common pitfalls include:
- ‘Infrastructure-as-a-service’ that is really just managed hosting sold on a per server per day basis. I’ve even seen some suppliers that have had the gall to ask for upfront payment under these so called cloud contracts so they can buy and install servers (with a lead time of several weeks). Kind of defeats the purpose of cloud doesn’t it?
- Assuming that everything bought via G-Cloud is ‘cloud’. It isn’t.
- ‘Software-as-a-service’ that smells very much like the same software but hosted by someone else.
- Thinking you need your own, special cloud services.
- Buying cloud services in the same way you used to buy boxed software or hardware. If you aren’t treating it as a commodity, you’re doing it wrong.
Then you get the cloud-denier crowd. Those people who say things like “‘cloud’ just means ‘hosted’” or “it’s just virtualisation isn’t it?”. These are the kind of people who will land you with the biggest vSphere licensing bills you have ever seen. Avoid.
Cloud is a thing. Elastic, scalable, highly available. The rest of the world (outside government) has been doing it for a while now. Cloudwash is only delaying the inevitable.
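A tongue-in-cheek sniff test, based on the pitfalls above: if an offering fails on elasticity, pay-as-you-go or self-service provisioning, it probably isn’t cloud. The criteria and thresholds here are illustrative only:

```python
# Illustrative "cloudwash" detector. Real cloud is elastic,
# pay-as-you-go and self-service; these checks flag the opposite.
def looks_like_cloud(offer):
    red_flags = []
    if offer.get("minimum_term_years", 0) > 1:
        red_flags.append("long minimum term")
    if offer.get("upfront_payment"):
        red_flags.append("upfront payment for hardware")
    if offer.get("provisioning_lead_time_days", 0) > 0:
        red_flags.append("weeks to provision a 'cloud' server")
    if not offer.get("scale_down_allowed", True):
        red_flags.append("can't scale down")
    return red_flags

# Managed hosting rebadged as IaaS, as described above.
managed_hosting = {
    "minimum_term_years": 10,
    "upfront_payment": True,
    "provisioning_lead_time_days": 21,
    "scale_down_allowed": False,
}
print(looks_like_cloud(managed_hosting))
```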
Core competency
As in: “digital isn’t part of our core competency as an organisation so we should outsource it all”
Outsource it all? Really? Would we say this about finance or HR — what organisation would consider control of these essential functions as outside its core competency? Which forward thinking business today would outsource control of its digital destiny?
Of course government shouldn’t do everything. Nobody would build a power station to brew a cup of tea. But smart organisations don’t do this kind of all-or-nothing decision making.
Read some Simon Wardley (again). Understand the needs of your users. Map your value chain and plot where each component is along the evolution curve. Pick appropriate procurement, delivery and management methods for each. Get the most out of the market, use agile for what it is best at. Understand what you need to keep hold of. Learn and improve!
We might not hear this kind of sweeping statement expressed so boldly nowadays, but I still get the sense that this lies behind a lot of board level decisions within government — made by people who see all ‘IT’ as a necessary evil, too complicated to get involved with directly — best left to those smart chaps from [insert IT company here].
At best, this kind of strategy (if we can call it that) will lead to spending more money than you need to. At worst, you will lose the capability to transform your organisation.
COTS
As in: commercial-off-the-shelf
As in: “the 1990s called, they want their software back”
The term made sense when software came in boxes on shelves, not so much now.
The way it is used in government is often a little confused. The intent is usually to use an existing software component, rather than build from scratch (see also: Buy vs. build). But then the ‘Commercial’ bit muddies the waters — why does the licensing model matter here? Why not OSOTS?
COTS deserves to be called out as it is holding us back. Every time a business case concludes that ‘COTS’ is the preferred option, we rule out consuming better, cloud-based alternatives.
And — even more importantly — continuing to reach up to the shelves for shrink-wrapped software usually means we neglect to ask the fundamental questions: what do users need, and how should services be designed to meet those needs?
DevOps
As in: “the DevOps team”
As in: “we want to buy some DevOps”
A great example of backwards causality. And also a variant of cloudwash — packaging up the same old crap with a new trendy label (aided by willing suppliers).
People who mis-use ‘DevOps’ are obviously missing the point: DevOps is a set of practices to break down the barriers between those who build and those who manage software. It is not a job role, team or something you can buy in.
I’ve usually seen the term misapplied to contracts for teams of people to manage existing software. Often by organisations with little or no in-house capability to either build or manage software, which makes the term even less relevant.
If you see it, call it out and help the organisation understand what DevOps actually is and how it could help them.
Enterprise
As in: “open source isn’t suitable, we need an Enterprise Solution”
As in: “Gmail is just a consumer tool, it’s not Enterprise-Ready”
Haha. They got you. Enterprise just means Expensive!
Code doesn’t know whether it is Enterprise or not. Only the sales teams do.
Sure, go down the ‘Enterprise’ route if you want to always be 5 years behind and 10x more expensive. There’s a reason AWS started with devs not suits…
Enterprise architecture
As in: “we’ve just spent the past 10 years mapping our entire organisation’s architecture”
For organisations with lots of technology worries, it is tempting to think that today’s problems can be stopped from happening again with more control, investment in predictions and big architecture-up-front.
So, we find comfort in box-and-wire diagrams and rules. We set up Enterprise Architecture Boards/Design Authorities. Some projects (good-and-bad) suffer at the hands of the bureaucracy, some projects (good-and-bad) find ways of getting around it.
The problems don’t go away, but these structures often make it hard to do the right thing. And they are a sure-fire way of demotivating teams and breaking down trust in your organisation.
And it’s even worse when one-size-fits-all rules are imposed across products and teams. Mandated re-use, enforced technology choices and dependency hell are a sure-fire way of slowing down delivery and delivering crappy products.
These kind of structures divorce decision making from the learning that happens in teams. Architects should be part of multi-discipline teams, with decisions made at the right level (usually by the team itself).
Instead of setting up another architecture board, energies would be better invested in getting people with an interest in architecture across an organisation to talk to each other.
Exit
As in: “it will cost us more to leave our current vendor than stay with them”
As in: “oh shit we forgot to think about our exit costs”
Do you have a role in writing or approving IT business cases in government? Are you reviewing one now? Does it account for the full costs of exiting whatever system/contract it proposes at the end of its life? If not, can I politely suggest you make some revisions.
Let’s use a little example here. You currently use a system from Vendor A. The contract is coming to an end, and you’ve just had your new business case approved for the implementation and running of a replacement system.
There’s a better, cheaper and more flexible system now provided by Vendor B. You ask them both for quotes — Vendor B costs significantly less per month but the startup costs are higher. Turns out your own internal startup costs would be higher if you went for Vendor B, too. So you end up sticking with a slightly newer version of your system from Vendor A.
This happens all the time. The costs of change (data migration, staff training, pricing in of risk by new suppliers) will usually break your business case if the exit costs from the old system weren’t factored into the previous investment. And so we get stuck in a cycle that benefits incumbent suppliers…
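The trap is easy to show with some entirely made-up numbers. Compare monthly rates and the incumbent looks cheap; count the whole life of the contract, exit included, and the picture flips:

```python
def total_cost(startup, monthly, months, exit_cost):
    """Whole-life cost of a contract: entry, running and exit."""
    return startup + monthly * months + exit_cost

# Illustrative figures only. Vendor A is the incumbent: low startup
# (you're already integrated), but higher running costs and the same
# painful exit you keep deferring.
vendor_a = total_cost(startup=200_000, monthly=80_000, months=48, exit_cost=500_000)
vendor_b = total_cost(startup=900_000, monthly=50_000, months=48, exit_cost=100_000)

print(vendor_a, vendor_b)  # 4540000 3400000
```

On these (made-up) numbers, the challenger is over £1m cheaper across the life of the contract — but only if the business case is forced to account for startup and exit costs, not just the monthly rate.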
Extension
As in: 5+5+5+5
Also known as: ‘Continuation’
Also known as (by suppliers): the Magic Money Tree
You have to feel sorry for government IT bods sometimes. They really are sitting targets.
The original procurement took an age, but now you’re part of the furniture. You’ve already sold them software that took longer to implement than promised. Their data is now firmly locked into your system, and your system is firmly locked into their estate thanks to those integrations that you control. Nowadays, you’re making pure profit on this contract — just crank the handle and wait for the POs.
They knew the contract was rapidly coming to an end, but their internal bureaucracy is so great that it will be next year (at the earliest) before they can buy a replacement, let alone migrate to it.
So you bring out your magic weapon: The Extension. Up pops the account manager who specialises in these kinds of recurring revenue opportunities. Wheel out a few stories about the semi-retired German engineers who are the only people in the world who understand this system. That should justify the £4m price tag you’ve just invented.
This is a commercial scenario where the client holds very few cards. The best way to avoid it is to get your shit together: break the lock-ins and develop a credible threat of actually moving away.
Lift-and-shift
As in: “we’re going to move it all to a new environment before we start transformation”
It’s never that simple. And it’s rarely worth the hassle.
There are a few varieties of this. Sometimes it means physically moving servers from one data centre to another. In vans. That’s where the phrase comes from. If you can possibly avoid this, you really should.
Other times it means moving systems from one set of infrastructure to another. This is where it gets messy — these systems were probably never designed to be moved in this way. Legacy infrastructure builds were not automated.
In other cases, lift-and-shift means migrating all your data — as-is — from one system to a new replacement system. This is unlikely to be a successful venture.
Lift-and-shift as a strategy for cost saving is unlikely to pay off, particularly if you are moving legacy apps to cloud without any rearchitecture.
Lift-and-shift as a precursor to transformation is stupid. It never works, yet it is repeated time after time by government organisations.
This strategy is extremely common in the world of finance/HR systems, where the promised land of changes to business processes is never quite reached. The organisation usually gets stuck in a thankless few years of data migrations and loses energy and budget to do anything other than a lift-and-shift.
Market
As in: “we asked The Market what we should do and they’ve told us this, so it must be true”
As in: “we asked The Market whether a 2 year contract would be OK, but they said they want 5 years minimum”
As if The Market is some kind of neutral, indisputable force.
The answer The Market gives depends on who you ask and what you ask.
I’ve seen a few too many ‘pre-market engagement’ exercises and ‘supplier days’ that leave me depressed at the poor state of technology buying in government.
If you ask the usual suspects and your incumbent suppliers, you will get the same old shit.
The traditional UK public sector technology market is not The Market that John Stuart Mill promised us. It is a market with high barriers to entry, asymmetrical information, client capture, lock-in and revolving doors between client and supplier.
The way to tackle this is to hire people who know a bit about technology, a bit about how markets operate and have some common sense.
Example: ask the market for a complete end-to-end police force crime record management service and you will limit yourself to a handful of expensive suppliers. Ask them for a police crime record management system, and there may be a few more.
But consuming the right cloud services, hiring a great software engineering firm and building on what’s open, that’s a different story.
There are bits of government that understand how to play The Market with real strategy. Simon Wardley calls it situational awareness. Studying the players, exploiting opportunities, understanding how things evolve. That’s what GOV.UK Verify and GOV.UK Pay are doing and it’s brilliant.
Nominative determinism
As in: “we’re going to replace this hunk of shit with a new hunk of shit”
Government loves acronyms. Especially acronyms for big projects. Especially ones that sound a bit scary. Or ones that end in IMS (usually standing for Information Management System).
They have a habit of sticking around. Years later, that HERCULES moniker is being used to describe the project (which is probably still rumbling on), the service being delivered, the system underpinning it, the big contract we’re locked into and the team supporting it. Things are all a little confused, and it’s hard to untangle the mess.
The bit where you should really start to worry is when the initiative to get out of this mess gets called something like HERCULES2 or the ‘HERCULES Re-Procurement Programme’ (which would probably spawn the HRPP acronym). If this happens, you know you’ve lost already. You may as well just hand big bags of cash to your incumbent supplier and go home.
If you’ve heard the theory of nominative determinism, you’ll understand why. The name of a thing has a nasty habit of determining the character of it.
It is tempting to name your project after the thing you’re trying to replace. But unless you want to deliver exactly the same thing on a newer set of technology, avoid this at all costs.
Start at the beginning. Understand your current situation, but don’t let it cloud your judgement.
Remember Conway’s Law:
organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations
Construct your projects around the kind of services you want to deliver in the future, not your legacy systems. And please give it a name that reflects this ambition, too.
As in: “we’ve signed this monolithic contract with a single supplier because we want one-throat-to-choke when it inevitably goes wrong”
As in: “we can’t cope with managing more than one contract… we want one-throat-to-choke”
As James Findlay told me once, you may have them by the throat, but they have you by the balls.
Clear accountability is essential, but not at the expense of flexibility and collaboration. Too often, still, contract design becomes an exercise in pre-empting inevitable disputes, each side trying to cover their arses.
Time and time again, government has tried to absolve itself of responsibility for delivery of difficult things. But who ends up in the papers when it fails — the suppliers or the government? Invariably it is the taxpayer who picks up the tab. And I needn’t recount the number of times government has found itself unable to extract its people, processes and systems from monopoly suppliers.
It’s not just in the IT world that government is waking up to this:
It has rarely proved possible to transfer effectively the contractual responsibility for the delivery of major capital programmes to a single private sector entity working in a ‘prime’ role. The private sector has often ultimately been unwilling or unable to take on this level of risk.
Does this mean insource everything? No. Does this mean there is no place for shared risk and reward in contract design? No.
But for as long as the public looks to government to deliver services, the buck stops with government. Trying to transfer risk wholesale to the private sector simply doesn’t work.
In the old days, IT in some government organisations swung between two patterns:
- ‘Best of breed’: a large array of systems glued together with lots of point-to-point integrations, data replications and components like Enterprise Service Buses.
- One system to rule them all: a single, ‘integrated’ big system — usually an ERP or variant — serving lots of different needs across a big organisation.
Needless to say, there were lots of problems with these patterns.
‘Best of breed’ was usually anything but — more often a collection of crap line-of-business applications. The systems were rarely integrated, leading to a mess of data, duplication and terrible UX. But at least the Widget Team had the illusion of control and choice of system.
The alternative was worse: one-size-fits-all, monolithic death stars that rarely met their promise of efficiency and integration. By the time you manage to implement the system (eventually) across your organisation, you’ll have blown the budget and realised you are locked in to a world of expensive consultants, licences and change costs.
Then along came the platform. At long last, we could have our cake and eat it. We could reap the benefits of re-use and data integration, yet still retain the flexibility to meet a diverse set of needs across an organisation.
After all, a platform beats a product every time. Right?
Except that ‘platform’ is just a sales term. It can mean everything from a rebadged ERP, to a big customisable CRM like Salesforce, to an open PaaS like Heroku. Some of these are duds; some may be useful.
And then government started calling things it was building itself ‘platforms’ and the whole thing got even more meaningless.
Some of these were brokers for utility services; others were application services; others were collections of hosting and application services strung together to support multiple digital services.
Very few of these were platforms with ecosystems, as in the Airbnb or iOS/App Store sense of the word. Mark Thompson has written about this.
Platforms are no magic bullet. You still need a strategy to balance lock-in, agility and re-use.
As in: MyAccount
Carrie Bishop put it well:
“I really wish I had one place where I can see all my transactions with the council”, said nobody, ever.
Portals make sense in a world that revolves around your organisation, where users behave in ways that you’d like them to online.
It’s a world where your organisation runs on big trusted systems hosting lots of data.
It’s a world you will let your users peer into by drilling some small holes into those systems — literally portals into another world.
It’s often well meaning — ‘let’s give users access’, ‘let’s help them self-serve and save our organisation money’.
But — outside of government — the portal era ended long ago. Turns out the world, and the world wide web, is a lot messier than a portal can ever account for.
The truth is:
- People don’t care about your organisation as much as you think they do.
- Search rules. Homepages — personalised or not — do not.
- Your assumptions about the needs of your users are probably wrong.
- People care far more about the services you provide than your organisation as a whole.
- The benefit of logging into an online account rarely outweighs the cost to the user, especially for one-off or semi-regular transactions.
- There are better ways of providing digital services than portals. Try harder.
- If you can possibly avoid giving people another password to remember, please do.
Portals are not built for this messy world. Any monolithic approach will struggle to cope with the variety of services a typical government body provides.
The main problem with portals is that they are usually inside-out, back-to-front: a ‘front door’ on top of your internal systems, processes and language. Much better to take an outside-in, front-to-back approach: users first, service design second, systems and organisations last.
As in: “can you provide your roadmap in MS Project format?”
Wow. Now here’s a tech term that’s been adopted and abused by HM Government.
In the digital world, a roadmap is a useful tool to show where your product is going, build consensus and point teams in the right direction. Like on real road maps, there are different paths products can take.
Yet in government, there are some very dubious ‘roadmaps’ knocking around.
Roadmaps with some very specific — and entirely fantastical — dates and deliverables on them.
Other roadmaps that smell very much like Gantt charts: 100s of lines nobody will ever digest and a missed opportunity to rally everyone round a simple articulation of where your product/service is going.
And some roadmaps that, err, aren’t really roadmaps at all. More like policy announcements. But ‘roadmap’ sounds cooler, right?
Also watch out for: any claims that a piece of tech will enable your organisation to avoid dependency on software developers
It’s tempting to think that we can rid ourselves of the complexity and expense of writing software.
Wouldn’t it be nice if we (as in: we the non-techie people) could change how our systems work without having to resort to hiring developers?
Enter — The Rules Engine:
Often the central pitch for a rules engine is that it will allow the business people to specify the rules themselves, so they can build the rules without involving programmers. As so often, this can sound plausible but rarely works out in practice.
That’s not to say there’s no value in policy as code, as Richard puts it. It would be fantastic to expose the policy, rules and processes of government in this way — readable and testable.
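To make that concrete, here is a minimal sketch of what policy as code could look like. The rule itself — a fictional payment with an invented age threshold and residency requirement — is entirely hypothetical, made up for illustration; the point is only that a rule written this way is readable by a policy person and testable by anyone.

```python
from datetime import date

# Hypothetical policy parameters -- invented for illustration,
# not a real piece of government policy.
QUALIFYING_AGE = 66
RESIDENCY_WEEKS_REQUIRED = 52

def is_eligible(date_of_birth: date, weeks_resident: int, on: date) -> bool:
    """Return True if a person qualifies for the (fictional) payment on a given date."""
    # Age in whole years on the assessment date.
    age = on.year - date_of_birth.year - (
        (on.month, on.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= QUALIFYING_AGE and weeks_resident >= RESIDENCY_WEEKS_REQUIRED

# The rule is open to inspection and to automated testing:
assert is_eligible(date(1950, 1, 1), weeks_resident=52, on=date(2016, 6, 1))
assert not is_eligible(date(1990, 1, 1), weeks_resident=52, on=date(2016, 6, 1))
```

Rules like this can live in version control, be reviewed like any other change to policy, and be exercised by a test suite — which is rather different from burying them in a proprietary rules engine.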
But that’s not usually the motivation I see when organisations say they want a rules engine. What they really mean is that they want the software developers to build them a magic box that they can control themselves and then bugger off.
In the long run, that’s neither achievable nor desirable. Because:
- The rules are just one part of what should be an evolving system. Keeping the rules alive but freezing everything else is a sure-fire way to end up with a system that breaks.
- This being government, the rules end up being super-complicated. If you have a system that allows you to change and add rules at will, you’ll end up in a mess. A smart programmer could help design out some of this complexity, perhaps breaking down your one big rules engine into a few different services.
Lesson: don’t kid yourself. Every organisation runs on software now. Stop trying to deny it by adding another layer on top and hire yourself a developer or two.
As in: “we’re drowning in all these system changes at the moment… but we’re aiming to get to a steady state by next year”
Welcome to fantasy land.
Yes, no doubt all these changes are causing you a lot of pain. That’s because your systems were probably not built to cope with all this change.
But instead of putting in systems that can cope with change, government too often goes the other way and tries to crack down on change.
So we end up with long release cycles, bureaucratic procedures and change requests.
And we aspire for the promised land — where we can get to a ‘steady state’ where our software is ‘done’ and we can move on to other things.
It’s a doomed strategy. It’s never ‘done’.
The rate of change is increasing, and the level of certainty about what government does and how it does it is decreasing.
Change is one of the few certainties we have, and we have to build services that can cope with this as a matter of fact.
Target Operating Model
As in: “we’re pleased to announce our Target Operating Model will be for each team to produce a new Target Operating Model on a weekly basis”
When will it end? How many more consultant man-hours must be wasted on creating these fantasy diagrams?
Target Operating Models (TOMs) are not unique to government, or IT, but they seem to have never-ending popularity within government IT.
The laudable aim of a TOM is to outline:
the desired state of the operations of a business.
The TOM is supposed to set out how an organisation’s operations will change.
It’s usually a tool reached for when a newly arrived CIO realises that things aren’t working as well as they should. The processes are a mess, the structures are siloed and confusing — we need to change all of this in order to run a better organisation/deliver better systems!
The only problem is that the TOM is an exceptionally shitty tool for improving the way your organisation works.
- It ignores the culture and human factors of change.
- It over-emphasises structures and processes.
- It usually ignores incremental improvement in favour of ‘big bang’ organisational change.
- It usually goes into way too much detail before you know the shape of your future services — perhaps before you’ve written a single line of code, or spoken to a single user.
Worst of all, TOMs are a fantasy, sold on the lie of an ‘end state’ that doesn’t exist.
Yes, some thought needs to be put into how best to run services. But there’s no need to turn this into an industry.
Matt has written some better words on this here.
As in: “we’ve disaggregated our single monolithic IT contract into several monolithic towers contracts”
There was a trend in government IT — a couple of years ago — towards something called the ‘towers model’. It emerged around the time it became clear that the prime contract model was on its way out, but the buyers and suppliers of government IT didn’t have much clue about what to do instead.
The towers model was disaggregation for people who didn’t like disaggregation.
It was a carve up. Literally. All the towers model did was split out one big contract into several slightly smaller contracts without doing any real transformation. As Alex wrote:
It combines outsourcing with multi-sourcing but loses the benefits of either.
The ‘towers’ were the vertical bits — the system/software/line of business silos, deepened by throwing a different contract around each one.
The horizontal contracts at the bottom and top were usually hosting and Service Integration and Management (aka the bit government should be doing mostly itself).
In practice, government ended up with a bunch of expensively-written contracts (usually with the incumbent suppliers, or their subcontractors), lots of finger pointing and little internal capability to make it all work.
Government was left with the same old technology, the same old ways of working and usually the same old suppliers. Nobody else would be willing to step into this mess — not without pricing in enough risk to price themselves out of the running.
The towers model was — and is — a safety blanket for government IT. It’s time to let go.
As in: “we’re looking for a turn-key solution”
This is the point at which government gives up completely, bends over and lets suppliers do as they please.
Turn-key solutions in government have come to mean either custom or ready made systems provided by suppliers where the customer merely has to ‘turn the key’ and it all magically works.
You guessed it, it’s never as simple as that.
Of course there’s the usual pitfalls of this kind of approach (see Enterprise, Buy, COTS).
But there’s something more insidious about ‘turn-key’.
First, it implies that you can just sit back and wait for the perfect system to arrive. No need for customer-supplier collaboration. You’re just the person with the key, waiting for that lock. You’ve written your specifications — all 200 pages of them — what could go wrong?
Second, these are usually turn-keys to dead locks: turn-key systems don’t tend to evolve. Your turn-key solution may fit your needs today, but is unlikely to do the same when you turn a slightly different key in a year’s time.
Finally, there’s a price to be paid for turn-key. That’s because turn-key is another word for black box.
Turn-key means that you — as the customer — don’t care how it is done (“don’t ruin the magic trick for me!”), you just want it to work. You specify how the system needs to work when you put the key in it.
The penalty you pay for not taking an interest in what goes on inside the box is usually a significant markup (aka Idiot Tax) and a high potential for lock-in.
If you’re selling ‘turn-key solutions’, shame on you. You know it’s never that easy.
If you’re buying them, more fool you.
As in: “95% of our users said they want an integrated online account”
As in: “we’ll ask our user representative at the next programme board”
Let’s say you’ve heard about this user research thing. Let’s say you pay lip service. Heck, let’s even say you believe it’s a Good Thing.
But then you go and do something stupid like…
- Treat user research as a time-limited, sequential activity that precedes delivery, rather than something that you do every step of the way;
- Pretend that stakeholder engagement or user groups are user research;
- Commission market research surveys and pass them off as user research;
- Say that we don’t have the time/money to do user research;
- Argue about sample sizes;
- Manipulate the findings to justify what you wanted to do in the first place.
There are many user research sins. These are but a few.
There’s a feeling amongst some in government that user needs must somehow be balanced against ‘business needs’ or ‘policy needs’. What I think this reveals is that some people in government feel this whole ‘user needs’ thing has gone too far. That it’s time to rein it back in; that users are getting too big a bite of the cherry, don’t know what’s good for them, and will milk public services if we don’t step in.
There’s some pretty rotten assumptions hidden (some not so hidden) in here. It’s never said explicitly, but what’s implied is that the public are variously stupid, greedy and subservient to the needs of government. We assume the worst in humanity. As Alex put it:
This will to punish sits at the heart of a large number of interactions between government and the governed. We don’t trust you. You’re asking a stupid question. Why do you want that?
Doesn’t sound like public service to me.