Don’t pay twice for government IT!
Last year Haley Van Dyck gave a TED talk about the state and progress of digital government in the US. So far I have only read the blog post and seen a lot of positive reactions on social media. It seems she hit all the right notes around digital government transformation, and I look forward to watching the full talk.
I am not sure Haley made this remark explicitly, but I imagine she implied that citizens pay twice when government IT does not deliver: first in wasted public funds, second in poor public services. At least that is what I have come to see over the years analyzing digital government performance.
Haley used some numbers to illustrate the magnitude of the problem the US government faces. She said that:
• $86 billion is spent a year on federal IT projects
• 94% of federal IT projects are over budget and behind schedule
• 40% of them never see the light of day — they’re scrapped or abandoned
In this post I want to compare the US numbers to some international data in order to show how digital government pains in the US are similar to those of other countries. Much of what follows is based on my work at the OECD, where one of my tasks was to improve the measurement and comparability of digital government performance across countries. (Note that the European Commission has been producing great data on “e-government” with a major focus on user take-up and satisfaction; our work focused on collecting and analyzing data from the administrations themselves.)
A problem of quantity: Government spending on IT
Despite evident methodological hurdles, you can get interesting data and draw comparisons between government IT spending levels. The US federal government spent $75 billion on IT in 2011, which makes it by far the biggest spender in absolute terms across 34 OECD countries. Next in line: the United Kingdom ($10 billion at the national level), followed by Australia, Canada and France ($4 to $5 billion each).
These absolute numbers are big. And they are important, no question. But how do they compare to government spending overall? In a relative comparison the US government finds itself next to Canada — both spent around 1.9% of their 2011 government budgets on IT — and is topped only by New Zealand, whose national government spent over 2%. In fact, quite a few central (read federal/national) governments dedicate between 1% and 2% of their annual expenditures to IT investments and operations.
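These relative shares are simple back-of-the-envelope divisions. A minimal sketch in Python, using the $75 billion IT figure from the text; the total budget value is an assumption for illustration, back-derived from the cited ~1.9% share:

```python
def it_share(it_spend_busd: float, total_budget_busd: float) -> float:
    """IT spend as a percentage of the total government budget."""
    return 100 * it_spend_busd / total_budget_busd

# 2011 US federal IT spend from the text: $75 billion.
# The ~$3.9 trillion total budget is an assumed illustrative figure.
print(f"{it_share(75, 3900):.1f}%")  # prints "1.9%"
```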
A problem of quality, 1: Execution of government IT spending
Quantity is obviously just one side of the story. This is why the blog post on Haley’s TED talk also provides information about the quality of those billions spent on government IT. She says that 94% of federal government IT projects are over budget or behind schedule. And 40% are abandoned at some point.
These numbers seem mind-boggling — how is it that the government of the country that is home to Silicon Valley and dozens of global technology leaders cannot keep its own IT projects on track and on time? And yet other countries face very similar challenges, many of which stem from decades of “big IT” project thinking in the public sector. The problem is that bigger does not necessarily mean better. Research in fact suggests the opposite: the bigger a government IT project in terms of budget and duration, the higher its propensity to go over budget and over time, and to under-deliver.
So how many “big IT” projects do governments actually run? Across a sample of 23 national governments we found 579 projects worth over $10 million each; almost half of those projects had a planned project duration of three years or more. The number of projects would be much higher — possibly double — if we had been able to get comparable data on the US. But even without the US there are a few national governments that display a tendency to go really big on IT: Mexico (which ran 120 “big IT” projects in 2014), New Zealand (82), Japan (70), Australia (52).
The blog post on Haley’s talk does not mention supplier concentration, although this is directly linked to the prevalence of “big IT” in government. In the United Kingdom, 80% of government IT projects in 2011 were executed by only 18 suppliers. Limited competition among a handful of suppliers creates stronger incentives for customizing off-the-shelf solutions than for designing from scratch based on the real needs of the immediate client (government) and the final client (citizen). It’s not all about the suppliers, either. Many government institutions simply lack the policies and capacities to procure or build IT solutions that go beyond the standard.
A problem of quality, 2: Poor spending leads to poor services
Unwise spending of public money is a problem in itself. But citizens bear the bill a second time because bad spending on IT leads to poor quality of public service delivery. Haley illustrates this with another impressive number: “137, the average number of days that a [US Army] veteran has to wait to have their health benefits processed”. Counted in working days, that is more than 27 weeks, or roughly half a year.
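The week figure follows if the 137 days are read as working days; a quick sanity check, where the five-day week is the only assumption:

```python
WORKDAYS_PER_WEEK = 5  # assumption: a standard five-day working week

def workdays_to_weeks(days: float) -> float:
    """Convert a count of working days into five-day weeks."""
    return days / WORKDAYS_PER_WEEK

# 137 working days waiting for benefits, as cited in the talk
print(round(workdays_to_weeks(137), 1))  # prints 27.4
```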
Many delays in government service delivery occur not because the underlying processes are complex per se. Most often the process is simply designed and executed in unreasonably complex ways. It still strikes me every time I come across an institution that exchanges data with another institution by printing and shipping paper files, which are then scanned and processed digitally again at the receiver’s end.
Governments are realising that processing information in 20th-century ways negatively affects society in ways that should not happen in the 21st century. Take Finland, a long-time technology leader, which now feels the consequences of decades of incremental development of its taxation IT systems and data exchanges. In some cases it can take years between an amendment to tax legislation and its full implementation in the government’s interwoven information systems (PDF, pp. 210 & 220). This means the government loses the capacity to react swiftly to economic and societal challenges.
During the same project I learned about a paper-based data exchange between Finland and neighbouring Estonia. Many Estonians and Finns live, work or run businesses across the border, and this results in administrative data exchanges in areas like social security. Over the course of one year, more than 9,000 so-called A1 social security forms are printed and sent from one Estonian government agency to its Finnish counterpart, which then scans, stores and processes these files in its own information system (PDF, pp. 250–251). The result: it takes an average of four months to validate an individual’s social security claim if interaction across the border is needed.
Let’s take a last example from the judicial branch of the state. The median “disposition time” (the time it takes to resolve a civil or commercial litigious case in first instance) across European OECD countries is 181 working days. That translates into less than one year, and sounds about right for a court case, where most of us would want diligent and meticulous work to be done.
But how much diligence and meticulousness are citizens prepared to accept when a court case takes an average of 590 days, over two years, to resolve? That is what happens in Italy, which tails the ranking. Italy is followed by several countries that have average disposition times of one year or more: Greece (469 days), Slovak Republic (437 days), Slovenia (437 days), Portugal (369 days), Finland (325 days), France (311 days) and Spain (264 days). It’s not entirely surprising that, with the exception of Finland, those countries’ citizens have little confidence in their judicial systems.
Huge delays and low confidence in public service delivery are not all down to poor IT. But legacy processes and systems do play a major role. Maybe Haley talked about the challenges posed by legacy systems. In any case, the US federal government’s draft budget for 2017 proposes to create “a $3.1 billion revolving fund to retire antiquated IT systems and transition to new, more secure, efficient, modern IT systems” (p. 76). Re-engineering legacy systems and processes can go a long way toward alleviating the pains citizens feel due to delays, uncertainty and errors in public service delivery, and toward rebuilding confidence in government’s ability to deliver quality services.
Problems for sure, but solutions too
To recap, some of the major obstacles governments face today when modernizing their IT footprints are similar around the world: the tendency to run “big IT” projects that turn uncontrollable, a lack of interoperability between information systems, and a long list of legacy processes and systems. As a consequence, large amounts of public money are not spent wisely and many public services are poorly designed and delivered. This is what I mean when I say citizens pay twice for bad government IT.
What are governments doing to address these issues?
For one, a consensus is growing that we should move away from “big IT”. Today’s services need to be built in agile ways that favour “chunking” of projects, rapid prototyping and early user testing, iterative development and continuous deployment. That’s not easy for government agencies used to drawing up large tenders, signing multi-year and multi-million dollar contracts, and waiting for the “big bang” delivery day in a distant future. Luckily, agencies like USDS and 18F in the United States, GDS in the United Kingdom, or the SGMAP in France are catalysts of the necessary change not just within their own countries, but also internationally.
Information systems and public services can no longer be regarded as insular and singular. Interoperability is on every public sector reformer’s agenda. It needs to be on politicians’ agendas too. Otherwise administrators will continue to create the bilateral agreements and point-to-point solutions for data exchange that have led to today’s entangled information system landscapes (I liked the expression “spaghetti systems” that a government CIO used in a conversation). Estonia decided early on to future-proof its government information systems by defining common standards and interfaces for data sharing between institutions (see this appreciation of the X-road by UK GDS).
A last point: governments are addressing legacy systems. Infrastructures and systems built decades ago are rarely able to cope with today’s, let alone tomorrow’s, requirements. I will mention Estonia again because its government does not stop at future-proofing its data exchange infrastructure. It goes a step further with a “no legacy” policy whereby government information systems should be re-designed or replaced after 13 years of service (PDF, p. 206).
A lot of genuine transformation is happening all over the world. So I am positive we are on the right path towards cutting the bill for government IT in half — meaning that citizens will one day soon pay only once (IT spending by government) and then reap the benefits (good-quality public services).