Clearing the fog
Using outcomes to focus organisations
Governments and other very large organisations are complicated things, often the product of decades of accumulated bureaucracy, structure, and habits. Those who work within them can feel how all the parts don’t always fit, or quite work, together. At their worst, they have all the structure and process of a hundred different organisations, competing and conflicting, while attempting to operate as one giant.
One problem is that organisations are not usually structured in the best way around their underlying purpose, or the core jobs that they fulfil. It’s rare within a large organisation to hear a commonly understood sense of what exactly the organisation does and why, or to see universal language in use that describes it. In light of this, it’s typical for different parts of the organisation to craft their own narratives and their own sense of what it means to be productive and to deliver good work.
In this article we look at root causes of why large organisations can fail to act effectively when designing, building and running services, and make recommendations for how to improve them. It’s based on years spent working with some of the largest, most change-resistant organisations (both private and public sector), re-positioning them to design and operate radically better products and services. This also builds on the ideas and work of many others, most notably Ayesha Moarif, Ross Dudley and Theo Blackburn.
A difficult problem to solve
There are many very high level indicators of overall success for large organisations — stock price for a listed company, a broad policy area goal (like crime rate or public health statistics) for a public body, as well as often used secondary indicators such as net promoter score or customer satisfaction. The problem with large measures like these is that it can be impossible to draw direct causal connections between most things happening in a large organisation and such broad overarching metrics. This makes them rarely useful in directly influencing any practical decision about what to do.
There are also many more specific (and sometimes competing) things that different parts of an organisation want to achieve. Some of it is by design: different professions hold different values and world-views. Some deeply value excellence of craft within their own field, some focus on keeping work on time and budget, some want to minimise risk, and so on. Individuals can be more or less proactive in relating their day to day work with any broader context or goals, for a variety of reasons.
The existence of each unit, project and team can also be easily justified on its own terms. The presence of self-contained units such as comms, marketing, strategy, operations, security, infrastructure management, IT, procurement, business management or a project management office can be reassuring for some in a traditional context. However, these entities are often working with their own internal agendas and working practices. Organisations that then add digital capabilities to the cast — technology specialists, product management, designers — may recognise that ‘they work in different ways’. But merely introducing whole new sets of people evidently doesn’t compel a large organisation to change its existing ways of working in all overlapping, lateral and vertical areas, especially if the leadership isn’t yet sure what exactly ‘working in different ways’ really means for the organisation. And so organisations continue to accumulate new layers, new teams, new departments, new practices. This can create an atmosphere of conflicting and overlapping goals and interests. It’s no wonder organisations don’t always work with a coherent focus.
All of this results in actual bottom-line problems, many of which are well-documented (and often the source of ongoing pain to the organisations themselves), like:
- “cost-shunting” — such as when central government makes cuts in welfare costs, which creates new burdens on local authorities
- on-time delivery being seen as more important than whether a solution is needed or useful
- services that many people try to use becoming so complicated, confusing and disjointed that users waste time and make easily avoidable mistakes (or even break the law in some government contexts)
- increased costs — which are often significant — caused by problems with a service, resulting in the need to offer additional support (for example through contact centres) to people who wouldn’t ordinarily need it, to deal with people failing to do the right thing in other ways, or to take enforcement action, rather than just designing services better to avoid or minimise these problems
There are plenty of examples. For some organisations, these kinds of structural issues multiply and reinforce each other, to the point where organisations become unable to provide a viable working product or service. The common problem is that the goals of a single part of an organisation can seem sensible in their own context while being harmful to, or missing opportunities to improve, the whole.
What to do
Have a shared definition of success
The first step is to establish shared organisational goals that reflect the outside performance of the organisation instead of its inside structure. Ones that everyone knows, aligns to, and bears in mind alongside the specific aims and goals of their team, unit or profession.
We’ve likely all seen bad examples of high level statements — woolly or hypocritical mission statements, platitudes, or things that aren’t relevant to anyone’s actual job. So it’s important that these shared goals are both specific enough to be useful for decision-making and stated in plain English so they make just as much sense whether you are inside or outside the organisation.
One way to define success precisely enough for the many organisations that mainly provide services is to start by defining these services through their successful outcomes. A successful service outcome should mean that a user had their needs met, while a core business or policy goal was achieved.
This may sound like a simple idea, but it’s generally not how large organisations and governments look at things. Instead, success is defined in all manner of ways other than successful outcomes, such as:
- very broad aggregate figures, e.g. stock price, crime rates, education levels
- things that aren’t really useful for anything in themselves without context or explanation, e.g. number of people doing a particular thing, or ‘satisfaction’ with something
- things that are about how something is done, rather than about good outcomes, e.g. proportion of times something is done digitally, or on a mobile device versus another channel
- things that sound like success metrics that can become perverse incentives, e.g. only thinking of cost cutting rather than fundamentally redesigning root causes (so we cut valuable aspects of a service, shunting costs elsewhere), numbers of newsletter subscribers (so we make it hard to unsubscribe), minimised numbers of support calls (so we make the phone number hard to find, making the overall service worse) — in short, aiming at anything but actual overall success.
As an example of a better way to define success, take the UK passport office. A way to describe its main purpose is to ‘provide a secure way for British citizens to be identified when travelling between countries’. In practice, this boils down to providing a service: getting citizens secure passports, that work where and when they need to, in time for when they need them. This is the central premise by which the entire organisation evaluates itself: the whole organisation has been successful each time that job is successfully done. And to define ‘successful’ means understanding what e.g. ‘a secure passport’ that ‘works’ means to the various interested parties — users, security and fraud experts, border control in the UK and other countries, the travel industry, suppliers — but the ultimate shared goal remains the same.
Something to watch out for is that it’s not uncommon for people in large organisations to assume the whole organisation is doing just fine. They may point to a customer satisfaction survey or commercial success as proof of this. Or there might be units, functions, projects, bodies or forums tasked with ‘making the organisation or portfolio more cohesive’, such as oversight, governance or assurance, which suggests that full or reasonable efforts are being made. And these initiatives may indeed help — but they can also be only partially effective despite their best efforts, or at worst a fig leaf for organisational dysfunction.
The more unexamined the belief that everything is working “just fine”, the greater the likelihood of invisible, unaddressed problems continuing to fester.
Evaluate the whole (to connect the parts)
So far we’ve defined success for the whole organisation in terms of it meeting both user needs and core business/policy goals. Now we can move on to evaluating how well the whole organisation is performing using those outcomes.
For organisations that provide services, here are some useful questions:
- How many people could or should be using this service? (Size of audience / market)
- How many potential users are aware that they can or should do so? (Is there an awareness problem?)
- How many potential users are doing something about it? (Is there an activation problem?)
- How many potential users find their way to the service? (Is there a discovery / routing problem?)
- How many users get through all the stages to successful completion without dropping off on the way for any reason? (How is the service performing?)
- How much time and effort does it take to get to the right outcome?
- How much time and effort is expended by the provider to get each client, user, customer or situation to the right outcome? (Is there an internal efficiency problem?)
- Is performance lower for any group of people for avoidable reasons, such as lack of support for access needs? (Is there a service performance problem that is potentially all the worse for being discriminatory?)
- Where are we seeing confusion, failure, drop-offs, wasted time, mistakes or the need for help or support? Often these accrue at handover points between different organisations or parts of organisations.
- Are there parts that are generally known to be ‘just awful’, but don’t get fixed because they still ‘just about work’? Often bad solutions persist because the impacts of their problems are indirect, hard to measure and felt by someone else. But organisations shouldn’t accept that they operate poor services, even if it’s difficult to quantify the harm in terms of reduced success, inclusivity or satisfaction, increased support costs, drop-offs, reputational or brand damage, reduced advocacy, or other possible negative effects.
And, most fundamentally:
- What would be the best way to get to the successful outcome? Is our current way really the best, simplest way to get there? Are we missing significant opportunities to do things differently? How could it be better, faster, and have fewer points of possible failure?
You can look at the performance — the rate of successful outcomes — for most services as a funnel, with a sequence of possible drop-off points. As a general rule: taking away effort, complexity, and room for confusion in the service gets more people to a successful outcome, which means a better success rate for the whole organisation.
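To make the funnel idea concrete, here is a minimal sketch of this kind of funnel analysis in Python. The stage names and user counts are invented purely for illustration, not drawn from any real service:

```python
# Illustrative only: the stage names and counts below are invented, not real data.
# Each stage records how many users reached that point in the service.
funnel = [
    ("aware of the service", 100_000),
    ("started the process", 62_000),
    ("completed all stages", 48_000),
    ("reached a successful outcome", 45_000),
]

def report(stages):
    """Return per-step drop-off rates and the end-to-end success rate."""
    steps = []
    for (name, count), (_, prev) in zip(stages[1:], stages):
        # Fraction of users lost between the previous stage and this one
        steps.append((name, count, (prev - count) / prev))
    overall = stages[-1][1] / stages[0][1]
    return steps, overall

steps, overall = report(funnel)
for name, count, drop_rate in steps:
    print(f"{name}: {count} users ({drop_rate:.0%} dropped at this step)")
print(f"overall success rate: {overall:.0%}")
```

Viewed this way, each drop-off rate points at one of the questions above (awareness, activation, discovery, completion), and reducing any single drop-off raises the success rate for the whole organisation.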
The key point is that individual parts of the organisation that are delivering or building services don’t typically view things from the whole-organisation or whole-service point of view. They may see their immediate tasks or historic working practices as taking precedence, or they may be under pressure from stakeholders to be ‘heads down’ and focus on the tasks at hand. It is important for leadership and middle management, as well as people working in various parts of an organisation, to proactively choose to focus on the shared outcomes.
A small manifesto
Many of us work with and within imperfect organisational setups. Many leaders recognise the need for radical change in their organisations but aren’t sure where to start.
One compelling argument for how to address this is that government and large organisations should — and can — be radically re-structured around the things they need to achieve and deliver, for example by building new ‘digital institutions’. If service design were not constrained by our organisational divisions and structures, and if we had more shared, stable infrastructure that allowed flexibility of design and speed of implementation, government and other organisations could be more efficient, effective, transparent and significantly cheaper to run. This would mean organisations that are more fit for purpose, that make better use of the possibilities of the internet era, and that are more ready for what’s likely to come.
This level of radical change takes time and is often outside of anyone’s individual control. In the meantime, there are practical things we can do today to set and maintain our organisation’s focus on its fundamental shared goals — and the products and services through which they’re achieved:
- Clearly identify what the organisation does, using ‘view from the outside world’ language and plain English, so that it makes sense to everyone, including users. Example: a UK government department, the Home Office, made a list of services. Not as it knew them previously from an internal perspective, but as end users would know them, with each one underpinned by clear policy goals and intent.
- Consider significant changes made to policy or strategy in terms of the outcomes to achieve and the services, products or interventions through which to achieve them. Separate the strategic intent from the means of achieving it, to avoid hamstringing expert teams and to allow room to learn what works best over time.
- Understand how services are really provided across organisations, unpicking the layers of technology, operational support, shared products and suppliers. Work to make this internal complexity invisible to the user as much as possible and wherever appropriate. Avoid viewing a service from just one perspective (e.g. only technical architecture, or only what users see, or only data flows, or only which organisational actors are involved). Work together with others to do this. Example: step by step overviews of how a service works or will work, so that everyone comes together to figure out how to prioritise improvements.
- Find out how well services are performing. That is to say, whether everyone (in the real population of users) manages to do what they need to do without complication, confusion, error or wasted time — and where problems arise and why. And whether every ‘thing’ behind the scenes (e.g. database, system, interaction, algorithm, protocol, process, procedure, mandate, infrastructure) is taking the right actions, without wasted time or avoidable error, and in a way that doesn’t make humans hate their jobs and human users hate their lives. Remember that just because we don’t usually get to see direct measurements of these things without trying hard, or because some things are hard to measure, doesn’t mean that performance is unknowable.
- Make performance visible across the organisation. Use it to inform decisions about strategy, investment and drives for efficiency or improvement. Stop the kind of discussions that focus only on one abstracted section of a service (e.g. outsourcing a collection of processes, converging technical architecture, implementing a helpdesk system, or debating whether something should be an app or a website) while ignoring how it impacts service outcomes.
- Describe all the work the organisation is doing clearly and simply, and where at all possible using the outside world context. Articulate assumptions about how proposed work could make your organisation better meet its goals, how it improves service(s) and how we are likely to be able to know this. If someone external to your organisation wouldn’t understand the language and concepts you use to describe your project or your business case, the chances are you don’t have enough clarity to manage it well either.
- Invest in new work only on the condition that it is likely to be a good investment in making the organisation successful against its shared, specific goals.
- Manage ongoing investment (i.e. continue to fund, change or stop work) based on whether the work is proving that it’s likely to increase success on the same basis — not just whether it’s on time to deliver certain outputs.
- Prioritise fixable problems that contribute to services underperforming. Use a variety of mechanisms to do this. For example a) continuously funded service teams with a backlog of improvements to make for one or a collection of services, b) specific one-off change or redesign projects to make a defined improvement, c) including service improvements as part of other work, such as shared product or infrastructure changes. Be clear about what should change, why, the evidence for and against it and your assumptions.
- Use shared goals to define success and justify work. Internal teams, including operations, should be able to explain their existence and how they contribute to the shared specific goals, wherever applicable. Stop reporting numbers that really just describe ‘how busy we’ve been’ in favour of showing ‘how we are working towards outcomes in better ways’.
- Use desired outcomes as the basis for (re)designing services. This is a higher cause than serving our existing organisational structures. The best service is usually the shortest and most direct path to the right outcome. Asking ‘how could we achieve this outcome more successfully’ is more constructive than asking ‘how do we innovate’, ‘how do we use artificial intelligence’ or ‘how do we make the experience more compelling’, even if some of those end up happening. Or, coming at it another way, evaluate all the bright ideas to see whether they’ll have a more significant impact on desired outcomes than something else we could do.
If the existing structures of an organisation are not yet elegantly aligned to such top-level shared outcomes and goals, but we still want to work in and with them, we have to set out a clear sense of purpose together. And to maintain this clarity, we have to work doubly hard to avoid being distracted and confused by the noise of organisational complexity, different working cultures, us-versus-them worries, or local incentives — whether you’re a product owner, a front line worker, a board member, a technical architect, a user researcher or from the project management office.
And when you look at it this way — when you keep your sense of purpose clear, and keep relating everything you see back to the shared goals, helping everyone around you to do the same — maybe we’ll find we’re all working together after all, even when organisations feel complex, difficult or reluctant to change, and the products and services we work to provide will get better for everyone as a result.
This article isn’t exhaustive, and we will continue to improve the points discussed. We’d love to hear about other things you’ve tried that may help organisations build better products and services, and we particularly welcome the challenge and critique needed to make this a useful shared resource.