The role of truth in designing interfaces to public services
Trade-offs can exist between the user experience and full transparency. What should you prioritise?
One of the most consistent and interesting pieces of service feedback I see comes from users of our repairs service. This is a digital service for council tenants and leaseholders to report a fault with their housing, so that we can come and fix it. Time and time again, service users make a suggestion that would make it more time-consuming for them to request the help they need.
Let me explain. There are many different things that can go wrong in someone’s housing. It might be electrical, it might be plumbing, it might be carpentry. The right person to fix the problem will depend on what type of fault it is. Once we know which skill set is required, our system can automagically check availability and get a tradesperson scheduled in to do the work.
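The routing step described above can be sketched in a few lines. This is purely illustrative: the mapping, function names and fault locations are hypothetical, not the council's actual system, but it shows the shape of the logic — the form captures where in the home the fault is, and a lookup determines which trade's availability to check.

```python
# Hypothetical sketch of the routing step: map the reported fault
# location to the trade whose availability the scheduler then checks.
# The locations and mappings here are invented for illustration.

LOCATION_TO_TRADE = {
    "kitchen sink": "plumbing",
    "radiator": "plumbing",
    "light switch": "electrical",
    "door frame": "carpentry",
}

def trade_for(location: str) -> str:
    """Return the trade responsible for faults at this location,
    falling back to a general inspection if we can't tell."""
    return LOCATION_TO_TRADE.get(location.strip().lower(), "general inspection")

print(trade_for("Radiator"))   # plumbing
print(trade_for("chimney"))    # general inspection
```

Note that the user never describes the fault itself — only its location — which is exactly the design decision discussed below.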
Getting the right person to the right location is what the online ‘Request a housing repair’ form is all about. Of course, there’s a lot more to the repairs service than just the initial request. The responsiveness of the team, the quality of the work, the communication with residents — these are all aspects that make up the full experience of the service. But the request form is the entry point for users.
Know your limits
What we don’t try to do in the online request is fully diagnose the problem. Our user research confirmed that it isn’t always easy for the average citizen to understand what the fault is in their home — that’s part of the reason they’re seeking help from a professional. Even when it’s clearer what might be causing the problem, there’s still a lot of conjecture involved until something can be taken apart and examined closely.
For that reason, and perhaps counter-intuitively, we find there’s rarely much value in a resident saying anything more than which part of their house or flat is showing wonky symptoms. We ask residents for the part of their home where the fault is occurring, so we can determine which expert trade can help. That’s all we need to know about the issue, so that’s all we ask.
Users want to give us more, though. Every week, feedback comes in like this:
“It would be better if there was a section where we could explain a little bit more about the repair needed”
Some are altruistically motivated:
“A box where the tenant can input details about the repair to avoid 2nd appointments which will save money for the council”
“The site is good, but I thought the better the details we could give about the repair, the better it would be for person coming to repair the problem. There is nowhere to state this but perhaps this is ok with you.”
Some are almost artistically motivated:
“There should be the facility to be able to expand by free writing”
“We should be able to type a narrative to describe the problem more specifically”
The question protocol says: only ask a question if you need the information to provide the service. It’s a great rule for designing forms that meet the needs of users, and that avoids the time and cognitive burden of overly bureaucratic exchanges with government. But here we have a situation where a lot of users want us to ask them for more information, when we don’t need it. Should we?
Our instinct is not to waste anyone’s time requesting information that doesn’t help us operate the service. Not to mention: asking the question introduces a potential obstacle, when we’ve already established that it’s difficult for many residents to confidently describe the nature of the fault.
But residents are expressing an unmet need. They worry that they haven’t given us enough information. They worry that they won’t get their hot water or heating fixed as a result of that. Our service deals with the practical needs, but for some people the service doesn’t meet all their emotional needs. And the curious implication of this is that one way to address this need might be to start asking questions for which we don’t pay any attention to the answer.
Arguably this possibility isn’t so unexpected if you’re used to treating users’ emotional needs on a par with their functional needs. Cognition is complex. It does tilt at some interesting joins between service design and policy. The boundary between needs that could be met and needs that should be met is not always clear. Knowing what to expect from a service (‘peace of mind’) is a common need. If a user were genuinely more reassured by a government officer verbally confirming their status than by a computer screen with the same content, how do we choose to respond to that? As Stefan Czerniawski observed some time ago:
“Public sector service providers have tended to be slow to recognise that there is a social as well as transactional component to any exchange with a customer — the customer who feels respected and engaged with as an individual will come away much more positively than the customer who is made to feel like a case file or a nuisance — even if in some objective sense they have received exactly the same service. That has big implications for the management of public services…”
Pay no attention to the man behind the curtain
Foley art in film-making is the process of adding audio to visuals in post-production. One of the reasons it’s used is to augment or replace the actual sound recorded during filming with audio that seems more authentic in the finished product. It helps correct instances where the viewers’ expectations of how a scene “should” sound don’t match reality: an untruth to make something seem truer.
There are many online contexts where a similar divergence occurs. Computers that run faster than seems credible bend our trust that a system is working in an effective way. This is especially the case for things which are perceived as intensive, like running security checks or processing large quantities of data. Design responses include deliberately slowing down authorisation processes to seem more ‘secure’, or making price comparison searches artificially slow, lest the results seem canned rather than real and personalised. Many of our perceptions of value are irrational — if something seems too easy, we infer that it can’t be very good. So the labour illusion demands that visible effort is exerted, to justify the worth of the product or service.
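The labour-illusion pattern described above can be made concrete with a minimal sketch. This is not any particular product's implementation — the function name and timings are assumptions for illustration — but it shows the basic mechanism: run the real work, then pad the elapsed time up to a minimum so fast operations still look appropriately effortful.

```python
import time

def with_minimum_duration(operation, min_seconds: float):
    """Run `operation` and, if it finishes early, sleep for the
    remainder so the total elapsed time is at least `min_seconds`.
    A toy version of the 'labour illusion': the work is unchanged,
    only its perceived duration is padded."""
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        time.sleep(min_seconds - elapsed)
    return result

# A security check that actually completes in microseconds,
# presented as if it took a couple of seconds of 'checking'.
outcome = with_minimum_duration(lambda: "credentials OK", 2.0)
print(outcome)
```

Whether this kind of padding is ever acceptable in a public service is, of course, exactly the question the rest of this piece wrestles with.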
These design illusions are used in the physical world as well, helping people navigate public life. Buttons to make elevator doors close more quickly, or to hasten traffic at a crossing, all offer an indication of user control or influence over their environment. In actuality these buttons have no effect, but serve as a palliative to the injury of modern impatience. You feel like you’ve at least hastened the inevitable. It’s not true, but that sense of control, even when a placebo, reportedly reduces stress and promotes wellbeing.
Perception is reality. An illustrative example of deliberately designing a service to be less efficient to improve the user experience comes from Houston Airport. Management were besieged by customer complaints about lengthy waiting times. Increased resources that brought waits down to industry averages didn’t seem to stop the complaints. At this point they changed tack: rather than trying to improve the speed of their baggage delivery any further, they made the walk to the baggage claim area worse. Passengers now had to walk six times as far to get their luggage. And they loved it. Complaints about wait times virtually disappeared, because the idle time once they’d reached the carousel was considerably shorter. Perceptions of the service were better, despite an identical outcome and a more inconvenient route to it.
With great power comes great responsibility
These examples raise an interesting paradox: that there may be designs that increase trust by being less honest, and designs that increase satisfaction by making the service worse. What are we, people who design public services, to do? We have a remit and a wish to make the lives of citizens as simple and easy as possible. We also have a relationship of trust with the public, hard won and easily lost. There are already many areas of life where we as citizens and consumers lack the ability to exercise proper control over our personal information and how it’s used.
We’ve (rightly) tried to make the user experience a positive one, seeking to make users’ journeys as fast, clear and easy as possible. It’s a well established principle that a citizen shouldn’t have to understand how government fits together in order to access public services. We should be aiming to create seamless experiences, providing users with services that reflect their mental models and tidying away the messier complexities behind the scenes.
There is an important caveat to all that, one to do with trust and democracy: as Richard Pope writes, just because users don’t need to know how government works, doesn’t mean we should obfuscate its workings. Or as Michael Smethurst puts it: “[as a citizen] if you hit a snaggle that doesn’t seem quite right, you should be able to drop out and see where that snaggle originates and at least attempt to change it”.
On a similar note, it’s important to tread carefully with design sleights of hand. Sincere attempts at meeting the full spectrum of users’ needs are essential. Dark patterns have no place in government services, whereas choice architecture that encourages a citizen to act in their own best interests (or the community interest) may be considered reasonable. Getting the mix right between these elements of trust and transparency on the one hand, and convenience and ease on the other, remains a careful balancing act.