Whenever I hear about new developments in the tech world, my initial reaction tends to be enthusiasm. And right now, many new developments in tech tend to be dominated by AI, machine learning and automation.
At its full potential, this technology has the power to assist, delight and improve experiences for people. Yet the larger the groups of people we build for, the harder it gets to anticipate the technology's effects, good or bad.
So when I first joined a data science team at the cloud accounting software company Xero, I was excited for the opportunity, even if I wasn’t completely sure what my role would be. (I remember being asked “Why does data science need a designer?” and not initially having an answer — that was something that didn’t become clear to me until much later on.)
It was with this team, when testing with customers, that we observed how brilliantly such technologies could address customer pain points.
However, we also observed that we needed to respond to all the new problems and new behaviours that the technology could create — ones we didn’t see coming.
It’s just like magic!
For someone who isn’t technical, the perception of machine learning and automation can be that it’s almost magical. We see this in our own customers when something tedious and time-consuming has been automated or anticipated for them. When they describe something as “magic”, it comes from a sense of delight — our product saved them time, and has done the work for them.
But what happens when it gets things wrong?
Automation carries high expectations of speed and accuracy
We know at Xero that it’s a win for our customers when we can automate repetitive accounting tasks, giving them more time to spend on their business. Depending on the customer’s business, this could save them hours every week.
When my team was initially testing automated accounting processes with customers, we left the details of the automation process vague and listened as our customers set their own expectations of what it would be like. We included both ideal (where everything works perfectly) and realistic scenarios (where something is missing or something goes wrong).
One of our first learnings was how difficult it was going to be to match our customers’ expectations. The test participants were excited by the prospect of capturing and transcribing information automatically and assumed that the automated process would do it both quickly and accurately.
Inevitably, these participants were left disappointed and confused when it worked more slowly than they expected. For some, this became a barrier to further use. The break in their expectations left them wondering whether the process itself was broken, or whether they'd be better off entering details manually instead.
Delight when it works, and devastation when it doesn’t
We discovered other reasons why replacing a manual workflow with a fully automated one wasn’t always going to work smoothly. Participants talked about typically keeping a physical paper trail in case there were problems in the future. When moving onto an automated process, they explained that they would need to keep double-checking their paper records until the service proved itself to be reliable and accurate over time.
However, once the participants had used our automated process successfully, we saw higher confidence in the process, and more casual behaviour around double-checking. They quickly trusted (and in some cases relied on) “the system” and its ability to get things right.
With these raised expectations came lowered tolerance and patience for problems or errors. When incorrect data came through, participants initially assumed they had done something wrong. When they eventually realised the error came from the software, this quickly turned to anger and embarrassment. Our customers felt that they had been let down and their trust betrayed. In fact, trust in Xero as an entire product (not just the automated process) was damaged.
Participants described the pain of additional tasks they would need to do if mistakes like this happened — re-checking invoices and calling their clients to check credits, for example. The ramifications of one error spread far and wide.
Our design research lead described the effect as “The Big One” — like Blackpool’s famous roller coaster, when we got automation right, we raised the expectations customers already had of Xero even higher. When we got it wrong, it was an emotional free fall.
People want to be pilots, not just passengers
We often assume that customers’ existing workflows and behaviours can easily be overwritten by the option of a ‘simpler’ process. What we learned instead is that we need to design alongside (and not against) their existing behaviours.
1. The mental model of some workflows extends beyond the task. Automation carries the risk of reducing the visibility customers have over the smaller details of their data. Customers who have a habit of overseeing these details have a better sense of what is happening in their business. Reviewing this data can also prompt them to do other important tasks.
2. We can’t help our customers once they’re outside of the system.
Tax compliance and customer relationships are a business’s bottom line. If something goes wrong in their software, there could be a huge impact with everything it touches, and that’s something we needed to keep in mind.
3. Customers really need to understand the level of control and visibility they have before they are willing to try new processes.
Despite the benefits of automation, they still need a sense of control. The automated process needs to exist to serve them and their needs, without replacing them. They need to be the ones piloting the plane, not just passengers.
“We prefer to think of AI as Augmented Intelligence rather than Artificial Intelligence. Taking a human-centered approach to building relationships between AI and humans compels us to meet humans on their terms, building relationships of trust and respect, and always remembering that intelligent systems must exist in service of humanity, not the other way around.”
— Justin Massa, AI Needs to Earn Our Trust, Just Like Any Human Relationship
Obvious questions are easily forgotten
I was recently talking to a guest at a dinner party. She was a textile technician — her job was to experiment with and develop different fabrics for a sportswear company. She told me a story about a super fabric her team had been developing: an amazing new material that was waterproof, dustproof, wrinkle-resistant, light and extremely durable.
Yet it wasn’t until somewhere in the middle of making this fabric that somebody suddenly thought to ask,
“Hey, is this going to be comfortable?”
Which was when they all realised they were developing a super amazing fabric that had the texture of rough sandpaper.
It seems like an obvious thing to consider, but questions like this are overlooked all the time, in every kind of industry. It’s so easy to get caught up in thinking we’re helping people with our innovative solutions, and then suffer the disappointment when they don’t work for them. What these research sessions taught me is that it’s important to always shift focus back to the customer and their problem, and not get caught up in all the different ways the amazing tech could maybe solve it.
Staying connected to the human using the machine
Technology companies need to keep advancing in order to respond to people’s needs and improve their lives. However, I also think that we are driven to innovate first and ask questions later.
Asking questions is important because improving products may not be as simple as just automating a tedious and painful task in isolation. While we believe we are making things simpler and easier, we may also be unintentionally altering behaviours and introducing new problems for our customers.
It’s also important to establish balanced expectations with our customers. While in some instances customer trust is easily earned, we need to remember that it can also be easily lost if we are careless. Losing it can be devastating to customers, their business, and to our brand as a whole.
As the designers and product teams embedded in the data, we need to keep returning to the customer’s problem. Because we know that machine learning and automation aren’t magic, and they don’t fix people’s problems by making them disappear. Change happens incrementally, and it takes a lot of rigour. And while the data tells us one thing, talking to actual customers gives us the perspectives we need to challenge our enthusiastic assumptions.