One great thing about making requests through our personal assistant bot, Large, is that you can multi-task. You won't be told you took too long to make a decision and now have to explain all over again what you're looking for.
But during this waiting period, the assistant may be multi-tasking as well, which makes it harder to keep track of the different conversations and requests within a team and across teams. (As mentioned in a previous post, Large was run by only a handful of people and a baby bot.) Things can get cray.
Amid all the movement between conversations, one part of the funnel we don't want to screw up is where money is involved. Say we forgot to charge something at a specific time. No biggie. We'll just ask now. But consistency builds trust. For the agents behind Large, consistency also increases speed and efficiency.
What if someone makes a request but would like to use the company card on file, which they don't have permission to use automatically? Large will ask the cardholder for purchase approval. Potentially, that means more waiting and more open threads of conversation.
Let’s streamline this
Though I understood some of the pain points, Austin, Jonathan, and our agents had the closest interactions with our customers. We got together and discussed how agents and customers interacted around payments. For some clients, Large had already adapted to how they behaved, so it would be disruptive to take them out of that flow. We needed to automate checkout while keeping it flexible enough to adapt to different situations.
We did the white-board thing and ran over scenarios. I drafted the workflow in Sketch pretty quickly (though I was recently introduced to other tools that may be better for this). Our white wall was only so big, and I needed a working draft that was expandable and could easily be shared with the team. The flow grew bigger as I thought about the conversation going in different directions. It got bigger still when we knew we could do better. Doing better meant fixing other parts of the conversation that eventually tied back into checkout, the original problem we were trying to solve. We realized we'd gotten too excited. We needed to take a step back and break things up into phases.

Upon discussing with Dave and Charles, engineers on Large, they assured us that we could keep pushing the experience technically. Dave suggested that we let our bot handle most of the user-facing conversations to minimize inconsistencies and errors. The agents would tell the bot what to relay back to the customer. We were ecstatic.
With all of us thinking about the product with our own expertise in mind, we landed here:
The happy path: customer agrees to purchase an item → Large asks which credit card to use → customer chooses a credit card that is pre-approved by the cardholder → Large charges the credit card, places the order, and tells the customer the order is complete and when the item will arrive.
It gets interesting when a customer chooses a credit card that they are not pre-approved to use, like a team card with limited permissions. Large reaches out to the cardholder for approval. If the cardholder does not respond within a set amount of time, Large lets the customer know and asks for another credit card. If this is a lunch order, hangriness may occur.
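The happy path and the approval fallback above boil down to a small decision function. Here's a sketch of that logic; the names (`Card`, `checkout`, the message strings) are my own illustrations, not Large's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the checkout decision, not Large's actual code.
# `approval_response` models the cardholder's reply: True (approved),
# False (declined), or None (no reply within the set amount of time).

@dataclass
class Card:
    holder: str
    approved_users: set = field(default_factory=set)

ORDER_PLACED = "Order placed! I'll let you know when it arrives."
ASK_ANOTHER = "Which other credit card would you like to use?"

def checkout(customer: str, card: Card,
             approval_response: Optional[bool] = None) -> str:
    """Return the message Large would relay back to the customer."""
    if customer == card.holder or customer in card.approved_users:
        return ORDER_PLACED  # pre-approved: the happy path
    if approval_response is True:
        return ORDER_PLACED  # cardholder approved when asked
    if approval_response is False:
        return "The cardholder declined. " + ASK_ANOTHER
    # No response within the time limit: let the customer know and fall back.
    return "I haven't heard back from the cardholder. " + ASK_ANOTHER
```

Modeling the timeout as just another branch keeps the customer-facing conversation moving instead of leaving an open thread hanging on the cardholder.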
What would this conversation look like?
We crafted reusable texts, similar to reusable UI components and modules. The tricky part was writing them so they could apply to different situations seamlessly. Refinement would come from testing the conversation with real customers. Neutral words of acknowledgement seemed like a good start.
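One way to picture those reusable texts is as templates with named slots, the conversational analogue of UI components. The keys and copy below are made up for illustration, not pulled from Large's real copy deck:

```python
# Illustrative reusable message templates with named slots; the keys
# and wording here are assumptions for the sake of the example.
TEMPLATES = {
    "acknowledge": "Got it.",
    "ask_card": "Which credit card would you like to use for {item}?",
    "approval_request": "{customer} would like to use your card for {item}. OK to charge it?",
    "approval_timeout": "I haven't heard back from {holder}. Which other card would you like to use?",
}

def render(key, **slots):
    """Fill a template's named slots so one text fits many situations."""
    return TEMPLATES[key].format(**slots)
```

For example, `render("ask_card", item="lunch")` yields "Which credit card would you like to use for lunch?" while the same template serves any other request.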
I’ve put together a copy deck for collaboration and documentation purposes. Each message was broken into text and attachment, mapping back to the structure of the script we were using.
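That text-plus-attachment structure could be modeled like this; the field names are my own, chosen to mirror the copy deck described above rather than any actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Assumed schema mirroring the copy deck: each message carries plain
# text plus an optional structured attachment (e.g. a credit card picker).

@dataclass
class Attachment:
    kind: str      # e.g. "credit_card_picker"
    payload: dict  # data the attachment renders, such as the card list

@dataclass
class Message:
    text: str
    attachment: Optional[Attachment] = None
```

Splitting text from attachment means the copy deck can document each piece separately while still mapping one-to-one onto the script's structure.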
How we landed on these credit card attachments above is something I can likely share in a future post.
This is based on my experience as a product designer building a personal assistant product with a small, fast-moving team. Surely there are better ways of doing this with more knowledge and resources. Open to thoughts.