How to Be a Better Designer for AI
6 Key Design Takeaways From The AI World Forum Conference
Last week I attended the AI World Forum Conference here in Toronto. I was both sad and excited to see a lack of thought leadership in design throughout the conference. I was sad because well… who likes to be left out? But I was also excited, because this is an incredible opportunity for designers to jump in and help shape our practices when it comes to designing AI-powered products.
I’m not going to write a full report of the conference here. If you’re curious to know who the speakers were and what they talked about, take a peek at my Twitter feed. I captured the highlights when I wasn’t distracted by the incredible Art Deco architecture of the Carlu. What I do want to write about here are my key takeaways on where we, as designers, can proactively contribute:
1. Identify where humans can add value that machines can’t.
Devin Singh from the Hospital for Sick Children talked about his experience as a physician and the amount of time he’s wasted capturing a patient’s history and symptoms instead of spending that time on meaningful conversations with patients and connecting with them on an emotional level. He talked about parents grilling him with questions, trying to understand how he came up with a diagnosis, and how important it was for him to use his human intuition and emotional intelligence to make his patients and their families feel cared for.
What I learned from that story was that as a designer (and as part of my journey mapping exercise) I can’t stop at identifying where machines can solve efficiency problems, do repetitive tasks, and analyze large amounts of data. I also need to be vigilant about where humans add value. It’s exciting to design an AI-powered app and let the machine be the hero of the experience, but we need to make sure we keep users at the centre of our designs, responding to their needs with AI when it makes sense and with humans when it matters most.
2. Support human decision making when designing prediction apps.
Another story from Devin Singh was about how they’ve been testing a model that’s able to predict a child’s cardiac arrest within 4 minutes. That’s incredible! But he asked a very important question after: What next? You design an app, and it can make these amazing predictions, but then what?
Imagine this doctor receiving a prediction on their desktop screen while the child who’s about to have a cardiac arrest is just playing with their toys, showing no signs of an imminent cardiac incident. Should the doctor jump up and give the child some pills? Should they say anything? What are they supposed to do with this prediction?
Many of the areas where AI is able to make predictions are going to create new challenging scenarios for humans that we’re not used to dealing with. As designers, we need to do continuous research, testing, and simulation of these scenarios to make sure we don’t stop at designing the prediction, but that we’re also mindful of what the user will experience on receiving those predictions. Effective and empathetic design work includes thinking through the whole experience, not just the one moment of interaction.
3. Always ask “who’s missing”.
In his opening keynote about building courage, Steve Irvine of Integrate.ai asked everyone at the conference to consider who’s not up on stage, because “everybody who’s going to talk to you today and tomorrow has figured something out. But if you ask yourself who’s not up there, you can be the one who makes a difference.” After the second day, I asked myself who was not up there, and the answer for me was designers. (Obviously!)
But on the second day, Deborah Raji from Google AI talked about the concept of inclusion in ethics. She also encouraged everyone to ask “who’s missing”, but this time she meant in development scenarios, where we consider a use case, choose our datasets, and design a model, so that we can reduce bias in AI’s predictions.
What I think we can do as designers is keep asking this question when researching different user groups and segments. Is anyone missing? Have I considered every user who might be affected by this product, directly or indirectly? What will I be missing by excluding this or that data point, and how will that change the outcomes for my users? It’s our responsibility as designers, when working with AI, to create inclusive experiences with as little bias as possible, which means bringing in a multitude of voices from the outset.
4. Conduct qualitative user research to uncover inauthentic consumer behaviour.
Laura Mingail from Secret Location brought up a really interesting point: consumers are often aware of how their behaviour is being monitored and used for personalization, and a lot of the time this leads to inauthentic behaviour from users. Those interactions feed back into the training of the machine learning models, and users then keep getting recommendations and experiences that they don’t truly desire at heart.
What this means for us as user experience designers is the continued importance of qualitative user research. We need to keep driving toward uncovering the “why” behind user behaviours, and exploring how we might leverage design to build trust or encourage authentic behaviour, to ensure we’re truly creating something of value.
5. Don’t implement NLP blindly.
We kept hearing over and over again that NLP is still an immature technology. There are lots of challenges, from the availability of training data to the labelling of that data to the lack of context and domain knowledge, that make it incredibly difficult for chatbots and NLP services to identify the true intent and meaning of human statements.
So what can designers do about this? We all know the hype around chatbots and voice, and how businesses are intrigued by the promise of a cool, trendy stamp that also creates delight for their customers. But as designers we’re also responsible for making sure that we remain user-centric and don’t design voice interfaces and chatbots just for the sake of “cool”. As Chiel Hendriks of Google Cloud noted about the fundamental value of voice, it “unlocks technology for those who can’t use it.” Unless we’re creating added value for businesses and users, we should not blindly implement these solutions. We can’t lose sight of what users need, and we can’t stop measuring and evaluating the user experience through continuous research and user testing.
6. Don’t ask “what can AI do for users?”, but instead ask “what do users need from AI?”
This seems like an obvious point, but it was incredible to see how many of the discussions and presentations revolved around what AI is capable of as opposed to what humans need.
There is a fundamental issue with framing AI solutions before investigating needs: it goes against the principles of design thinking. As designers, we believe what Charles Kettering, head of research at GM, said: “A problem well-stated is half-solved.” We might feel like we’ve already defined a problem for the use of AI, something along the lines of “humans have shortcomings when it comes to making decisions based on large amounts of data”, but this problem statement is still too generic and doesn’t contain any specific user story or scenario.
The design thinking process allows us to develop an incredible amount of empathy with people, their environment, and their needs, and it allows us to target the root cause of their problems as opposed to a sliced solution that doesn’t fit into their everyday lives.
Much like the example Devin Singh shared about the application that predicted cardiac arrest but didn’t consider the next steps, the problem with starting the design process with solutions, before we’ve investigated the need and the full context, is that it becomes incredibly easy to be blinded by those solutions and lose sight of what else is possible. There might be other technologies (or even non-technological solutions) that would cost less, require less effort, and might even solve a bigger problem than the one we’ve developed tunnel vision for.
It’s clear that UX design and research in the AI industry is not moving at the same pace as the technology itself is advancing, being implemented, and being experimented with in the real world. But that’s always been the case in technology, where design has been an afterthought until it becomes a commodity. There was a time when good UX was a competitive advantage; now the lack of it puts your business at risk. It’s the same story with AI-powered apps, where great user experiences are not yet a commodity. And that makes it an incredible time to be a designer in this space!
Let’s hear from you: any key insights around designing with AI that should be recognized early? Be sure to 👏 and share to help start the conversation.
And if you’re interested in working at the forefront of AI design implementation, take a look at our open roles and apply to join our team today.