A three-part series on the mindset, methods, and skills designers will need as we move into the era of creating human-machine relationships.
Part 2: Methods
Now that you have that big vision in mind, I’m going to flip you back to reality. But before we take a look at how AI is being used today, let’s clarify what we’re talking about first.
Artificial Intelligence is…
The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Machine Learning is a subset of AI.
Augmented Intelligence is…
What AI does for humans — makes us smarter, faster, more informed, more focused on the jobs we need to do.
I point this out up front because Augmented Intelligence is what we mean when we say “AI” at IBM — the ways in which we are making our clients better at what they do—and it describes 99% of the AI you see in the world today.
What AI looks like today, and for the next 5–10 years
Rather than break it up into use cases or technologies, I find it more helpful to describe the types of situations that AI is good at augmenting today.
1. When things are super personalized…
Netflix is using machine learning to dynamically personalize its experience by tracking user behaviors, finding patterns and preferences, and using that data to serve more relevant content — for example, artwork chosen to appeal to that particular user.
Artwork Personalization at Netflix
Artwork is the first instance of personalizing not just what we recommend but also how we recommend.
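The idea behind personalized artwork can be sketched in a few lines: score each artwork variant by how well its tags match the genres a user actually watches, and show the best match. Netflix's real system is far more sophisticated (it uses contextual bandits); the variant names, tags, and user data below are invented purely for illustration.

```python
# Toy sketch of artwork personalization: pick the variant whose tags
# best overlap with the user's viewing history. All data is hypothetical.

def pick_artwork(user_genres, artwork_variants):
    """Return the variant whose tags best match the user's watched genres."""
    def score(variant):
        return len(set(variant["tags"]) & set(user_genres))
    return max(artwork_variants, key=score)

variants = [
    {"id": "romance_closeup", "tags": ["romance", "drama"]},
    {"id": "action_explosion", "tags": ["action", "thriller"]},
]
user_history = ["thriller", "horror", "action"]
print(pick_artwork(user_history, variants)["id"])  # → action_explosion
```

The real system also has to weigh exploration (trying new artwork to learn what works) against exploitation (showing what it already knows performs well), which is exactly the trade-off bandit algorithms handle.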
2. When systems are big and complex…
Manufacturing is a great example of this. Think of how many steps it takes for an automated production line to make a car. Each of those steps generates data points that can be monitored and used to notify people when something goes wrong — in the form of an outlier in the typical data patterns. Catching these problems early, or predicting them before they happen, can lead to big savings.
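The core of this kind of monitoring is simple outlier detection: learn what "typical" looks like, then flag readings that stray too far from it. Here's a minimal sketch using a z-score test; the sensor values and threshold are hypothetical, and a production system would use far more robust methods.

```python
# Minimal sketch of the outlier detection described above: flag any reading
# more than z_threshold standard deviations from the mean.
# Sensor values are invented for illustration.
from statistics import mean, stdev

def find_outliers(readings, z_threshold=2.5):
    """Return readings that deviate sharply from the typical pattern."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [r for r in readings if abs(r - mu) / sigma > z_threshold]

# e.g. torque values from one station on the line, with one anomaly
torque = [41.9, 42.1, 42.0, 41.8, 42.2, 42.0, 97.5, 42.1, 41.9, 42.0]
print(find_outliers(torque))  # → [97.5]
```

Note that a single large spike inflates the standard deviation and can mask itself, which is one reason real systems often prefer median-based statistics or learned models of normal behavior.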
3. When a task can be described in a series of consistent steps…
If you can describe to a computer the tasks it needs to perform to complete a job, you can probably create a machine learning model that will allow the system to do that job for you. Adobe is applying that thinking to their products, adding a conversational assistant to perform common Photoshop tasks for their users.
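At its simplest, a conversational assistant like this maps a user's utterance to one of a fixed set of tasks (intents), then executes the matching steps. The sketch below shows the intent-matching half with naive keyword overlap; the intents and phrases are hypothetical, not Adobe's actual system, and real assistants use trained classifiers rather than keyword sets.

```python
# Toy sketch of intent matching for a conversational assistant.
# Intents and keywords are hypothetical examples, not a real product's.

INTENTS = {
    "crop_image": {"crop", "trim", "cut"},
    "remove_background": {"remove", "erase", "background"},
    "adjust_brightness": {"brighten", "darken", "brightness", "exposure"},
}

def match_intent(utterance):
    """Return the intent whose keywords overlap most with the utterance."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(match_intent("please crop this photo"))   # → crop_image
print(match_intent("remove the background"))    # → remove_background
```

Once the intent is identified, the "series of consistent steps" the article describes is just the function the system runs for that intent.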
4. When imagery can be mined or mapped…
Images and video are rich with data. Teaching a system to recognize images, sounds, and patterns takes the deep neural networks of deep learning, but the results are unlike anything we've been able to do before. I share this example not because you see it used often, but to demonstrate just how far it can be taken:
5. When circumstances can be predicted…
If we know that a situation happens the same way over and over again, it's probably a moment we can teach AI to augment. We run into problems when we try to use AI in situations with so many variables that there's simply no way to account for every potential outcome.
Driverless vehicles are a great example of this. For the most part, we know what actions people take in a car — stop at the stop sign. Stay between the lines. Check your blind spot before changing lanes. There's a lot we can account for, but the minute conditions change — weather, construction, something falling into the road — we start to run into problems. This is why you don't see driverless vehicles on the roads yet.
6. When individual pieces of machine learning add up to larger AI experiences…
Most AI that we experience today is happening in the background where we don't even realize it. It's mainly machine learning being used to improve IT systems, track patterns, sort big data, and alert system administrators to outliers (what you'll hear referred to as "providing insights"). That's because right now AI is good at doing small things: recognizing words or images, interacting using conversational, natural language, and optimizing processes to expedite outcomes.
AI designers and developers are looking for ways to combine these individual skills into jaw-dropping new user experiences. Personal assistants are a good example of how much we want AI to be amazing, how hard people are trying, and how limited we are — for the time being. This video is something Mark Zuckerberg made for fun. This is NOT a new Facebook product. I cite it here because it's a well-produced example of tying together basic AI services. The demo uses a dialog builder (such as Watson Assistant), visual recognition (machine learning), and IoT connectivity.
Overall, there's one easy question you can ask yourself if you're trying to figure out whether you can use AI in your product:
Do you have the data and/or connections to provide better insights, with more confidence, faster than humanly possible?
If yes, then you probably have a good use case to pursue.
Designing for AI scenarios
First off, designers need to know one very important thing about AI before they even get started, and it's this:
AI is just another tool.
It’s not an unfathomable science or a newly discovered color that only certain people can see. It’s just like the toolbar in Photoshop in that it gives us some new possibilities to work with.
To get good at wielding this new tool and create solid solutions for any AI scenario, designers will need to become experts at two things:
- Classic Design Thinking
- Design Thinking for AI
We've been working at IBM on a few new exercises designers can add to their Design Thinking toolkit to strategically create user-focused, intentional AI experiences.
We started this process by understanding what needs to happen inside the AI’s mind in order to think, reason, understand, learn and communicate in a human-like way. This research led to what we call the Human-to-Machine Communication Model. You can read all about it on IBM Developer Works if you’re interested in the nuts and bolts of Design Thinking for AI.
This model led to an exercise we use to create an "AI Toolkit." The purpose is to start with user intents, then identify all the potential elements the system could use to deliver cognitive outcomes.
The result of this exercise is an AI Hypothesis for each intent (or hill as we call it at IBM) that describes how your intent could be solved using AI. Then we return to classic Design Thinking ideation and journey maps to start envisioning the new experience.
There are all kinds of ways to use the Communication Model and concepts like it to create other valuable exercises for designing AI. The AI Toolkit exercise is just one approach we've found that works for our teams. We're sure to see an increasing number of perspectives on AI design practices published, which will continue to better define the role and skillsets of designers in the future.
Have thoughts or opinions on the role of designers in the future and the types of skills they'll need? I'd love to hear what you're discovering. Please leave your comments below!
Jennifer Sukis is a Watson AI Practices Design Principal at IBM based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.