The role of design in collaborative AI

New Cortex
11 min read · Nov 4, 2017

Hello! Joel here — I’m a co-founder and design lead at New Cortex and today we’re going to explore the design side of artificial intelligence, particularly in regards to what we’re working on with collaborative AI.

With collaborative AI, we can design products that are not only empowered to reach the next level of software sophistication but are also able to tackle problems of scale and complexity that are simply impossible today.

Who is this post for?

This post is for anyone concerned with product and software design, regardless of whether you’ve learned anything about AI or design before.

So what is collaborative AI?

The core idea is simple — how do we create more intelligent software, for very complex tasks, that can either:

  • Help the user perform tasks they are technically capable of, but in improved ways (efficiency, accuracy, etc).
  • Help the user perform tasks they are not currently capable of (shrinking a large scale, very complex system down to a small set of actionable options based on that user’s task).

This isn’t to say that a collaboration-worthy AI won’t also perform the general automation duties your users would like it to; we just won’t focus on that aspect initially.

At its simplest, we’re designing for a machine’s ability to understand large systems and data, then giving users access to that understanding.

When I say collaborative AI, I’m not necessarily referring to a bot or natural language interface (“Ok Google, what is [x]”). We’re referring to a larger concept: that with modern computing power and access to large troves of juicy data, we can build software that makes the interaction between a human and a machine less like dealing with a travel agent, and more like collaborating with a co-worker who has the expert skills you need to do your job.

The ultimate outcome of this approach

  • Solving hard problems. We’re now able to solve problems that were previously more difficult than they were worth, and even problems that were impossible, especially as complexity at scale skyrockets in almost every industry.
  • Overnight experts and super-humans. By providing an interface to an expert AI, you can effectively make a beginner into an expert overnight, and an expert, superhuman.

Let’s all get on the same page real quick

Design and AI mean a lot of different things to different people, so we’ll spend a minute defining what exactly they mean for collaborative AI.

Design = Experience Design

  • Experience design (XD) is the practice of designing products, processes, services, events, omnichannel journeys, and environments with a focus placed on the quality of the user experience and culturally relevant solutions.

A core tenet of ours at New Cortex is to persistently apply design and research best practices in order to bring cutting-edge AI research into real products with a deep problem space. To do this, all XD disciplines are on deck, from those known as “screen level” (visual and interaction design, usability testing, etc.) to those that operate at deeper, more fundamental levels (UX strategy and research, service design, persona and journey mapping, etc.).

Mashup of XD deliverables (strategy docs, wireframes, user flows, information architectures…)

XD designers work very closely with multiple groups to ensure what we build is technically viable, scalable, and maintainable; that it’s something our users will fall in love with; and that it will support the business’s goals and Key Performance Indicators (KPIs).

Artificial Intelligence

There are many places to find definitions (and corrections) of what AI is, but in general we’re using it as the broadest umbrella term for machines acting more intelligently. A deeper dive into the technologies under this umbrella, such as machine learning, deep learning, and neural networks, is beyond the scope of this design series, but we’ll reference any meaningful design impacts as we go.

We should also define where AI is today in its capabilities versus, say, sci-fi concepts. If you thought of JARVIS from Iron Man (Tony Stark’s sassy AI companion), then you’d be both correct and incorrect.

  • Correct in that he is the ultimate collaborative AI (unlike Ultron).
  • Incorrect in that he is what’s known as a general intelligence AI, which hasn’t been invented yet. However, we currently can leverage what’s known as narrow artificial intelligence: abilities that are confined to a specific, narrow domain.

Deeper dive into defining collaborative AI

The magic begins when you start working all that data into meaningful changes to the user experience.

Why call it collaborative AI instead of [x]?

We’re choosing the term “collaborative AI” because it is more descriptive. “AI,” by itself, may be construed as AI for automation’s sake or something else. Other, more design-focused terms, such as “agentive technology,” are good but don’t evoke the focus on working together.

Why collaborate with an AI?

Our first product operates in the data science space, where various personas (an oversimplification, but different users with distinct roles) currently perform highly technical work on systems that the human brain just isn’t equipped to fully grasp, particularly if you take into account the sheer scale of modern computer systems and where we are quickly headed.

However, to an AI, this kind of complex technical system is home. It’s naturally comfortable with problems such as multivariable optimization:

  • Imagine 25 different dials to turn, and if you turn one, all 24 other dials move in (seemingly) random ways.
  • You happen to know that there is a perfect combination of these dials’ positions. If you get it right, a trillion dollars will fall out of the sky into your lap. (You’ll be dead from the impact, but …rich dead.)
  • The solution exists; you desperately want it to happen, and if it were 5 dials or so you’d have a shot, but it’s a problem we’re just not equipped for. An AI is.
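To make the dial example concrete, here’s a minimal sketch of it as a black-box optimization problem. The `payout` function and the naive random-search strategy are invented for illustration; a real system would use a far more sophisticated optimizer:

```python
import random

def payout(dials):
    """Hypothetical black-box objective: we can evaluate any
    combination of dial positions, but not reason about it directly."""
    return -sum((d - i / 25.0) ** 2 for i, d in enumerate(dials))

def random_search(n_dials=25, n_trials=10_000, seed=0):
    """Naive search: feasible to mimic by hand at 5 dials,
    hopeless at 25 without machine help."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = [rng.random() for _ in range(n_dials)]
        score = payout(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = random_search()
```

Even this crude strategy evaluates ten thousand combinations in a blink; the point is that searching a 25-dimensional space is a machine-shaped task, not a human-shaped one.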

Why is the human necessary in this problem?

In the above example, the human would have input at the beginning, and modern software is more than capable of solving the rest. (No real collaboration there.) However, the problems we’re interested in for collaborative AI require the human components of intuition, inspiration, and what’s known as the programmatically inexpressible aspects of our users in the problem space. Meaning, we can’t write code to replace them; we need the strengths of both sides.

Isn’t that just more software? Why use the term AI?

In a broad sense we can go with Larry Tesler’s famous quote: “AI is whatever hasn’t been done yet”.

A more detailed answer: when we use the term AI, we’re simply referring to software where no one explicitly designed every aspect of it. There are areas of the software, or steps in the workflow, that are entirely dynamic and driven by your AI model (very possibly in real time, at scale). Let’s refer to this as known steps vs. unknown steps.

Known steps vs. Unknown steps

Imagine launching an extremely complex piece of software to 200 trillion people across the human galactic empire (hey, I can dream about sci-fi UX), and you know the only way the software will succeed is if it’s hyper-tailored to each and every user’s complex situation, history, needs, and so on. No design team, no matter the size, could ever design for that complexity in remotely enough time to matter, especially if required changes to the software are happening faster than solutions can be created. In a few key ways, we are rapidly approaching this scenario, albeit only at tiny Earth scale (I know, but we have to start somewhere).

So, imagine our live product was designed from the ground up around an AI, or multiple specialized AIs, deeply integrated into the product and adjusting its user experience appropriately, in real time and at scale, to improve not just the user experience but the underlying business strategy as well. You’d have a product that is not only empowered to reach the next level of software sophistication but also able to tackle problems of scale and complexity that are simply impossible today.
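The known-vs-unknown split can be sketched in a few lines: some workflow steps are explicitly designed, while one step is driven entirely by a model at runtime. All the names and the stand-in “model” below are illustrative, not from any real product:

```python
def load_data(user):
    # Known step: explicitly designed by the team.
    return {"user": user, "history": [1, 2, 3]}

def recommend(context, model):
    # Unknown step: the output is driven by the trained model,
    # not by anything a designer spelled out in advance.
    return model(context)

def render(options):
    # Known step: fixed presentation logic.
    return [f"Option: {o}" for o in options]

# A toy stand-in "model"; in a real product this would be a trained AI.
toy_model = lambda ctx: sorted(ctx["history"], reverse=True)[:2]

context = load_data("alice")
screen = render(recommend(context, toy_model))
```

The designers own `load_data` and `render`; what appears between them is decided by the model, per user, at runtime.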

This is all great you say, but where’s the rub?

How do you create an AI’s understanding of a specific domain?

To get this level of understanding, we (XD designers + technical engineers) set an AI loose on a large set of data (you need lots of data, most of the time) that is particularly relevant to our problem space, and work with it until it is outputting the desired outcome. This is known as training, and the AI algorithms themselves create an information “model” through this training process. However, how exactly that model of understanding works is more akin to a black box. It’s not something we can open like a Sketch file to understand or modify, so there will be times when the internal team or external users wonder why the AI made a particular decision. This need not be because of an error, or a suspected error, but it ladders up directly to a major theme: trust in a collaboration needs to be earned; it is not granted on day one.

Explainability is the commonly used term for the desire to understand why an AI made the decision it did. The fairly tricky part in all this is that an AI model’s accuracy over time, or in new and unexpected scenarios and contexts, can be unreliable or downright invalid without re-training, or without very careful oversight during training by a diverse group of people.
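Training and its black-box result can be illustrated with a deliberately tiny learner: we feed it labeled examples and nudge its weights to reduce error, and the finished “model” is just a bag of numbers that can answer questions but can’t explain itself. Everything here is a minimal stand-in for a real training pipeline:

```python
import random

def train(examples, epochs=200, lr=0.1, seed=0):
    """Fit a tiny linear model by repeated nudging. The returned
    weights ARE the "model": numbers, not an explanation."""
    rng = random.Random(seed)
    w, b = rng.random(), rng.random()
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x  # nudge weights toward lower error
            b -= lr * err
    return w, b

# Training data secretly follows y = 2x + 1; the learner never sees that rule.
data = [(x, 2 * x + 1) for x in range(5)]
w, b = train(data)
prediction = w * 10 + b  # the model answers, but can't say why
```

Even at this toy scale, the only honest answer to “why did it predict that?” is “because of the weights,” which is exactly the explainability gap the paragraph above describes.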

An over-simplified example of the input, processing, output dynamic.

Working with AI engineers from the very beginning is crucial to arriving at a solid understanding of what the system can do in tandem with the desired user experience. Failing to do this will result in disillusionment within the project, and potentially on the part of the product’s many users if launched, which will result in a bad user experience (or one that falls short of set expectations).

As designers of systems, services, products, and experiences, we must deeply understand the limitations (and strengths) of different technical approaches and the humans involved. For example, when designing a natural language interface (chatbots, Alexa, etc.), one important pattern is to help the user know the limits of their requests so they aren’t disappointed and annoyed. While many patterns and approaches for collaborative AI may be standard like this, many will be very specialized, and not easy to reuse from project to project.
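As a sketch of that “help the user know the limits” pattern, a bot can check a request against its known capabilities and, when it can’t help, state what it can do instead. The intents and phrasing below are invented for illustration:

```python
SUPPORTED_INTENTS = {
    "weather": "I can tell you today's forecast.",
    "timer": "I can set timers up to 24 hours.",
}

def respond(intent):
    """Answer if we can; otherwise state our capabilities,
    so the user learns the system's limits instead of guessing."""
    if intent in SUPPORTED_INTENTS:
        return SUPPORTED_INTENTS[intent]
    abilities = " ".join(SUPPORTED_INTENTS.values())
    return f"Sorry, I can't help with that yet. {abilities}"

print(respond("weather"))
print(respond("book_flight"))
```

The design choice is the failure message: instead of a dead-end “I didn’t understand,” the bot teaches its own boundaries at the exact moment the user bumps into them.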

Now that we’ve covered all of the introductions that hopefully made some sense, let’s move onto laying the groundwork for what designing a collaborative AI will actually look like.

A new type of interaction is needed

Well, …a newer type of interaction than we’ve been designing for as of late. Plenty of past research has explored what this kind of interaction may look like, but researchers just weren’t able to put it into practice the way we can now. Two examples of such research are Carnegie Mellon’s 1994 article An Overview of Human-Computer Collaboration and work on Mixed-Initiative Systems.

While we will do our best to reference these past resources, we are going to largely focus on actively applying these interaction principles to a modern product.

Moving interaction into a higher level of abstraction

To help illustrate what we mean by this, let’s reference First Person Shooters (FPS) vs. Real Time Strategy (RTS) games. In FPS games, you are in control of your own avatar, and aside from some extensions to yourself (calling in an airstri…care package of muffins) you’re executing the actions of one character in the battle.

Starcraft 2 (RTS)

Conversely, in RTS games you are generally commanding larger systems. You tell your base, or a group of units, what you’d like it to do, and they carry it out while you spend your cognitive resources on other, similar orders.

The interaction you have with the battle in an RTS is one level of abstraction up the interaction chain compared to an FPS. You aren’t carrying out each single command; you’re telling the game what you want to happen.

Similarly, this is true when collaborating with an AI. You are not responsible for knowing, finding, planning, or executing each step required to complete your objective. However, in an RTS, the units you’re commanding don’t come back after an order with detailed questions or suggestions for a better order. They also can’t handle more complex orders: the game designers limited the interface to the bounds of what the system was capable of and what made sense from a gameplay perspective.

Actually designing for a collaborative AI

With many of the higher-level concepts and guard rails in place, we’ll wrap up this introductory post with a list of subjects and patterns that we plan to dive deeper into over the next few posts. We’ll also explore how they may change in various scenarios.

  • Trust. The importance of building and maintaining human trust in an AI, to allow a reliably performing system enough agency to help people without adding further complication.
  • Addressing needs. Considerations in applying AI to multiple types of users (known as personas) who have different roles.
  • User journeys & training. Many of the techniques and approaches used in collaborative AI can also be employed to tailor your user journeys (a planned journey for a given persona’s growth or changes over time), both to make more effective user experiences at scale and even to teach beginner or intermediate personas skills that the AI can’t handle alone.
  • Unknown outcomes. Since AI may propose unexpected solutions, how do designers account for unknown outcomes?
  • Validation. You need the AI because the problem is complex, but how do you know the answers it’s producing are good?
  • Technical discussion. What are modern AI’s limitations? How do you work with technical partners to craft a realistic yet amazing product, including hurdles like collecting the data necessary to train your AI properly?

Update: Harvard Business Review has since published a great article mirroring this perspective: https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces.

Regarding the series that this article was intended to be a part of, defining and refining the product created some sort of temporal distortion and now it’s 2019 😶. However, today is a good day to write, wouldn’t you say?

To see a small example of what this kind of collaborative AI interaction looks like, head over to our Factor product page, where we (very loosely) explain the way a data scientist interacts with our collaborative AI system.

Join in on the conversation. Are you working on similar experiences? What did we miss? Disagree with something in particular? 😱 Let us know!

@newcortexai

https://newcortex.ai

If you enjoyed this post, click on the applause button to help other people find it.

New Cortex

Collaborative AI products to manage modern complexity. We write about AI research, applied AI, CS and design. — newcortex.ai