Agentive Technology, or How to Design AI that Works for People, Part 1

Andrea Ong · Published in Rat's Nest · Oct 11, 2017

Design+AI is a monthly meetup to explore how we design in a world augmented by AI. On the Sept 21 edition of the meetup, Chris Noessel, author of Designing Agentive Technology, joined us to shed some light on the area of designing agentive technology; specifically:

  • What is agentive tech?
  • What is narrow AI versus general AI?
  • How do designers need to modify their practice to design agentive tech?
  • Does agentive tech pose unique operational burdens?

Before the conversation got going, the group indulged me by playing a game of “what is and isn’t agentive tech”. Players considered a pair of options and identified which was agentive.

Unsurprisingly, only one person correctly identified the agentive tech. Hint: in most of the pairs, both options were agentive to some degree, while some pairs contained only one example of agentive tech, and one contained a super artificial intelligence. So be warned…

What the heck is agentive technology?

I asked Chris to level-set on what he means when he says “agentive technology”.

Agentive tech was an idea that arose from two forces coming together. On one hand, Chris had been challenged by one of his designers on his vision of the future from a designer’s perspective. On the other, he had started to see a pattern in his own work over the past two decades.

In short, it’s a new mode of interaction enabled by recent advances in narrow AI, in which the technology does something on behalf of the user, persistently and in a hyperpersonalized way. To understand more, we have to go back in time a bit.

What is the largest possible context for the world of interaction design?

Chris posits that the history of interaction design starts in WWII with human factors engineering. Highly trained and competent pilots were crashing planes. We learned through research that the machines were just too complicated: they represented a level of cognitive load that competed with other tasks and objectives and overwhelmed the pilots. The legacy of that research is human factors engineering.

If interaction design started with HFE, where’s the other end? When will our jobs be moot? Chris hypothesizes that “general” AI is the end of our jobs as interaction designers. Once we have something that can do what we do but is smarter and faster and can collaborate with the hive mind around the world, that’s pretty much the end for interaction design as a specialist job (if not almost all jobs). The question is: where are we now as we head toward general AI? How close or far are we from it?

A new mode of interaction: outsourcing the work of achieving outcomes

Over the course of two decades, Chris has worked on various solutions that all involved outsourcing work to software in order to achieve outcomes. Based on his work as well as his personal experiences of different consumer devices, Chris started to see an emergent pattern. So, what do underwater science robot towers, automatic cat feeders, robo-investors, and Roombas have in common? Let’s look at each of these examples.

At Microsoft, he worked with the University of Washington to design underwater robots with sensors for seismic measurements. The robots would be pre-programmed to watch for certain things and then travel to collect different sorts of data in areas where humans couldn’t go with measuring tools in hand. These robots weren’t directly controlled by scientists, but they did work for them.

When he travels for work, Chris uses an automatic cat feeder to keep his cat from going hungry. That said, early-edition cat feeders had a limitation: when they worked, they worked, and when they didn’t, they didn’t, but you wouldn’t know either way, so either your cat would go hungry or you’d be worried about your cat going hungry. A good feeder should put food in the cat’s belly and assurances of the same into the user’s attention.

His work on robo-investor software taught him that despite knowing that the algorithm had data and reaction times that were better than human, people still wanted to see if they could beat the algorithm. People wanted room for play, for serendipity.

Roombas promised set-it-and-forget-it convenience for a household chore that many of us would rather not do. Who doesn’t love the feeling of coming home to clean floors without having had to do the work? The Roomba is not a fancier way for you to do the vacuuming yourself.

Granting agency to software

All these things started to feel like they were “of a piece”. There’s a pattern that felt like “I’m not doing the work. It’s doing the work.” Chris observed that in the past, he would design things to help people do work, build tools for people to do work, but this new type of thing was different: he was telling things how to do the work, and they would handle it from that point forward.

For example, even the humble automatic feeder wasn’t a tool to better feed his cat: Chris told it when to dispense food, how much to dispense, and to continue doing so until further notice. The Roomba isn’t a tool for us to push a vacuum cleaner around; we tell it when to clean, and it does. A robo-investor isn’t a tool for data and information; we tell it our financial goals, and from that point forward, it does its best to achieve them. We can still look in and provide feedback, change up a few parameters, but it continues operating on its own. We are granting these objects agency.
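
To make that pattern concrete, here is a minimal sketch of the “state your directives once, then let it run” relationship, using the cat feeder as the example. It is purely illustrative: the names (CatFeederAgent, Directives, dispense, notify_owner) are hypothetical and don’t correspond to any real product’s API.

```python
# Illustrative sketch only: the user grants agency by handing over
# directives once; the agent then acts on them persistently and
# reports back. All names here are hypothetical.

import time
from dataclasses import dataclass

@dataclass
class Directives:
    feeding_hours: list[int]    # e.g. [7, 18] for 7am and 6pm
    portion_grams: int = 40

class CatFeederAgent:
    def __init__(self, directives: Directives):
        self.directives = directives          # "until further notice"

    def dispense(self, grams: int) -> bool:
        return True                           # hardware call would go here

    def notify_owner(self, message: str) -> None:
        print(message)                        # stand-in for a push notification

    def run_forever(self) -> None:
        while True:                           # persistent and always on
            if time.localtime().tm_hour in self.directives.feeding_hours:
                ok = self.dispense(self.directives.portion_grams)
                # an agent owes the user assurance, not just action
                self.notify_owner("Fed the cat." if ok else "Feeder jammed!")
            time.sleep(60 * 60)               # check again in an hour

# The owner states goals and parameters once; the agent handles the rest.
CatFeederAgent(Directives(feeding_hours=[7, 18])).run_forever()
```

The interesting part isn’t the loop itself; it’s that the user’s involvement ends at setting the directives, while the agent keeps acting and reporting back until told otherwise.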

Disambiguating agentive technology

Chris saw an emergent pattern: the things he was designing and using were not automatic, because automatic things don’t need his attention at all. Think of a pacemaker as a good example of automatic tech. If the human needs to get involved, the automation has failed, and that’s not a design problem but an engineering problem.

The types of things we’re talking about are not helping us do things ourselves. Smart assistants help us do things. Agentive tech, in contrast, does the things for us. For example, you can tell Google Keep to remind you of a task when you’re at a location and/or at a certain time, but that’s all it does: it reminds you, but it doesn’t do the task for you. It’s an agentive alert because it watches for your location in space and time, but it’s not like those other agents because you, the human, still have to act on the reminder. This notion of “help me do things” versus “do things for me” (or assistants versus agents) is a way for us to explore and understand a new class of interactions.
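
One way to see the distinction is side by side. Here’s a hypothetical sketch (none of these names correspond to Google Keep or any real product) of the same watched condition handled by an assistant versus an agent:

```python
# Hypothetical contrast between "help me do it" and "do it for me".
# The trigger is identical: software watching a condition on the user's
# behalf. The difference is what happens when the condition is met.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    description: str
    execute: Callable[[], None]   # the actual work, e.g. reorder cat food

def notify_user(message: str) -> None:
    print(message)                # stand-in for a push notification

def assistant_on_trigger(task: Task) -> None:
    # Assistant: reminds you, then hands the work back to you.
    notify_user(f"Reminder: {task.description}")

def agent_on_trigger(task: Task) -> None:
    # Agent: does the work itself, then reports that it's done.
    task.execute()
    notify_user(f"Done: {task.description}")
```

In both cases the software watches on your behalf; only the agent closes the loop by acting.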

This class or pattern of interactions is marked by software that takes our directives and then implements them on our behalf. As he cast about for a way to describe this class of technology, Chris looked for the adjectival form of “agent”, which turned out to be “agentive”. Agent-ive… saying it like that helps people realize it has to do with agents. In fact, in Japan, agentive technology is translated as “agent-based AI”.

Within the context of the general trajectory toward general artificial intelligence that our industry is on, this pattern of agentive interactions relies on a “weak” kind of AI called narrow AI. It’s narrow because it’s smart in narrowly defined domains. It can’t generalize its knowledge to new domains.

Agentive technology is persistent, always-on, domain-specific narrow AI that acts on its user’s behalf in a hyper-personal way.

AI versus Narrow AI versus General AI

Invariably, the question of “what is AI” reared its head. From Chris’ vantage point, asking the question “what is AI?” is both interesting and not interesting. It’s not interesting because the term AI is too ambiguous to be useful or pragmatic. We’ve been talking about it since the 1950s and still can’t quite agree on what we want it to mean. For starters, the term “artificial” somehow implies fake, or at least made by humans. Maybe? As for the term “intelligence”, we don’t have a grasp on what that is, even after 100 years of studying it. And now we’ve combined these two notions into something we call “artificial intelligence”, which raises the question: what does that even mean?

Attaching an adjective to the term “AI” perhaps gets us a bit closer to understanding what we’ve been doing, the implications and ramifications, and, more important, our responsibilities. Thinking about it in terms of “general AI” versus “narrow AI” begins to unpack some of the types of “artificial intelligence” work that we’ve been doing. We can posit that “general AI” refers to a human-like intelligence; specifically, an intelligence that can generalize from domain to domain. Not only can it learn, but it can learn across domains. Roomba is not that: it can’t generalize what it’s doing to other domains. It can only vacuum.

When we use the term “general AI”, we tend to mean human-like. Narrow AI is the stuff that comes before it; it’s an asymptotic approach to general AI or human-like intelligence. Narrow AI is the suite of technologies that are improving and getting human-like in their intelligence in a specific narrow domain.

As a designer, I take the maker’s approach: learn about the thing by making it, playing with it, testing its limits. To channel Richard Feynman, “What I cannot create, I do not understand.” While it’s fun in a hurts-my-brain way to participate in armchair philosophy about a monolithic AI, its breadth and non-specificity leave me struggling to answer the “so what does it mean for me” and “now what do I do” questions. It also leaves me no closer to clarifying my own thoughts on the ethical implications of technology. Framing the mission in terms of narrow AI, however, does help me get down to where the rubber meets the road and to begin to understand what it really means to use “artificial intelligence” to solve human problems.

In Part 2, we’ll explore how our design practice needs to adapt when designing agentive technology.

References

http://rosenfeldmedia.com/books/designing-agentive-technology/
https://medium.com/@christophernoessel/ani-design-skills-f0af22360570
https://articles.uie.com/new-technologies-to-consider-for-interaction/

Disclaimer: The ideas and opinions expressed in this post are my synthesis of Chris Noessel’s session at a Design and AI meetup hosted by Normative, where I work. My views are not necessarily those held by Normative nor by Chris Noessel, and any technical errors or omissions are mine alone.

