Agentive technology with Chris Noessel

A transcript of Episode 121 of UX Podcast. James Royal-Lawson and Per Axbom talk to Chris Noessel about the concept of agentive technology; computers doing things on our behalf.

Chris was interviewed on UX Podcast in February 2016. He is currently working on his new book The Dawn of Agentive Technology and will be speaking at UXLx this May.



Per: Hello dearest listener. This is UX Podcast hosted by me Per Axbom.

James: And me James Royal-Lawson.

Per: We’re balancing business, technology and users every other Friday from Stockholm, Sweden.

James: You sang that a little bit this week.

Per: Yeah! I’m in my good mood.

James: All right. Yeah, because all the other 120 episodes, we’ve been in bad moods. So …

Per: Wow. Everything is black and white, isn’t it?

James: Yeah, nothing on a scale of one to ten.

Per: So some of our listeners may be aware. We’ve partnered again with the UXLx Conference taking place in Lisbon, Portugal, May 24th to 27th. It’s now a four-day conference with some top-notch speakers and workshops and last time we interviewed Melissa Perri about UX and product management and today, it’s time to interview Chris Noessel.

James: Yeah. And when — I think when this episode is going to be published, we will be about 80 days away from UXLx.

Per: Yeah, so time to get that ticket. So who’s Chris Noessel? We’ve interviewed him before.

James: We’ve interviewed him on two occasions. The first time was way back in October 2012 which was episode 25. That was in connection with — when his book Make It So had just been released.

Per: He co-authored that with Nathan Shedroff.

James: That’s right. Yeah.

Per: Yeah.

James: And that was a wonderful little chat, geeky sci-fi chat.

Per: Very nerdy chat that is.

James: Yeah.

Per: I loved it.

James: I loved it too. The second time we talked to him was about a year and a half ago in …

Per: Equally nerdy.

James: Yeah. It was November 2014, episode 86, Redesigning Star Wars. Now this was when Chris had come to Stockholm to do a test run — the first ever run — of his workshop called Redesigning Star Wars, where we made use of science fiction as a tool to help us solve design problems in the real world. This was like a spinoff — well, more of a thread really, isn’t it? It’s connected with the research and the work on Make It So that led to this idea of doing a workshop. I guess the journey is continuing for Chris.

Per: Yeah, he has moved on. He’s talking about stuff that I haven’t really heard about before. Agentive technology, I’m not even sure how you pronounce it.

James: I think it’s a phrase he has coined himself. But it’s — from what I’ve gathered, it has to do with artificial intelligence.

Per: Yeah.

James: Or at least the weak AI or the narrow AI.

Per: Yeah.

James: Which most things we come into — everything we come into contact with, I believe it is. But I don’t really know. I’ve just — I just read a bit before …

Per: If Chris Noessel is talking about it, it’s really important and it’s something to do with the future because — I want to mention, if you really want to understand how much of a backbone in the UX community Chris Noessel is, he was actually also more recently involved in co-authoring the fourth edition of the classic UX bible About Face, which is the go-to book for reading about stuff like personas. Let’s hear what he has to say about agentive technology.


Per: I don’t know where to start off actually. I haven’t — any ideas, James? I think we’ve started talking about in the intro with agentive technology and where you’ve moved along. Well, you’ve been so many years in the business and you’ve moved into so many different wide areas of UX. But you’re sort of mostly famous from when we’ve talked to you about — with the sci-fi interfaces and the Make It So book and the Redesigning Star Wars workshop that we enjoyed so much. Now you’re talking about something called “agentive technology”. Am I even saying that right?

Chris: Well, folks are saying it both ways and it is sort of this ancient English adjective that I’m trying to rescue for good reasons. I say it agentive because the core word is the “agent” or the “agency” that you’re sort of granting.

Per: OK.

Chris: But to your point actually, it does have a connection to sci-fi and some of the work that I’ve done in sci-fi. I first began to think about this because of a scene from Firefly.

Per: OK.

Chris: So you guys are very familiar with the show and in the pilot episode — and actually not even just the pilot episode. The very first scene of the pilot episode, Mal is defending Serenity Valley from incoming ships and he hops up on to an anti-aircraft weapon.

He grabs the controls and he has this heads-up display in front of him and on that heads-up display, there are two different crosshairs, two different reticules. One reticule, it’s easy to tell just by looking at it. It’s telling you where the weapon will fire. Then the second one is obviously computer-generated and is telling Mal where the bad guy is and as the audience, we understand what his job is. It is to get reticule one onto reticule two and then pull the trigger. That certainly works from an audience perspective.

But from a real world perspective, you have to ask. If this weapon knows where the bad guy is, why is it waiting on Mal to move it, right? If it knows exactly where he is, it should be doing the moving and that thought had stuck with me since I saw that pilot and I was reviewing Firefly.

Since then, I’ve been taking a look at that notion of when computers do things for us on our behalf. When do we grant them agency to do the things that we don’t want to do or aren’t really good at doing or that we’ve — yeah, just never done before?

Per: So it’s different from artificial intelligence.

Chris: It’s actually really interesting because it’s — there are three sort of broad categories of artificial intelligence that are talked about in the literature. The first one that most people think about when they hear AI is, oh, a machine intelligence that’s just like a person’s intelligence. That category is called artificial general intelligence.

James: So this would be like the HAL kind of AI that we know from sci-fi.

Chris: Yes. HAL is very much the best model for artificial general intelligence with the exception that he’s evil.

James: Well, yeah.

Chris: But yeah, that’s like — HAL is a little slow, slower-speaking but otherwise is pretty much just another member of the crew with access to cameras everywhere and the microphones everywhere and controls certain parts of the ship.

When we give the task of creating a new artificial intelligence to an artificial general intelligence, the notion is that they will build something much bigger, much better than themselves and over time, that AI will build a new AI and eventually will get to something that is so far out there that its intelligence will be to our intelligence what our intelligence is to an ant, where it’s thinking things that we don’t even have the biology to think.

James: Now I think of the classic scenes from The Hitchhiker’s Guide to the Galaxy, Douglas Adams. What Deep Thought says — talking about the machine that comes after Deep Thought, because he wasn’t worthy enough to talk about it. He’s the one that’s going to design it, but he won’t actually be it. It’s so much more clever.

Chris: Yeah, exactly, exactly. That one — and you can kind of think of the off-screen version of Samantha from the movie Her — belongs to a category that we call artificial super intelligence. In fact, sci-fi refers to the onset of artificial super intelligence as the singularity, because we really don’t know what life looks like after that point.

We have really good concepts of what life is like with humans in the world, so the notion of an artificial intelligence that is general just requires us to imagine that embodying a car or on our phone or on a television. But super intelligence is by definition something we can’t comprehend.

So that’s the second category and probably a genuinely scary category of artificial intelligence. But there’s a third category but we have to actually come back to our own time. That’s called artificial narrow intelligence. A narrow intelligence is distinguished as something that’s very smart, something that can learn and infer but that can’t generalise its knowledge.

If you remember the movie War Games, the big computer there is called “WOPR” and the big plot spoiler is that WOPR learns to infer from the game of tic-tac-toe — a game that no one can win — and generalises that knowledge to its primary task of global thermonuclear war, because its job is to manage the weapons and the responses.

James: Yeah. It’s kind of like — it’s when you do like a — oh, A plus B equals C. I mean you’re inferring a third thing from two other component parts, directly equal to the thing that you’re imagining.

Chris: Exactly. So artificial narrow intelligence can’t do that and that’s why it’s not called general because it can’t generalise its information. So artificial narrow intelligence is in the world today and it’s just really — it’s the evolution of really smart software and I break that down into sort of two parts. There’s an assistive narrow intelligence, which are things that help you in real time with a task. TurboTax is a pretty good real world example of that where you have a task to fill out your taxes and you have an assistant who will help you answer your questions, guide you to the next step, deal with some of the tricky problems that you’ve got.

Then there is the other category which is doing things on your behalf, while your attention is elsewhere. A couple of sort of First World examples usually help people get it. The first is the Roomba Vacuum Cleaner. Every vacuum that happened before Roomba was focused on the typical problems of interaction design. Let’s make the handle ergonomic. Where do we put the switch to turn the thing on? Let’s put a light in the front so that you can see the thing that you’re sweeping. How do I lock and unlock the handle?

All those are fine because it’s meant to engage a human who is doing that task. The Roomba is an agentive solution. It tries to say, “Well, how can we get this goal met for you without you actually having to do it, while you’re paying attention to other things?”

So with the Roomba, you set it up. Set it on its little charging cradle and then it goes about cleaning the floors for you at whatever specified time you’ve asked it to. It will complain if it gets stuck in a corner or if it doesn’t know where — what to do or if it can’t find the charging stations. But I would say about 85 percent of the time, my Roomba can find its charging station and work perfectly well.

Then my only maintenance is to come in and empty that dust bin. That’s different. You can think of the Roomba and the Dyson vacuum cleaner, for instance, as being similar things, but they’re clearly not the same. In a similar sense, there’s a camera — and it’s actually out of Sweden there with you guys — called the Narrative Clip. Are you familiar with it?

Per: Yeah. I have one.

Chris: I love it. In a similar sense, this is a camera unlike any camera that has come before it, right? From the early days of cameras, all the way up until the fancy DSLRs that we have today, right? You have all these controls to either try and make it super easy for a human to take a photo or it has a lot of controls to make it powerful for a human who wants a lot of control. That human is still necessarily part of the equation.

Contrast that with the Narrative. With the Narrative, you just unplug it and, as long as it sees light, it takes a picture every 30 seconds of whatever is in front of its little lens. Then at the end of the day, two very powerful algorithms go to work. The camera sends all of its photos up to the cloud, breaks them into the scenes across the day, picks the most representative or best photo from each of those scenes and sends them to your phone to say, “Hey, what do you want to do with these?”

That’s a camera unlike any camera before it, whereas earlier cameras, for all the cool tech that can be applied to them, are still assistive. They still have you with your eye in the viewfinder, and the Narrative just says, “Well, how do I get them great pictures even if they’re not taking pictures?” They’ve come up with this beautiful, beautiful device.
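The pipeline Chris describes — segment the day’s photos into scenes, then pick the best shot per scene — can be sketched roughly as below. The 30-minute scene gap and the single quality score are invented stand-ins for this illustration, not Narrative’s actual algorithm:

```python
# Hypothetical sketch of a Narrative-style photo pipeline. Photos are
# (timestamp_in_seconds, quality_score) pairs; a new scene starts whenever
# the gap between shots exceeds a threshold, and the best-scoring photo
# of each scene is kept.

SCENE_GAP = 30 * 60  # assumed: 30 minutes without a shot means a new scene

def split_into_scenes(photos, gap=SCENE_GAP):
    """Group time-ordered (timestamp, score) photos into scenes."""
    scenes = []
    for photo in sorted(photos):
        if not scenes or photo[0] - scenes[-1][-1][0] > gap:
            scenes.append([])  # start a new scene after a long gap
        scenes[-1].append(photo)
    return scenes

def best_per_scene(photos):
    """Pick the highest-scoring photo from each scene of the day."""
    return [max(scene, key=lambda p: p[1]) for scene in split_into_scenes(photos)]

# A day of shots every 30 seconds, in two clusters hours apart.
day = [(0, 0.2), (30, 0.9), (60, 0.4), (4 * 3600, 0.5), (4 * 3600 + 30, 0.7)]
print(best_per_scene(day))  # one representative photo per scene
```

The interesting design choice is that the human only appears at the very end of this flow, exactly as Chris says: the agent does the taking and the choosing, and you only do the keeping.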

So the long answer to your question was that AI is a big, broad spectrum but the ones that are in the world today can be thought of — and the things that most people think of are assistive and I’m turning my camera towards that other category of thing called agentive, partially because I think it’s new and that’s exciting. But partially because I think there’s — time is a limited resource for humans and if you have some technology that helps you take advantage of your time better, then it will allow us to do a lot more with our lives.

James: Where’s the line then between say algorithms and AI or agentive technology? Algorithm is a lot of what we would maybe consider to be — well, artificial intelligence. Probably at the end there, just a simple algorithm. There’s no self-learning. There’s no kind of self-improvement I guess. Not directly.

Chris: When I think of algorithms, I think of basic functions inside of a program. It’s a building block, and to speak of agentive technologies is to speak of not just a brick but, “OK, what’s the building that that brick has made? How does it function? How do we use that building?” An algorithm — certainly when we think about them, we think of them being super smart, but I think a basic “hello world” algorithm probably counts under that definition.

James: Yeah. I think what I was thinking of here now is say Google search, which we talk about as being the search algorithm. We have that box on Google. We write something in and an answer comes out or we speak it. You know, Siri-like. But Google are trying to be kind of smarter and smarter by giving you answers before you go off to another webpage. The knowledge graph and those kind of instant answers you see in the results.

So I think that’s what started to make me think about whether that’s an algorithm still or whether it’s going to cross into something else.

Chris: That’s a great example in fact, because I do think that Google search is incredibly powerful for being such a simple thing. As you mentioned, a box. You type a phrase into it and click Enter. It gets it right so often, and because we use language ourselves and understand how complicated that is for a computer to understand, we know how well Google is doing its job.

But if I had to put on — in the narrow artificial intelligence spectrum, I would say it’s certainly assistive. You have a question. You bring that question to Google. Google answers that question really well.

Google does have another product called Google Now which I would categorise as much more agentive. Google Now will do things like watch your calendar, watch your emails and let you know things about, hey, if you want to make it to that concert on time that I see in your calendar, I’m also watching the traffic and can see that things are starting to back up and you need to leave in the next five minutes.

So it will buzz you and let you know that. That to me is much more of an agent who’s working on your behalf. I’ve had to try and go out and find some — or craft some sort of definition to help distinguish things and one of the things that makes sense to me to distinguish them is that an agent necessarily watches a data stream at the very minimum, like a clock, and then watches for triggers.

When this happens, then that happens. Then I will do my thing for you and then there are a whole lot of other subsequent things like, oh, I need you to help me resolve this problem because I’m not 100 percent confident in it or I’ve genuinely hit a problem that I can no longer resolve.

I think more powerful agents watch more data streams and have more sort of complicated behaviours and triggers. But Google Now is a really great example because it watches a lot of things at once and I think a lot of people are experiencing it quite directly.
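Chris’s minimal definition — an agent watches at least one data stream, fires on triggers, and escalates to the human when it can’t resolve something itself — can be sketched as a simple loop. Everything here (the stream, the trigger, the traffic message) is an illustrative placeholder, not a real API:

```python
# Minimal agentive loop: watch a data stream, act on triggers, and
# escalate to the human when the agent can't resolve something itself.
# All names are illustrative placeholders.

def run_agent(stream, triggers, notify_human):
    """For each reading, run every trigger whose condition matches.
    A handler returns None on success, or a question for the human."""
    handled = []
    for reading in stream:
        for condition, handler in triggers:
            if condition(reading):
                problem = handler(reading)
                if problem is None:
                    handled.append(reading)   # dealt with on our behalf
                else:
                    notify_human(problem)     # escalate, like a good butler
    return handled

# A traffic-watching agent in the spirit of the Google Now example.
questions = []
handled = run_agent(
    stream=[{"traffic": 0.2}, {"traffic": 0.9}],
    triggers=[(lambda r: r["traffic"] > 0.8,
               lambda r: "Traffic is backing up - leave in 5 minutes?")],
    notify_human=questions.append,
)
print(questions)
```

A more powerful agent, in Chris’s terms, is just this loop with more streams, more triggers, and more complicated behaviours behind each handler.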

Per: Right.

James: I guess here then — Google Now is kind of a pre-emptive version of search. With search, you’re the one taking the action. You go in there, search and get an answer, whereas with Google Now, as you said, it’s a combination of several data streams, which is then used to do something or present something to us before we realise we need it.

Chris: Yeah, which is new, right? When I think about what the right model for an agentive technology is, I think of a butler or a valet. But when I look for the analogue to an agent prior to the advent of this kind of technology, there are only really two places for it. The first is humans — humans who are able to think about someone else and say, “Oh, I think they’re going to be interested in this and I should contact them and let them know.”

The first place my brain goes — maybe because of Downton Abbey — is the butler or the valet, right? Who is working behind the scenes in order to make things happen the way they’re supposed to happen and approach the house — what do they call it? I don’t want to call them master. That sounds so horrible.

James: Yeah.

Per: You’re British, James. You should know this.

James: Yeah. But it would be the master of the house or the lady of the house. I mean, if we’re bringing ourselves back to the Downton Abbey stuff, then it would be the lady of the house, the master of the house.

Chris: It’s awkward and weird to use that word but we will use it. You know, come to the master of the house with problems or to ask if there’s anything else that’s needed. The second place — and I’ve only just thought of this one, so I’m not as eloquent with it — is reference materials. Astronomers, for instance, will have worked out the routines for when you can expect planets to rise and set on the horizon, or certain celestial events.

So that’s not really coming to find you, but it’s certainly setting things out in the future so that you don’t have to work them out yourself. Reference materials are kind of that thing. But even after I say it, I want to come down off of it, because that’s not the same thing, right? A reference material is quite assistive.

It’s only useful while you’re paying attention to it. So then that takes us back to sort of the butler as the great model for what an agent was in prior times.

There might be one other, and this might come from my Texas background. But I also think of horses that have been broken to ride. I fondly recall stories where a rider could fall asleep on a horse and the horse would still take them home, for instance, or in movies you might have a really drunk cowboy lazily climb onto the horse and the horse would head home. That’s a nice piece of agentive service, right?

The horse knows where you need to go and that’s OK. I got it. You can sleep the rest of your drunk off.

Per: So it’s helping you when you’re not actually doing — having to do something yourself.

Chris: Yes, exactly and that’s the power of it.

James: I wonder if — one of my favourite things at the moment is Spotify Discover Weekly, which is the playlist Spotify generates every Monday morning of new songs for you to listen to. I think this probably does fit in with agentive technology, because what Spotify is doing here is looking at not only my play data, my song data, but also their entire data for all people using Spotify, and looking for common patterns and saying, OK, well, this group of people listen to the same bands you do, but they also listen to this other band that you never listen to. How about you try that? So it’s serving me with songs that it’s guessing I will like, and I don’t have to go looking for new songs myself. It’s really, really good.
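What James describes is collaborative filtering in miniature: find listeners whose plays overlap with yours, then surface what they play that you don’t. A toy version — invented data and scoring, not Spotify’s actual system — might look like this:

```python
# Toy collaborative filtering in the spirit of Discover Weekly: score each
# band you haven't heard by how much taste you share with its listeners.
# The data and the scoring rule are illustrative only.

def recommend(me, plays):
    """Rank unheard bands by total listener overlap with `me`."""
    mine = plays[me]
    scores = {}
    for user, bands in plays.items():
        if user == me:
            continue
        overlap = len(mine & bands)
        if overlap == 0:
            continue  # no shared taste - this listener tells us nothing
        for band in bands - mine:
            scores[band] = scores.get(band, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

plays = {
    "me":    {"Kraftwerk", "Boards of Canada"},
    "user1": {"Kraftwerk", "Boards of Canada", "Aphex Twin"},
    "user2": {"Kraftwerk", "Aphex Twin"},
    "user3": {"Slayer"},
}
print(recommend("me", plays))  # bands that similar listeners play but "me" doesn't
```

The agentive part is not the ranking itself but the delivery: this runs while your attention is elsewhere and a finished playlist simply appears on Monday morning.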

Chris: Yeah, and really powerful. I fully agree that that is a great piece of agentive tech and it reminds me of another of my favourites. Have you heard of Chef Watson?

James: No.

Chris: So Watson is IBM’s big AI. They’ve been working on it for decades and right after it won Jeopardy, they were asking themselves, “Well, what do we do next with this tech?” So they spun it off in a number of different directions and the one that I know of that has gone public is called Chef Watson.

It consists of a couple of parts. The first part is Watson just goes out on the internet, finds every recipe that it can find and then takes note of it and does a bit of semantic analysis around it.

What they’re able to do then is what a lot of other different recipe databases have been able to do, which is I say I have a blood orange. What can I make with this?

But then it does a couple of things that none others before can do. It will certainly give you recipes but it will let you optimise them for Eastern or Western cooking, specifically which means that in the West, we tend to maximise the presence of particular chemical compounds and in East, they tend to balance those chemical compounds.

So you can say, sure, find me a recipe for the blood orange. But let’s try an Eastern cooking algorithm in order to find those recipes. It kind of sounds simple because it might be predefined within the recipe but it’s not true.

It can actually tell you that oh, some things you think are Western are actually more Eastern in style and some things that are more Eastern style are actually more Western. But then the really cool thing that it does is it drifts those recipes. That’s my verb, not theirs. But by drifting, they say, OK, well the recipe that I — or they, I’m sorry. It’s an — I mean it’s an AI. Let’s talk about it as it.

It says, well, the recipe that I’m drawing from says that you should use hazelnuts. Oh, that’s a category of nut with these particular chemical compounds. Let’s go find another nut with those same chemical compounds. Let’s try that same recipe with pine nuts or walnuts or garbanzo beans. It can sort of — you can turn that dial from — randomise it just a little bit or drift it a whole lot to give me something super crazy.

The cherry on top of it all is that Watson looks at the new recipe that it has created and guarantees that it has never delivered that recipe before.
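Chris’s “drifting” — substitute an ingredient for one that shares flavour compounds, with a dial for how far to stray, plus the never-delivered-before guarantee — can be sketched like this. The compound table and the overlap rule are invented for the example; they are not Watson’s data:

```python
# Illustrative recipe "drifting": swap an ingredient for another that
# shares flavour compounds, and never return a recipe delivered before.
# The compound table below is made up for this sketch.

COMPOUNDS = {
    "hazelnut":      {"filbertone", "pyrazine"},
    "walnut":        {"pyrazine", "juglone"},
    "pine nut":      {"pyrazine", "pinolenic acid"},
    "garbanzo bean": {"pyrazine"},
}

def drift(ingredient, amount, seen, recipe):
    """Substitute `ingredient`; a higher `amount` demands less compound
    overlap, so the result strays further from the original."""
    shared = lambda other: len(COMPOUNDS[ingredient] & COMPOUNDS[other])
    candidates = [i for i in COMPOUNDS
                  if i != ingredient and shared(i) >= 2 - amount]
    for candidate in sorted(candidates, key=shared, reverse=True):
        drifted = tuple(candidate if x == ingredient else x for x in recipe)
        if drifted not in seen:   # the novelty guarantee
            seen.add(drifted)
            return drifted
    return None  # nothing new enough to offer

delivered = set()
print(drift("hazelnut", 1, delivered, ("hazelnut", "blood orange")))
print(drift("hazelnut", 1, delivered, ("hazelnut", "blood orange")))  # a different one
```

Asking twice with the same inputs yields two different recipes, because the `seen` set plays the role of Watson’s guarantee that it has never delivered that recipe before.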

Per: Wow.

Chris: And then you can sort of — there’s a whole community about people who are trying these recipes. I’ve tried a couple and then when they find really cool new solutions, new recipes, they then share them back and they do a social approval because of course Watson can’t taste his randomised …

James: Yeah. He has got to look for a data stream to confirm its hypothesis that it thinks that this one is going to taste good.

Per: Yeah.

Chris: Exactly. But in the whole, what it’s doing is it’s helping people find new food. That to me is just amazing, similarly to the way that Spotify is helping you find new music.

Per: We’re also becoming tools for the AI, because we’re like the tasters of the past, tasting stuff for the AI so that it can learn. So we’re testing if it’s poisonous or not.

Chris: Exactly.

Per: Wow.

James: Yeah, there’s a potential moral hazard there. It’s kind of like, well, we will poison those 300 people. So then we know that one doesn’t work. But these 300, they’re going to get the good stuff because we know it didn’t kill them.

Chris: It’s very similar to the trolley problem that is facing another piece of agentive technology on the horizon which is the self-driving car. You guys, have you read the trolley problem?

Per: Not familiar with that phrase, “trolley problem,” no.

Chris: So the trolley problem is a very interesting one where the notion is that you’re standing on a rail track with a switch in hand and an out of control trolley is rolling down the track and you have the opportunity to switch it, to switch the trolley to another track where it might possibly hit one other person or a dog or a bus full of people. The question is, is it ethical to make the switch? There are all sorts of permutations on this problem about — well, is it OK if it’s a young child on one and an octogenarian on the other? Do you prioritise it based on age or do you prioritise it on — is it OK to let the trolley kill someone versus actually causing the trolley to kill someone?

Per: Right.

Chris: It’s a great and very fascinating bit of ethics underneath that. But it’s being tested in the real world as we talk and think about driverless cars. When the driverless car is on the road and a dog jumps out in front of that car, it has got choices and some algorithms that — algorithmic choices that it has to make. Is it going to kill a dog? Is it going to swerve at the risk of the passenger? Does it swerve into a pedestrian? It has got all these trade-offs that it has to make instantly and so this notion of the trolley problem, which was sort of ethics and academic is now very real for the people who have to program. What will the car do in that instance?

Per: Right. And how much should you leave up to the user as well? I mean is that a setting when you get into the car? If we’re more than one person in the car, do this. If it’s only me, then kill me first over someone else and if it’s a dog, run over it. If it’s a cat, don’t do it. Would I be able to control that as a user if it’s my car?

James: You’re too slow Per. You’re never going to be able to because your reaction time is going to basically contaminate that entire process to make it a disaster.

Per: I’m saying that my setting is — I’m setting that before I get into the car, before I drive.

James: Oh, you mean like that.

Per: So that setting is when I buy the car, how do you want to — like those are extras for the car.

James: But that’s like getting on board a spaceship and saying, “I want evil HAL and not good HAL.”

Per: Yeah, exactly. And what’s evil and good?

James: Yeah, it’s personal …

Per: That’s really interesting, yeah.

Chris: And it’s an interesting question not just for us as writers but as a culture. What do we — what will we allow Google to write? How are we comfortable with that algorithm? And like how does the notion of insurance respond to that algorithm? Like, we have to ask very awkward questions about, “Well, what do we value and what do we want to put into code that we value as a culture?” If we have to make real life trade-offs between A, B and C and it’s — those are ugly things. We’ve always been able to deal with them in the abstract or look back and reflect on them. But to actually encode them is a new thing for us.

James: I think — yeah, I think isn’t — it’s part of the solution here maybe that with — well, I’m not going to say infinite amount of data. But with a large amount of data, that you can get the number of scenarios where something nasty will happen down to just small likelihood of occurrence, that the question vanishes.

If we’re looking at some of the data that Google have been releasing today with their cars, they’ve gotten so good at predicting situations that they can — with their radars and sensors and stuff, they can maybe see a car or say like a motorbike coming from a certain direction at a certain speed and they can see it quite a long way away, a lot farther away than you as a driver would ever be able to.

They slow down in advance because they can see it coming and avoid the situation. So it never comes to the point of, “Do we crash into the cyclist or the bus?” because well, we actually know there’s going to — there would be an accident here. So we do something about it.

Chris: Yeah. Ideally, that’s where we’re going to get to — especially if the motorcycle is being controlled by a similar AI. They just talk to each other, right? Or they talk to the traffic conductor AI and the traffic conductor says, “Yeah, you need to slow down,” right? That would be your most ideal mode.

But speaking here as I am in California, I can tell you that an earthquake will upend those decisions very quickly — or any natural disaster where suddenly the AI’s best plan is thwarted and it’s up to the individual atomic objects in the system to decide what to do. Those are going to be ugly and sudden, and we’re going to have lots of data around them to analyse in retrospect. But in the moment, it won’t matter what our best intentions were.

Per: Yeah.

James: So if you think about the challenges for us as designers and how we can — what we can do about this, what are those? What’s facing us directly?

Chris: Well, I think the first thing is that we are going to have to shift the scenarios of use, because the notions of designing for a tool as we’ve codified them over the last, say, 30 years are things like affordances and constraints. What are the outputs? How do we signal the state of the system to the user such that they know what their next step is? Accomplishing a task.

That’s all well and good, and we will even be using some of that. But moreover, with an agent you have the agent doing most of the work. So your work is going to be on: well, how do we convey what this particular agent can do? How do we find out how the user wants the agent to do its work? That can be explicit, like the settings we were talking about moments before. Which of the kill algorithms do I pick for this ride?

But it also implies some inferences, like you talked about with Spotify. Spotify just watches you and then it infers some things. Now, the inference is never going to be 100 percent correct, so you also need mechanisms to be able to say, “Oh, I see your recommendations. Let me either accept or decline, and tell you why I’m declining, so that you can get smarter about the things you recommend in the future.”

So there’s a whole set of questions around, well, where do we get those preferences from? Then there’s of course launching and pre-launching an agent. Especially for things like the car, or medicine, or money, you kind of want to let it run for a little while and see if you trust its results. I’ve called that pattern “a hood to look under”, right?

If you imagine horseless carriages at first, people were like, “Well, what is this demon device?” and I’m going to look in and see if I trust it before I actually just plan my day around being able to get in it, start it up and drive away. In a similar sense, we’re going to have to do the same thing with agents as well. Let me see what it would do with my money and then say, “OK, I think we’ve got it right now. I will pull the trigger and I will actually give you the agency to do some investment or take the remaining change from any purchase that I make and apply it to savings.”

Then once an agent is up and running, you’ve got a whole new slew of things. You’ve got to know that it’s running, because if I came home and my Roomba was just sitting there, I would wonder, “Well, did it do its job today or not?” Certainly you can look at the effects of an agent — is my floor clean? But in many cases, or over the super long term, you’re not going to see those direct effects. So we need to signal that sense that it’s working.

You have new use cases where the algorithm isn’t 100 percent confident in what it’s about to do and for things like your health or your money, you’re going to want it to interrupt you just like a good butler and say, hey, we’ve got a choice to make here. Here’s what I recommend. But really, I need to know what you want to do.
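That confidence-gated interruption — act silently when sure, interrupt like a good butler when not — is a small pattern in code. The threshold and the finance examples here are placeholders for illustration, not a real product:

```python
# Human-in-the-loop confidence gating: the agent acts on its own when it's
# confident, and interrupts the user with a recommendation when it isn't.
# The 0.9 threshold and the example decisions are illustrative.

CONFIDENCE_THRESHOLD = 0.9  # below this, ask the human first

def decide(action, confidence, act, ask):
    """Either perform `action` or escalate it as a question."""
    if confidence >= CONFIDENCE_THRESHOLD:
        act(action)  # proceed silently on the user's behalf
    else:
        ask(f"I recommend '{action}' but I'm only {confidence:.0%} sure. OK?")

acted, prompts = [], []
decide("rebalance portfolio", 0.97, acted.append, prompts.append)
decide("sell all bonds", 0.55, acted.append, prompts.append)
print(acted, prompts)
```

Where that threshold sits would itself be a design decision, presumably stricter for health or money than for vacuuming.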

Then there are those problems when the Roomba gets stuck and our agents are just going to wind up sort of crashing and we have to have a way for the user to gracefully degrade that. There’s a lot of talk right now with the Google driverless cars about — or, I’m sorry, driverless cars in general about whether or not the driver seat faces forward or back. Back is safer. And if you’re not driving the car, why do you need to face forward? But in the case of a graceful degradation, you need to minimise the amount of time that it takes to literally just grab the wheel and steer it into a safe place. So it’s a bit of a challenge.

Per: Plus you don’t want to feel sick.

Chris: Yeah, exactly, although I ride BART a lot, and riding backwards I get used to fairly quickly. And then there are a few other cases where you want to pause or restart that agent. There’s even — one of the big problems about agents that keep working is the morbid problem of death. When their user dies, how do they know? How do you stop getting notifications that it’s the birthday of a friend of yours who has died? Because that hits the remaining humans pretty hard.

So there are some new scenarios that we have to attend to around telling that agent when to infer that it needs to stop and even — well, what does it do with the resources that it has been granted in that case?

So all of those to me feel new and really particular to agents, and give me, as a designer, a lot to sink my teeth into. And the times I’ve been able to work on projects with this new technology, I’ve really enjoyed the novelty of those problems.

Per: Wow.

James: Yeah, it’s interesting, because you have to take affordance to a new level, but you also have to take trust to a new level. With traditional technology, you build up trust in order to dare to take an action, whereas here, you’ve got to build up trust in order to let that action go and let the agent take care of it. On top of that, we have what you’ve just talked about with death and so on. That comes into the security aspect of this — well, trust and security. Can I trust the agent with not just my command but also maybe Per’s command in the event of something happening to me?

Per: It does sound like there’s a whole new school of learning here that we have to really pay attention to. But are you saying that this is something that UXers really need to get on top of and learn more about now?

Chris: Yes, I believe so, for a couple of reasons. One is that examples of agentive tech are popping up in many places. Obviously, once I sort of put these goggles on and started to look for them, I was able to identify them certainly more than other folks, and it’s cropping up more. I think the reason it’s cropping up more is that it’s the natural evolution of user-centred design, right? We’ve been working for decades to get the maximum effect for our users with the minimum amount of effort, and well, having the product or the service do the thing accomplishes that as best as possible.

There is, as we talked about earlier when we were talking about the categories of artificial intelligence, one other category, which is assistive — another type of narrow AI that designers may be working with. That is different. That does entail a lot of what we learned before: what are the affordances, the constraints, the controls, the mappings, all those sorts of things. I think there are four categories where assistive is the right technology. There’s your job — there should be a trolley problem for this — you’ve heard of people who get a job and then farm that job out overseas for cheaper. They just literally sit back and rake in the money.

Per: Yes.

Chris: And I don’t think anyone says that that’s an ethical way to handle it. So I believe that your job is going to be something where you need an assistive technology, whether you’re a surgeon or a baseball pitcher or an interaction designer. There’s art, which is akin, right? Which is something that you want to be doing, and that you want to be doing well, so you don’t want to push that off to an agent. We certainly have skills acquisition, where if a kid is learning to drive, for instance — if we still have cars you drive — or if you’re trying to learn a new sport or learn a new recipe. Those are going to want to be assisted, because the technology helps you by being a scaffold to the new knowledge that you want.

Oh, rats! I said four. I’m busted because I can’t remember the fourth one. It’s art, it’s your job, it’s skills acquisition. Oh, I will have to look it up. But anyway, your question was, do designers — actually, I won’t tell you, and then people will be really interested in seeing me talk. Keep people wanting more.

But I think in the future if artificial narrow intelligence is going to be the default for which we design, then for every project, you’re going to have to decide, “Should this be an assistive technology? Should this be an agentive technology?” and then go.

But that said, I think we’ve got a lot of really good tools for designing assistive tech, assistive AI. Our new challenge as an industry is to figure out what to do with this other narrow artificial intelligence.

Per: Yeah, I recently saw a TED Talk by Oscar Schwartz about poetry created by computers, and the dilemma there was that people could be moved by a piece of poetry that was really well-articulated, but when they found out that it was created by a computer, they were creeped out. So I think that must be one of the big challenges in all this: making it not creep you out — you think you’re interacting with a human because it’s that good, and then it does something and you realise, oh my god, I’m not talking to a human.

You need to be upfront about what you’re doing, and people need to understand and go into this with an open mind, probably.

Chris: Well, even then, you don’t want to connect to something that can’t really connect. It’s the philosophical zombie problem, right? If it looks and behaves like a person, passes the Turing test in every sense of the word, but in fact doesn’t have the same humanness that I have, then it does feel like betrayal.

Per: Yeah, exactly.

Chris: Or not betrayal — that’s worse, because you know somebody had to create that algorithm, so you might be betrayed by them. But it’s like you’ve been led down this sort of primrose path — this empty thing that looked really beautiful but had no ultimate meaning. You guys are actually hitting on the other project that I’m working on in the background. We should talk sometime about the role of randomness in generation. It’s a favourite topic of mine, but a giant …

Per: OK.

James: We can save that for another day.

Per: It does seem like everything is moving in this direction now. Just yesterday, irrespective of the fact that we were going to interview you today, I saw two articles. One was about this robot lawyer that was helping people with traffic tickets, with parking tickets.

Chris: Yeah, I read that …

Per: And one about digital medical diagnostics — machines are doing a better job than humans at diagnosing diseases now, which is really interesting as well. So we won’t have jobs in the future, basically.

Chris: Well, that’s another big cultural question that we have to face as we start to include a lot more artificial intelligence in our lives. Humans aren’t the best at everything that we have had to do in the past. We’re going to have to sift out what we want to give, and should give, to computers — even at the cost of a little bit of ego — and what we are really good at, and then answer the other question: OK, well, if computers are good at 99 percent of things, and we still want to do some of those things, what do we do as a culture? It’s a fascinating question that this topic is necessarily going to raise.

Per: Yeah. So people coming out of your workshop — what will they be able to do?

Chris: Well, what we certainly want to do first is expose folks to this way of thinking. Listening to this podcast might do that, if I was persuasive enough. But also to get your hands on it: to take an existing problem and try to rethink that problem from being a tool to being an agent, and touch a little bit on all of those scenarios that I talked about — setup, giving a hood to look under, a launch button. What does it mean to monitor? What does it mean for the system to gracefully degrade when it fails? All those things get a lot more meaning not when you hear them but when you try them yourself.

Per: OK. We’re actually going to be moving into our question session, which is something new we’re doing on UX Podcast. We’re ending our interviews with questions. James will have two questions for you, I will have two questions for you, and you have to grade these on a seven-point scale. You have to grade them — you’re not allowed to comment on them until after we’ve asked all four.

Chris: OK. Wait, what is the grading scale?

Per: One to seven.

Chris: No. I mean, but what does one mean and what does seven mean?

Per: Ah, that’s a good question.

James: Good question.

Per: Seven you agree the most, one you agree the least.

Chris: Oh, got it. OK. OK.

Per: Yeah.

James: Well, seven is the most, one is the least.

Per: Yeah.

Chris: OK. That I least agree with. OK, got it.

James: When we read the questions, you will get it.

Per: So I could start off with — because mine is related to what we were just talking to.

James: So mine is not.

Per: On a scale of one to seven, how afraid should we be of artificial intelligence?

Chris: You’re not going to let me comment. OK, OK, OK. Guys, that’s so mean. I will say three. But that’s going to entail some conversation.

James: Right, yeah, OK. Yeah. OK. I will throw in another one. So on a scale of one to seven, how good can I expect agentive technology to catalogue my music collection for me?

Chris: Seven — ah, six! Six, six.

James: Yeah.

Per: OK. On a scale of one to seven, how important is it for UX professionals to understand agentive technology?

Chris: Five.

James: Here’s my last one. I think this is the one Per was referring to when he mentioned not being connected to what we were talking about. So on a scale of one to seven, how do you rate Star Wars: The Force Awakens?

Chris: Two.

James: Oh!

Per: Oh!

James: Oh! You see now, oh …

Per: That was the best answer.

James: This whole thing about not talking about them, not discussing them — it’s kind of, “What?”

Chris: No, I actually love the framework, because it forces me to pre-think how that discussion is going to go, and I obviously changed my answers in the middle of it. Those were great. But yeah, they all bear some discussion.

Per: You want to comment on one of them before we wrap up?

Chris: Well, I think you guys want to know my opinions on Star Wars.

Per: Yes.

Chris: So I gave it a two, partially because I’m just thinking about it from my own personal enjoyment, which is that I had already anchored to episodes four through six, and episode seven was such a retread of four that I didn’t think it added much to the canon. It camped way too much on our pre-existing love of the characters. It gave us a few new things, a few great new twists, and certainly I love a lot of the choices that Abrams made, but I didn’t need to see it. I’ve already seen that movie, and I thought it was pretty good the first time.

James: I can see why you’ve done that, yeah. I probably would knock points off as well for that, but I don’t think I would get down to two.

Per: No.

Chris: Well, OK, and the other thing — I was going to give it a little bit higher, but then I shifted to, oh well, I will answer for Chris. Because I think what Abrams needed to do culturally was to tell the world, “I will be responsible with this beloved world.” One through three just made us realise that Lucas should not be in charge of his own world, right? And Abrams is saying, “No, it’s cool guys. I’ve got this and you can trust me,” and I think that was the first important step for him to take culturally, because we had lost faith. And I’m really looking forward to eight in 2017. Is that when it’s coming out?

Per: Yeah.

James: Yes. Yeah, you’re right.

Chris: Because I think Abrams will say, OK, now that I’ve gained your trust, we’re going to go for a little ride and that’s the one I’m really looking forward to.

Per: Yeah.

James: That actually does tie back into the topic and is interesting because now we’re talking about the building trust thing again. If we see the entire Star Wars movie collection as an artificial intelligence, it’s now trying to reassure us and learn and get better next time.

Per: Wow, that’s fantastic James.

Chris: Yeah. I think you pulled it back. That’s really good.

James: It has been absolutely excellent talking to you again.

Per: Yes, as always, and looking forward to seeing you in Portugal as well.

Chris: Yeah, we will see you there.

Per: Yeah.


Per: Right? There were a ton of references to movies in there. I just want to let listeners know that we will be posting links to those in our show notes. I will listen back and see if I can find links to them all on IMDb.

James: That’s your homework then, Per.

Per: Yeah.

James: And also, if anyone — we’re bound to have gotten some of them slightly wrong. I think we’ve said this disclaimer before when we’ve been talking to Chris. I’m not as good at spontaneously remembering these references.

Per: Yeah. But that was so much fun, actually hearing him talk about that subject, because it’s really interesting and scary. I think it’s a lot more scary than he let on.

James: It’s challenging. He was challenging as well, rather than scary. I think we’re the ones that are going to let it be scary. I think we can work on this to make it reassuring.

Per: Yeah. Maybe then we can see that — that’s our job. It’s our job as UXers to make this reassuring because it’s going to happen no matter what.

James: And I think if we tie up enough data points, then maybe we can avoid — like I mentioned, maybe we can avoid some of the nasty scenarios because we’ve got a world full of data that will help us deal with that in a good way.

Per: Yeah. So let us know what you thought about the show and it’s time for our listener’s show — listener survey, sorry.

James: See, now we’re going to get bad ratings because you messed up the introduction to it. Yeah, we have a yearly survey that we do, as the good UXers that we are. We need your feedback in order to adapt, learn and grow. We generally do this every March.

Per: I can’t remember the URL. You have to say the URL.

James: Well, do you mean you don’t remember something from last year? It’s really quite straightforward. It’s

Per: Ah, that makes complete sense.

James: Yeah, S-U-R-V-E-Y. It’s also linked from the navigation, the menu on the website, and on top of that, we will put it in the show notes for the show, so you can easily access it from your device while you’re listening — if you’re in a position to actually use your device while listening. Don’t let go of the wheel.

Per: I’m sorry beforehand for the poor design of the survey. It’s all on purpose.

James: Not the questions. You mean the look.

Per: Yeah, I mean the look of it.

James: Yeah.

Per: Yeah.

James: We didn’t manage — well, we had to make it work on mobiles. So this is how it goes.

Per: Yeah, yeah.

James: But we really appreciate your feedback, and it would be nice if you just visited that and spent a couple of minutes filling it in.

Per: And you all know, you can find us pretty much everywhere as UXPodcast on Instagram, on Twitter, on Facebook.

James: Telegram?

Per: Telegram. We have Telegram now. Yeah, we have a broadcast channel on Telegram. So if anybody uses Telegram, which is my new favourite chat client, find us there as well. It would be cool if somebody actually joined. We’ve had one person join so far — last week.

James: Two if you include me.

Per: Yes, that’s true.

James: And you can also sign up for our backstage mailing list by direct message on Twitter, or just email us at and I will personally add you manually to the list.

Per: Oh, that’s your job.

James: Yeah, that’s my homework.

Per: Remember to keep moving.

James: See you on the other side.


[End of transcript]

This is a transcript of a conversation between James Royal-Lawson, Per Axbom and Chris Noessel recorded for UX Podcast in February 2016. To hear more episodes visit or search for UX Podcast in your favourite podcast client.