Interacting with a World of Connected Objects

A quick write-up of a session at the 2014 #foocamp hosted by O’Reilly Media

Tom Coates
Product Club

--

For those of you who don’t know, ‘Foo Camp’ is a yearly event that O’Reilly Media puts on for friends of the company. We all decamp to Sebastopol and do a sort of unconference with drinking, and talk a bit about the things that interest us, the problems we’re having and the ways in which we think technology might evolve in the near future. I’m lucky enough to have been invited a few times over the last decade and I look forward to it enormously.

The last couple of years, it was a little difficult for me to completely connect with the event because the little company I’ve formed with Matt Biddulph was very much in progress and I didn’t feel like I had an enormous amount I could talk about freely. To some extent that was still true this year, but one question has definitely been rattling around in my mind that is related to what we’re currently working on but not a giveaway, so I decided to do a session on it. And because Matt Jones decided to write up his session, I’ve now been guilted into writing this one up too.

First a little background. The last few years have seen an explosion of Internet of Things discussion and a lot of new ideas pushing into the world. It’s been (as I’ve said in a previous piece) an exciting time to be involved in the space. But it definitely feels like it’s still the earliest days and that the discussions are mostly incremental.

Just for one moment, I wanted to push out a little further and see if we could set ourselves a slightly more ambitious goal to work towards. So the session basically had this as its premise:

Specifically I wanted people to imagine a world (let’s say) twenty years from now in which every piece of infrastructure, every new device we buy—everything that uses electricity essentially—can belch out information into the ether. Some of those objects would also be able to take commands of one form or another. And the core question — how the hell do we (or will we) make sense of it all?

Quick aside: I’ve been thinking about some of the questions about what future objects should be pushing out into the world for a while, and in a few of my talks over the last two or three years have basically been arguing that we should put the core intelligence into the cloud, and leave the objects as a set of switches and sensors that can pump out or react to these basic things:

Things every device in the world should do (at least partially inspired by Bruce Sterling’s work on Spimes)

I think my thinking has evolved a bit further since then, but the core point still stands — it’s possible for us to write a core list of things that every object in the world should do, upon which we can build interesting services. And—back to the subject of the session—if every object in the world is doing those things, how the hell do we make sense of it all?
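
To make the ‘dumb object, smart cloud’ idea a bit more concrete, here’s a minimal sketch of the kind of status message such an object might periodically push upwards. The field names and the shape of the payload are my own assumptions for illustration, not the actual list from those talks:

```python
import json
import time

# A hypothetical status message that a "switches and sensors" object might
# periodically push to a cloud service, where all the real intelligence lives.
# The schema here is illustrative, not a real protocol.
def make_status_message(device_id, kind, sensors, switches):
    return json.dumps({
        "device_id": device_id,    # a stable identity for the object
        "kind": kind,              # what sort of thing it is
        "timestamp": time.time(),  # when this state was captured
        "sensors": sensors,        # read-only observations it pumps out
        "switches": switches,      # controllable state the cloud may change
    })

# Example: a lamp reports itself; any cleverness happens server-side.
print(make_status_message(
    "lamp-kitchen-01", "lamp",
    sensors={"power_draw_watts": 9.5},
    switches={"on": True, "brightness": 0.8},
))
```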

Anyway, to give some extra context to people in the session (who were numerous and very clever) I’d asked a few people I knew who had interesting positions on this stuff to give a couple of minutes of initial commentary — in particular Matt Jones, my co-founder Matt Biddulph and Kati London.

Mr Jones has been thinking in interesting areas like this for years with BERG (among other places) and started us off thinking about a few angles focused around objects with personalities. I think he and the BERG crew were among the first of our current era of designers to think about how objects that did things of their own accord might start to feel like creatures or things with intent, and they’ve explored that in a number of ways. Hence projects like Little Printer and Schooloscope, which used a form of Chernoff Faces to express the different personalities of different schools.

In fact some of you might remember the first round of conversation about intelligent agents on the internet. The idea clearly registered with Matt because I remember him talking about them when we worked adjacent to one another at the BBC a thousand years ago. At that time, he was very interested in agents that would go and gather news articles or information for you. He talked about them then in a sort of Philip Pullman style (the best picture I could get was from the movie — sorry).

More recently he did a talk at Webstock which is well worth reading in which he talks about agents and their place in a new world of animistic technologies. It’s very cool stuff and (with entities like Siri) feels like it’s getting ever closer to manifesting in the world…

Another aside: This all reminds me of a sort of argument that I think I saw Adam Greenfield and Mike Kuniavsky have in parallel talks at ETech a thousand years ago, about whether magic and animism were useful metaphors for thinking about ubiquitous computing. God knows how many years on, this feels like a discussion whose time has finally come.

Mr Biddulph came next and talked about a few things that he and I have been playing with, as well as some of his higher-order thoughts. He fleshed out one of Mr Jones’ core issues mentioned earlier — that of objects that act without human intervention feeling ‘spooky’, and how that manifests in human perceptions of objects’ different personalities. You might have the computer that you think is out to get you, or the car that just seems to read your mind when you’re driving, so you love it and pet it and treasure it. I don’t remember if he mentioned it explicitly in the session, but he also brings up Genuine People Personalities a lot (after Douglas Adams), both as a way of explaining why objects do what they do and also—I hope—to point out some of the grotesqueness of every object in the world talking to you incessantly about its opinions.

“All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.”

Yet another aside: For me the idea of objects with personalities has an interesting series of debates around it. At one end you get the ‘joy’ of Beauty and the Beast where all the objects are dancing around and cheering and chatting to one another and some of them are French.

https://www.youtube.com/watch?v=afzmwAKUppU

At the other end there’s a sort of austerity that views this stuff all as terribly kitsch, overwhelming and unpleasant, where the objects around us should disappear into zen-like polished white environments and barely surface at all. It’s probably unfair of me to associate this with Adam Greenfield, but I think it’s fair to say he’d hate the world to be like Beauty and the Beast.

Back to the session — Matt also brought up another concept that he and I had been thinking about a bit: the idea of a chat-room for a home. Essentially, one way to handle this disparity between endlessly chatty places and invisible seamlessness might be to create a parallel space in which objects could speak in human-readable language. Much as a conference might have a chatroom, so might a home. And it might be a space that you could duck into as you pleased to see what was going on. By turning the responses into human language you could make the actions of the objects much less inscrutable.
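
As a rough sketch of how that might work (the event shapes and phrasing templates below are entirely my own invention, not anything Matt showed), you could imagine a thin translation layer that turns raw device events into chat-room sentences:

```python
# A toy translation layer for a "home chat-room": raw device events go in,
# human-readable chat lines come out. The event shapes and phrasing
# templates are illustrative assumptions, not a real protocol.
TEMPLATES = {
    "door_opened": "{name}: I just opened.",
    "temp_changed": "{name}: temperature is now {value}°C.",
    "switched_on": "{name}: I've turned myself on because {reason}.",
}

def to_chat_line(event):
    template = TEMPLATES.get(event["type"], "{name}: something happened ({type}).")
    return template.format(**event)

# The "room" is just a log you can duck into whenever you please.
room = [
    to_chat_line({"type": "door_opened", "name": "Front door"}),
    to_chat_line({"type": "temp_changed", "name": "Living room thermostat", "value": 21}),
    to_chat_line({"type": "switched_on", "name": "Porch light", "reason": "it got dark"}),
]
print("\n".join(room))
```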

Finally, Kati would have talked, but she was caught up with something else and arrived late, so instead I summarised a little of the work that I’d seen her present at O’Reilly’s Solid Conference a few weeks earlier. She’s currently working at Microsoft Research and had been talking about a project to take urban data and make it comprehensible by giving it a personality, and even digesting it down into little cartoon fragments that could be grokked in moments. Her talk (again) was called Humanizing Data: Implementing a Genuine People Personality Server. Unfortunately I can’t find video for it anywhere open online, but if you get a chance to see her speak you should. She also talked about a lot of other personality-based technologies she’s worked on, including the amazing Botanicalls. Her talk was awesome. Here are a few screencaps from a source that isn’t openly available:

I particularly like the robotic way of saying “CALCULATED EMOTION: IMPATIENT”. Do it in a Dalek voice for the best effect.
This shows the summary cartoons created each day to represent the status of any given NYC neighbourhood.

With the intros out of the way and a few of the core issues surfaced, we went into a larger brainstorm structure, with the goal of sorting people’s comments into two main sections — problems that we could imagine being generated by this world of connected objects (or that already existed), and UI metaphors that we might be able to use to understand this world. I can’t possibly recount all the brilliant comments and thoughts and the people who contributed, particularly as I didn’t know many people’s names, but I should make a nod to Kathy Sierra, Linda Stone, Dan Saffer and Scott Jenson, who were all gracious enough to contribute. Everyone else, thanks so much for your participation.

At around the 45-minute mark we’d constructed this list, which I found really interesting and which has triggered a lot of ideas for me:

A few interesting things emerged for me — notably how much organising your smart objects feels like a chore to people. There was lots of discussion about having to become a software engineer to understand the rule-making systems, and some conversation about how you should be able to buy off-the-shelf structures like “Crate and Barrel’s Smart Home Ruleset” (I believe that was a Sierra-ism). There was another set of conversations about how hard these things were to set up, and how much data and noise people might be expected to deal with. I don’t think I was particularly surprised by any of the specific issues people had with connected objects, but I was definitely surprised by how strongly people felt them. It hadn’t really occurred to me that people might actually feel in conflict with their environments — the benefit/cost of being deeply immersed in technology on a daily basis, I suppose…
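
Half-jokingly, it’s worth imagining what shipping a pre-packaged ruleset might actually look like. In this sketch (the names, fields and rules are entirely invented) the rules are plain data rather than code, so a vendor could bundle and sell them rather than asking you to program your own:

```python
# A toy "off-the-shelf ruleset": each rule is plain data (condition + action),
# so a retailer could ship a bundle you install rather than program yourself.
# Everything here -- names, fields, the rules themselves -- is invented.
CRATE_AND_BARREL_SMART_HOME = [
    {"if": {"sensor": "motion", "room": "hallway", "equals": True},
     "then": {"device": "hallway_light", "set": {"on": True}}},
    {"if": {"sensor": "everyone_out", "room": "house", "equals": True},
     "then": {"device": "thermostat", "set": {"target_c": 16}}},
]

def apply_rules(ruleset, readings, send_command):
    """Fire each rule whose condition matches the current sensor readings."""
    for rule in ruleset:
        cond = rule["if"]
        if readings.get((cond["sensor"], cond["room"])) == cond["equals"]:
            send_command(rule["then"]["device"], rule["then"]["set"])

apply_rules(
    CRATE_AND_BARREL_SMART_HOME,
    readings={("motion", "hallway"): True, ("everyone_out", "house"): False},
    send_command=lambda device, state: print(f"-> {device}: {state}"),
)
```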

Another issue that came up was about how some of this technology assumed people lived very organised and structured lives with little variety and no messy events or actors within them. Houseguests, less technical partners, pets and children all came up as wonderful elements of everyday life that technologists just seemed to have forgotten existed. While I think a bit of that feeling is about pushing the blame for feeling confused by new tech onto the old stereotype of the slightly autistic, technical white male (all technology, after all, starts in an imperfectly-thought-through form), there’s no denying that most contemporary systems deal with these situations very poorly indeed.

I think it was in the section on metaphors that the session really came alive though. Matt Jones talked about BERG’s work and how they’d started to think of objects as fitting into categories running from the inert to the actively participating — ‘saucepans, houseplants, puppies and people’ were, I think, the categories he suggested. If you’re putting out a smart fridge, where does that fit? It’s probably closer to a houseplant than a person, and has equivalently limited needs to communicate…
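
One way you could operationalise that spectrum (and this framing is mine, not BERG’s) is as an agency level that caps how much a given object is allowed to say for itself:

```python
# Treating the 'saucepans, houseplants, puppies and people' spectrum as a
# policy that gates how chatty an object may be. The levels and the
# thresholds are my own guesses, purely for illustration.
AGENCY_LEVELS = {
    "saucepan":   {"may_initiate": False, "max_messages_per_day": 0},
    "houseplant": {"may_initiate": True,  "max_messages_per_day": 1},
    "puppy":      {"may_initiate": True,  "max_messages_per_day": 10},
    "person":     {"may_initiate": True,  "max_messages_per_day": None},  # unlimited
}

def may_speak(kind, messages_sent_today):
    policy = AGENCY_LEVELS[kind]
    cap = policy["max_messages_per_day"]
    return policy["may_initiate"] and (cap is None or messages_sent_today < cap)

# A smart fridge pitched at "houseplant" gets one nudge a day, no more.
print(may_speak("houseplant", messages_sent_today=0))  # True
print(may_speak("houseplant", messages_sent_today=1))  # False
```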

And of course the personality element was very present again — to what extent would the future be instruction-based remote controls (physical, app-based or voice-controlled, with the object merely taking instruction), or verbose and acting in the world, like JARVIS from the Iron Man movies or HAL from 2001: A Space Odyssey? Or was the ‘voice’ of the future really a hundred billion voices, with every object expressing itself separately and collaborating behind your back…

JARVIS
HAL

Super quick aside this time — this conversation reminded me that I’d sketched out a tiny picture of a toaster thinking about its life that I don’t think I’d ever really put online. So here it is:

The end of the session was probably the most revealing part of the whole enterprise. I’d decided to do a quick pop-quiz about how we thought the future would unfold, and very quickly people agreed that there was a significant difference between what they wanted to happen and what they thought would happen. When we did a quick hand-raising exercise at the end of the session, the results really surprised me. Again, this was a highly technical audience, but the general consensus was that what people wanted was some kind of concierge experience combined with a lot of physical interfaces roughly like the ones we operate with today. The general fear, though, was that we’d end up with a bunch of dashboards and siloed apps, in perpetuity and ad infinitum.

Anyway, thanks to everyone who participated in the session. I learned a lot and really enjoyed it. I hope you guys did too. And to anyone who has managed to read this thing to the end, well done! Now let’s see if we can collectively create a world of smart objects that makes sense to everyone...

Tom Coates is the co-founder, with Matt Biddulph, of Product Club — a new product development and invention company based in San Francisco. We’ve recently moved from doing consultancy to working on a really exciting start-up project that we’re not quite ready to tell you about yet!

--
