Machine Learning Could Multiply Your Human Agency

Exploring an interface for the physical world


We are in the middle of an interface revolution. This revolution is transforming not only how we interact with our computers, but also how our computers interact with the physical world. For the first time, the power of software is on the verge of escaping the bounds of the screen.

Credit: Apple

A two-sided revolution

There are two sides to this transformation. The first is the manner in which you — a human being — interact with and communicate your will to your computer. For pretty much the entire history of computing, the interface has been mostly a one-way conversation, in which you manipulate buttons to tell the computer what you want it to do. The computer simply presents a pre-determined interface to you, and it’s on you to navigate through and act on these layers of interface. Most of the time this means that specific functionality is grouped together and kept entirely separate from the interface for other functions. This, of course, is how apps work; each app does something specific, and it’s up to you to manually navigate between the interfaces for each specific app and each specific function you want to use. You open the Uber app to call a ride, and open the Music app to play a song — each app handling a specific, narrow function.

But recently, this static, self-serve interface format has been giving way to something new: Conversational User Interfaces (CUIs). CUIs come in a number of formats, be it a voice-based interface — like that of Apple’s Siri or Amazon’s Alexa — or a text-based, chat-like interface, popularized recently with the resurgence of chat bots. Chat bots have made a big appearance in the productivity messaging platform Slack, and have also spawned standalone apps like Operator, which offers personalized shopping via a chat interface. Whether voice- or text-based, both of these interactions are fundamentally forms of CUI — in which you communicate your will to the computer not through button presses, but through language and dialogue.

Now, a recent trend has been the rise of numerous specialized CUIs. You might have one chat bot that handles setting up your Slack profile, another that helps you shop for things online; each is tailored to perform one specific function. On the other hand, you have more general-purpose CUI assistants like Siri and Alexa, which aim to be the central point of interaction for all sorts of functionality.

The wonderfully insightful writings of Chris Messina and Jonathan Libov have had a huge influence on my thinking on the topic of conversational interfaces, and I highly recommend you check out both of them:
2016 Will be the Year of Conversational Commerce — Chris Messina
Futures of Text — Jonathan Libov

General conversational interfaces are wonderful because instead of having to manipulate and navigate layers of UI yourself, the computer brings the relevant functionality to you. Take, for example, playing a specific song, let’s say Lost in the World by Kanye West. With a traditional UI you’d need to take out and unlock your device, navigate to and through the Music app’s UI, eventually finding and tapping on the icon or list item for Lost in the World to start playing it. With a conversational interface, you simply get Siri’s attention, say what song you want to play, and you’re done! You don’t even need to pick up or unlock your device.
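To make that contrast concrete, here is a minimal, purely hypothetical sketch of the idea at the heart of a CUI: an utterance gets turned into an intent, and the intent is routed straight to the relevant functionality, with no menus or screens in between. None of these types or functions come from a real framework — the Intent struct and the parse and handle functions are invented for illustration, and a real assistant would use speech recognition and a trained language model rather than string matching.

```swift
import Foundation

// Hypothetical sketch: routing a spoken request straight to a function call.
// These types are invented for illustration; they only show the idea that a CUI
// maps language onto functionality, rather than making you navigate to it.

struct Intent {
    let action: String   // e.g. "play"
    let subject: String  // e.g. "Lost in the World"
}

// A deliberately naive "parser": a real assistant would use speech recognition
// and a trained language model instead of string matching.
func parse(_ utterance: String) -> Intent? {
    guard utterance.lowercased().hasPrefix("play ") else { return nil }
    let song = String(utterance.dropFirst("play ".count))
    return Intent(action: "play", subject: song)
}

func handle(_ intent: Intent) {
    switch intent.action {
    case "play":
        print("Now playing: \(intent.subject)")  // stand-in for the Music app's playback call
    default:
        print("Sorry, I can't do that yet.")
    }
}

if let intent = parse("Play Lost in the World") {
    handle(intent)  // prints "Now playing: Lost in the World"
}
```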

And this music-playing example is already old news these days. What’s really exciting, though, is that these conversational agents are now starting to be given access to more of our data and more of the services we use. The beautifully simple interaction of language and dialogue can now be used to effortlessly access and search your files, or even call a car to pick you up; and this second use case of calling a car is where things really get interesting, as we’ll see towards the end of this article…

iOS: Now conversational first. Credit: Apple.

Digital Personas

I believe general-purpose CUI assistants are the early forms of what will be the most important interface paradigm of the coming decade. I call the soon-to-arrive upgraded form of these conversational assistants Digital Personas. There are three distinct attributes that set Digital Personas apart from current conversational assistants. Firstly, Digital Personas maintain context for you throughout space and time. As you move through your day, your Digital Persona will remember what you asked it earlier that morning, and even develop a sense of when might be a good time to remind you about certain things, based on your usual schedule and moods. Additionally, access to your Digital Persona will be ubiquitous. Just as you can log into your Facebook account and see your messages on just about any device today, soon a more complete picture of your computing context will be available just about anywhere, courtesy of your Digital Persona. We are beginning to see the early stages of this in what Apple has been doing with iCloud lately. Already your computer desktop, messaging conversations, files, application windows, and even Siri are available nearly seamlessly across all varieties of Apple devices. When this ubiquity of computing is perfected, your Digital Persona will summon your entire computing context to you whenever and wherever you need it.

The first stages of ubiquitous computing becoming real. Credit: Apple.

Secondly, Digital Personas, as their name suggests, are personal. Your Digital Persona will be aware of your schedule, your conversations with your contacts, your preferences for music and aesthetics, and more. In a sense, the Digital Persona becomes a representation of your own personality. This allows the Digital Persona to act as a personal ambassador, customizing the experiences you encounter in your daily life. This is an important distinction from current CUIs, because each person’s Digital Persona will come to be unique depending on the needs, preferences, and desires of its user. For example, the Digital Persona may come to learn that you are a die-hard UCLA basketball fan, or even what your favorite restaurant is. Through this natural familiarity, the Digital Persona will be able to serve you more accurately and proactively. Through this adaptation, your Digital Persona becomes not just an interface, but a loose representation of yourself in digital form. Now clearly, the notion of having a Digital Persona that knows so much about you and your life raises a number of concerns when it comes to privacy. However, the work Apple has done lately with differential privacy in iOS 10, along with its ability to keep any machine learning that directly involves your private data local to the device, has me hopeful about the privacy outlook. I believe the same privacy-protecting techniques can be preserved while enabling the deep personalization Digital Personas could one day offer.

Here’s a great write-up by Wired about Apple’s latest privacy techniques:
Apple’s ‘Differential Privacy’ Is About Collecting Your Data — But Not ​Your Data
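To give a feel for what differential privacy means in practice, here is a minimal sketch of the classic randomized-response mechanism, the simplest form of local differential privacy. To be clear, this is not Apple’s actual implementation (which is considerably more sophisticated), and the function names below are mine; the example only illustrates the core idea that noise is injected on the device before anything is reported, so any individual answer is deniable, yet accurate statistics can still be recovered across a large population.

```swift
import Foundation

// Minimal sketch of local differential privacy via classic randomized response.
// This is NOT Apple's implementation; it only illustrates the core idea:
// noise is added on-device, so no single report reveals a fact about you,
// yet aggregate statistics across many users stay accurate.

/// Report whether the user has some sensitive attribute (e.g. "is a die-hard
/// UCLA basketball fan"), with plausible deniability built in.
func randomizedResponse(truth: Bool) -> Bool {
    if Bool.random() {
        return truth          // half the time: answer honestly
    } else {
        return Bool.random()  // other half: answer with a coin flip
    }
}

/// Recover an estimate of the true proportion from noisy reports.
/// If p is the true rate, the expected reported rate is 0.5 * p + 0.25,
/// so we invert that relationship.
func estimateTrueRate(reports: [Bool]) -> Double {
    let reportedRate = Double(reports.filter { $0 }.count) / Double(reports.count)
    return (reportedRate - 0.25) / 0.5
}

// Simulate 100,000 users, 30% of whom truly have the attribute.
let trueRate = 0.3
let reports = (0..<100_000).map { _ in
    randomizedResponse(truth: Double.random(in: 0..<1) < trueRate)
}
print("Estimated rate:", estimateTrueRate(reports: reports))  // ~0.3, despite every report being noisy
```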

Thirdly, and perhaps most importantly, Digital Personas will serve as a central point of interaction. This is a powerful combination of the centralized interaction that CUIs of today already provide and the personalized attributes of Digital Personas. As stated before, this enables Digital Personas to act as personal ambassadors between you and the rest of the digital world. Now, in a purely computing context this all seems a bit idealistic, and perhaps even unnecessary. If you’re anything like me, the stuff about ‘personalization’ above started reading less like a useful feature and more like a privacy nightmare. However, the reason this personalization and awareness of context is so important is the interesting use case we saw earlier: calling a car to pick you up.

Embodiments

This leads us to the second half of this interface revolution. While Digital Personas offer a new form of interface for input, to really have a transformative impact on our lives they need an equally powerful new form of output interface.

Since the dawn of computing, the power of software has been largely restricted to the bounds of the computer display. Software’s ability to act on the physical world has been mostly limited to indirect effects, with the exception of large-scale industrial applications. However, it seems likely that in the very near future we will gain the ability to apply the power of software to the physical world of our daily lives. Astounding improvements in machine vision and machine learning are enabling computers to act on the physical world in ways never before possible. Autonomous cars are probably the most high-profile form of this transformation, but the real, fundamental change here is that machines are gaining the ability to understand and navigate the physical world, and thus to act in and on it without needing direct human operation. I call this broad category of autonomous objects Embodiments.

Uber is aiming to be a big Embodiment provider!
“Civilization advances by extending the number of important operations which we can perform without thinking about them.” — Alfred North Whitehead

This quote of Whitehead’s is exactly what led me to the concept of Embodiments. Whitehead captures precisely what technology is at its most fundamental level. Ultimately, all technology is simply an extension, amplification, or automation of human ability. It allows us to do more in less time, or to no longer have to do certain tasks ourselves at all. And Embodiments lie directly in this lineage of technology as an enhancer of human ability. Quite literally, Embodiments will extend our ability to act on the physical world, enabling us to perform multiple tasks at the same time, even in totally different places simultaneously. Autonomous objects will be Embodiments of our own human will and agency. And personal transportation is just one of the huge number of tasks we can have Embodiments handle for us — just as autonomous cars are just one of the many forms of Embodiments we will have at our disposal. Just imagine: the simple weekly errands that add up to consume so much of our free time could largely be taken care of entirely by these Embodiments. My grandparents talk of how, in their childhood, it was common for families to keep chickens in the backyard for their eggs and even to eat. And this was in Los Angeles! Obviously this is not commonplace in Los Angeles today, 70 years later; I don’t personally know of anyone who raises their own livestock in the city anymore. Similarly, I can imagine my own future grandchildren 50 years from now thinking how ridiculous it was that people had to personally make trips to a bunch of different stores just to pick up weekly necessities like food and household goods. This is because simple errands like shopping for staple goods are perfect tasks for Embodiments to take care of for us. You likely won’t have to go pick these items up yourself, because they will simply be collected by the autonomous car you order before it picks you up from work, or an autonomous delivery truck might bring them right to your front door. And you can apply this same automation to many other mundane, recurring tasks. You could have Embodiments washing your clothes, cleaning your house, and cooking a custom meal for you, all while you sit and read a book.

However, for the massive benefit Embodiments can bring to our daily lives to be truly useful, and dare I say humane, we need an equally powerful means of communicating our will to the machines acting on our behalf. Physical extensions of ourselves will only be truly beneficial so long as we don’t become consumed with micromanaging them.

Early in-home Embodiments from GE. Credit: Amazon.
“By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race.” — Alfred North Whitehead

Couldn’t resist throwing another Whitehead quote in here. While Whitehead was talking about mathematical notation, well-designed software can have a similar brain-relieving effect. As I said at the beginning of this article, we are on the verge of a two-sided revolution. Autonomous Embodiments liberating us from a great many physical tasks is only half of what we’re unlocking. The other half is the substantial mental burden automation will relieve us of. This is precisely where the Digital Persona comes into play. The Digital Persona will serve as the primary interaction point between ourselves and our Embodiments. In fact, you won’t even need to directly interact with many of the Embodiments working to serve you. Instead, the Digital Persona will handle the details, scheduling and coordinating your Embodiments to work in concert, all while requiring little to no manual input from you. The Digital Persona is an amplification of your personal will, and can multiply that will in the physical world through Embodiments, which are an amplification of your personal physical agency.
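As a rough illustration of that coordination role, here is a purely hypothetical sketch of a persona dispatching errands to whatever embodiments can handle them. The DigitalPersona, Embodiment, and Errand types are invented for this example and don’t correspond to any real product or framework; the point is simply that the persona, not the user, decides which embodiment handles which task.

```swift
import Foundation

// Purely hypothetical sketch: a persona dispatching tasks to embodiments
// so the user never has to micromanage them. All names are invented.

struct Errand {
    let description: String
    let deadline: Date
}

protocol Embodiment {
    var name: String { get }
    func canPerform(_ errand: Errand) -> Bool
    func perform(_ errand: Errand)
}

struct DeliveryVehicle: Embodiment {
    let name = "Autonomous delivery vehicle"
    func canPerform(_ errand: Errand) -> Bool { errand.description.contains("groceries") }
    func perform(_ errand: Errand) { print("\(name) handling: \(errand.description)") }
}

struct DigitalPersona {
    var embodiments: [Embodiment]

    // The persona, not the user, matches each errand to a capable embodiment.
    func dispatch(_ errands: [Errand]) {
        for errand in errands {
            if let worker = embodiments.first(where: { $0.canPerform(errand) }) {
                worker.perform(errand)
            } else {
                print("No embodiment available for: \(errand.description); asking the user.")
            }
        }
    }
}

let persona = DigitalPersona(embodiments: [DeliveryVehicle()])
persona.dispatch([Errand(description: "pick up groceries", deadline: Date().addingTimeInterval(3600))])
// Prints: "Autonomous delivery vehicle handling: pick up groceries"
```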

We are on the verge of being personally freed from needing to employ our own mental and physical effort to do all sorts of small tasks. Freedom from spending an hour or more of your day operating a car is only the first level. This liberation of mental and physical resources scales throughout society. This same revolution is going to free companies such as Uber from having to employ other humans’ mental and physical efforts in operating cars as well. The scale of influence in the world that once required many human employees could soon become available to individual people at the press of a button.


Thank you so much for reading these thoughts on where I hope the near future of technology might take us. If you enjoyed the read, I would be ever so grateful if you hit the like button below. And if you’d like to discuss technology or philosophy with me, I’d love to hear from you on Twitter: @Evan_Sullivan

:D