Betaworks CEO: There Will Be No Line Between Us and Our Devices

A critical look at artificial and augmented intelligence

John Borthwick
Backchannel


Artificial intelligence is back. Whether in the dystopian portrayals of recent movies or the utopian singularities dreamed of in the tech world, the general agreement is that we are on the path to thinking machines. But as fun, twisted and thought-provoking as the dystopian show Black Mirror is, I don’t believe machines are going to think or achieve a human level of consciousness any time soon.

I want to focus on a different dimension of our relationship to machines—how we are integrating computing into ourselves. How we are augmenting ourselves with technology. I believe this augmentation and integration is transforming us and our world faster than any external singularity event.

In some cases we are augmenting our intelligence, but in others we are dumbing ourselves down to accommodate poorly designed software or hardware. Like the proverbial frog in a slowly warming pot of water, we are boiling ourselves in this new world. Aspects of this integration are functional and clearly visible—wearables, watches, beacons and nearables, haptic triggers and navigation, virtual reality and augmented reality, even a bionic mattress that emulates the womb to soothe premature babies. The list is long and getting longer.

For more on the coming AI revolution, and whether it will be a nice God, see Wait But Why: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

As we connect devices and services, complexity rises exponentially. My IFTTT triggers connect to a set of web services that I use a lot, yet occasionally I experience unexpected collisions that take a while to unwind. For example, in the middle of a winter storm, my Nest thermostat got a point software upgrade and decided it was summertime. It took a while to unravel the chain of interdependencies that resulted in my pipes freezing. We are at the beginning of this curve, right at the beginning. As a culture we are becoming dependent on the network and its increasingly complex interconnections.
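To make that failure mode concrete, here is a minimal sketch of how two individually sensible automation rules can collide. This is illustrative Python only; every rule and device name is invented, and none of it is IFTTT's or Nest's actual API:

```python
# A toy trigger system in the spirit of IFTTT-style rules.
# Every name here is hypothetical; this is not a real API.

class Thermostat:
    def __init__(self):
        self.season = "winter"
        self.target_f = 68

def summer_rule(t):
    # Sensible alone: in summer, don't run the furnace.
    if t.season == "summer":
        t.target_f = 50

def winter_rule(t):
    # Sensible alone: in winter, stay warm enough to protect the pipes.
    if t.season == "winter":
        t.target_f = max(t.target_f, 68)

t = Thermostat()
t.season = "summer"   # a point upgrade silently resets shared state mid-storm
for rule in (summer_rule, winter_rule):
    rule(t)
print(t.target_f)     # 50: the furnace idles during a winter storm
```

Each rule is defensible on its own; it's the shared, silently changed state that produces the freeze.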

Our personal phones are an obvious example. It’s fair to say my attention is either directly or ambiently connected to this device, and its network, throughout the day. So for a month I tracked my usage, to get a more concrete view of my dependence.

Tracking phone usage

The graph above shows my phone usage from December 3 to January 6. On average I used my phone for 162 minutes a day (the blue line). The number of times I pick up or manually activate my phone is more variable—it’s the orange line—and averages out at 35+ a day. The average time per use is 6 minutes.

This isn’t far off the norm. The radio show New Tech City, on WNYC, has a project in which 12,000 people are volunteering their phone usage data. Estimates from that study suggest we use our phones for an average of 95 minutes per day and actually pick them up 50 to 60 times daily. In the spring of 2014 Millward Brown, a market research company, reported the results of a survey, which showed that people in the U.S. spend 151 minutes per day on their smartphones, as compared to 147 in front of TVs. China’s consumers use their smartphones even more, with an average of 170 minutes a day. Our mental focus on our devices is indistinguishable from immersion.

Step by step we are integrating computing into our selves, blurring the line between natural biological intelligence and augmented intelligence. Over the span of a year, my 162 minutes a day adds up to 41 days. That’s a lot of attention, and that’s only the directed attention — it doesn't include my ambient attention.
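That 41-day figure is easy to verify with quick arithmetic:

```python
minutes_per_day = 162
total_minutes = minutes_per_day * 365     # 59,130 minutes a year
full_days = total_minutes / (60 * 24)     # divide by the minutes in a day
print(round(full_days, 1))                # -> 41.1 days of directed attention
```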

Here are three specific ways in which I see computers being integrated into the human experience: at times augmenting it, at times not.

Our Prosthetic Self

The 2020 Tokyo Paralympics will be the first in history where, across a majority of categories, athletes will outperform the abilities of contestants in the Olympics. It might happen in Rio in 2016, but my hunch is that it won’t be until 2020 that outperformance will apply to a majority of categories. We saw the start of this in 2012 with Oscar Pistorius’s entry into the Olympics. Studies of his legs concluded that he was using 25% less energy than his biological counterparts. Once you add automation, the increases in efficiency will become even more pronounced.

As a society we are moving beyond viewing disabilities as limiting and restrictive; prosthetics are becoming augmentations of our biological bodies. The chance to transcend disabilities has been a dream of people in this field, and it’s finally becoming a reality. It’s hard to get a sense of the scale of this, but I believe this development will affect a material part of the population. Extend what you think of as prosthetics today to include pacemakers, embedded corneal lenses, contacts (with zooming lenses), artificial joints, braces, drug implant systems, artificial skin and glasses, and you get a sense of the scope of the change. Prosthetics, exoskeletons, soft exosuits and other physical enhancements can extend our natural limitations. With the assistance of machines and data we can more safely navigate our lives and surroundings.

I was at an event last year where I saw Missy Cummings speak about robotics, drones and what the Air Force refers to as the “Mode 1” approach, in which the pilot relinquishes takeoffs and landings because computers can perform them so much more safely. Missy explained how the last thing she was required to do as a fighter pilot before taking off from an aircraft carrier was to hold up both her hands—in an “I surrender” gesture—saying to the people on the flight deck: “Look, no hands!” Nice metaphor. Autonomous planes and trains are already in use. By 2020 self-driving cars will be on our roads, and as Sebastian Thrun and others have discussed, they will make our roads much safer.

Every time I see a wearable I ask myself—could this be integrated into our selves? I’m not making an ethical statement about our biological selves versus our prosthetic selves. I am seeking to reset the framing of the computer as an object and us as the person manipulating that computer. Computers are no longer that “other” thing, that “other” object. The line between machines and humans is disappearing.

This spring Apple will ship its watch. The product has spurred a lot of discussion: do people want a computer on their wrist? I don’t think that’s what this device is about, and it’s certainly not how Apple has designed it. On the Apple Watch promotional site, the company makes three product commitments: timekeeping, new ways to connect, and health and fitness. The first is obvious — it’s a watch! Thankfully, Apple has remembered that. As for the second commitment, Apple says, “You’ll express yourself in new, fun, and more personal ways. With Apple Watch, every exchange is less about reading words on a screen and more about making a genuine connection.” Outside of the navigational crown, the Apple Watch has a single button. Its purpose is to connect the device to its cousins worn by our friends and family in what Apple is branding as a genuine connection, as opposed to one mediated through an electronic interface. Apple is seeking to eliminate the interface of a screen and connect the device directly and intimately to our bodies. It’s the difference between seeing something and feeling something.

This connection exploits the device’s ability to use sensors and haptic feedback to communicate your heart rate, location, body temperature and other aspects of your state. You can even tap a special pattern that will be haptically transmitted to a chosen contact—a sort of intimate Morse code. Developers of third-party apps cannot control these aspects of the watch; it’s all Apple. Just as Apple designed the iPod and iTunes to be a tightly coupled combination (as compared with MP3 players, which were essentially portable, retrofitted small hard drives), the watch is designed to be tightly coupled to you and to your iPhone. In this post-computer world, Apple is aiming to create an integrated and far more personal device than the term ‘wearable’ suggests.
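To see why “intimate Morse code” is an apt description, here is a minimal sketch of the idea: text rendered as timed pulses. This is illustrative Python only; as noted above, Apple does not expose the watch’s haptics to third parties, and none of these names are Apple APIs:

```python
# Encode a short message as (pulse_ms, gap_ms) pairs that a
# hypothetical haptic engine could play back as taps.
MORSE = {"h": "....", "i": ".."}

def tap_pattern(message, dot_ms=100):
    pattern = []
    for ch in message.lower():
        for symbol in MORSE.get(ch, ""):
            pulse = dot_ms if symbol == "." else 3 * dot_ms  # dot vs dash
            pattern.append((pulse, dot_ms))                  # gap after each symbol
        pattern.append((0, 3 * dot_ms))                      # longer gap between letters
    return pattern

print(tap_pattern("hi"))   # a wrist-to-wrist "hi", as durations
```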

Our Data Skin

In the image above you can see the data shadows of people who use an app called Human. These are the paths people take while walking, running or biking in each of these cities. They aren’t simply maps of paved roads or footpaths; these are the desire paths. Doesn’t Hong Kong kind of look like a ghostly dragon? I love the contrast of the image of the dragon next to Bangkok and Houston. Fascinating data.

Back to the Apple Watch. Consider the possibilities when local data—GPS, accelerometer, temperature, atmospheric pressure, humidity, heart rate, glucose levels, hydration levels and calories—is married with contextual information, such as your interests, your location, your history. Like a soft exoskeleton, data is becoming an extended skin that envelops us, tracks us and informs us. Sometimes it’s visible; most of the time it’s not. Sometimes it’s opt-in—when people choose to use apps such as Human—and sometimes it’s via passive surveillance technologies (as with the tracking of MAC addresses and IMSI catchers).
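What might it look like to marry a sensor reading with context? A minimal sketch, with invented field names and a made-up rule, just to show the join:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: int      # beats per minute, from a wrist sensor
    temp_f: float        # skin temperature

@dataclass
class Context:
    location: str        # from GPS
    activity: str        # inferred from the accelerometer and history

def annotate(reading: Reading, context: Context) -> dict:
    # One "cell" of the data skin: raw data plus context plus an inference.
    elevated_at_rest = reading.heart_rate > 100 and context.activity == "sitting"
    return {**vars(reading), **vars(context), "flag": elevated_at_rest}

print(annotate(Reading(112, 98.9), Context("office", "sitting")))
```

The raw numbers mean little on their own: a heart rate of 112 is unremarkable on a run and worth flagging at a desk. Context is what turns data into a skin.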

This data skin is going to connect and inform our prosthetics. I admit it—saying that sentence out loud sounds pretty weird. Yet this is already happening. Take a look at those mattresses for premature infants I mentioned above. They link the mattress to the biorhythms of the mother, reading her data and providing the infant feedback as if it were still in the womb. I bet that later this year someone will have developed an app that connects the mother to that mattress, too.

Present Self?

In the world of virtual reality, “presence” is a term of both art and science. It refers to the state that occurs when the user of a VR device finds the experience indistinguishable from reality — it’s short for telepresence. There is a set of technical thresholds that VR engineers and creators are striving for—improved optics, effective tracking, low latency, low persistence—but much of the work is about tricking the brain into believing that the virtual is real. The brain is highly malleable, and as Michael Abrash illustrates, there are a host of methods to optically fool the brain when designing VR experiences and headsets.
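To put one number on those thresholds: the motion-to-photon latency commonly cited as the ceiling for presence is about 20 milliseconds. A back-of-the-envelope, with that 20 ms figure as the only input and the rest illustrative:

```python
refresh_hz = 90                       # a typical target refresh rate for VR headsets
frame_ms = 1000 / refresh_hz          # ~11.1 ms just to display one frame
budget_ms = 20                        # commonly cited presence threshold
remaining = budget_ms - frame_ms      # ~8.9 ms left for tracking, rendering, transport
print(round(frame_ms, 1), round(remaining, 1))
```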

The experience of presence is hard to explain—people describe it as similar to being teleported—but radically different forms of entertainment are going to come out of presence-enabled VR. Marry this with a data skin and you have VR and AR experiences that are very hard to imagine today. And while VR might seem to be the domain of hardcore gamers and stationary media experiences, products like Glyph or Magic Leap will move VR and AR to a much broader market. In 2015 and 2016 we will see presence-enabled VR experiences. In parallel we will start to see devices—some wearable, some prosthetic (those zoomable contact lenses)—that will illustrate VR and AR’s potential outside of gaming.

When I first heard the term ‘presence’ in reference to VR I thought it was referring to the present tense—that seemed wrong to me. I’m not present to the physical world when I’m immersed in a real-time jittering stream of connectedness.

I’m more connected than ever before, yet less able to connect with what is happening around me.

Years ago I took a meditation class and the first thing I was taught was a simple exercise: Sit down and cycle through your senses one by one. Keep your attention on each sense for a minute or two, one after the other—touch, smell, taste, sound and sight. The exercise is meant to ground you to what is actually happening around you, right now. I find it a remarkable and simple tool to connect yourself to your environment. Last year I tried sensory deprivation floating. It’s a wild experience, one that made me feel completely, almost eerily, grounded in reality. My senses—in particular, touch—were heightened in a way that I hadn't experienced before.

As the virtual and the real become enmeshed we need to find ways to distinguish the two. In a recent study, 25 percent of Stanford undergraduates, when engaged in media, used four or more media types simultaneously. Four. Earlier this month at an event at betaworks, I saw someone whom I hadn't seen for a year or so. He is young—in his twenties—and I have always found him super smart, thoughtful and interesting. I watched him stare at his phone for much of the talk. His body language was visibly tied to the virtual experiences on his phone. He wasn't present in the room; he was numb to it. As Fred Wilson has said, we have a choice: we can either control our experiences and our relationship to our devices, or those devices will control us.

Dreams of AI

Dreams of artificial intelligence have come and gone before. Dating back to the 1950s, the technology industry has been through several long AI winters. But today we are in an AI spring, maybe even a summer. Now the obsession is with data processing, scale, gigaflops, machine learning and the sheer horsepower computing can bring to the problem. There is a lot of interesting work going on in the field, yet much of it is narrow in its application. I remember watching Kasparov play against IBM’s Deep Blue in 1997. That historic match has become a case study in how we anthropomorphize our technology and see the “hand of god” intervening when what is happening is either our own projection or simply random. A single move in the second game pushed Kasparov to believe the computer was playing in a manner he couldn’t compete with. Later it turned out that the move was a glitch: a random move the computer executed as it ran short on time.

Deep Blue is a classic example of narrow AI. Given a defined problem set, computers are vastly faster and better than humans at finding specific solutions. From chess to Jeopardy (“Watson supercomputer destroys humans…”), the field is full of remarkable examples of narrow AI. These victories have lulled people into assuming that thinking machines are around the corner, or at least inevitable. I don’t believe that. The gap between processing data and symbols and understanding meaning is not a matter of degree.

Photos of Kasparov by Tom Degremont

The difference between processing data and reflecting, understanding or even creating isn’t linear—it’s a leap between systems. And computers have not even begun to process the way humans experience words and thoughts: “This plate is hot”… “I like this”… How we experience thoughts is a hard problem. Our brains are structurally obsessed with emotion and experience. Yet inside the technology echo chamber, the din of AI inevitability continues to grow. And AI today is quickly becoming a suitcase word, to borrow Marvin Minsky’s term for words into which we pack our assumptions and prejudices. It’s also becoming a marketing hook.

I read many interesting pieces on the subject while thinking about and drafting this essay. One was a long, fascinating thread on Edge, which asked 186 people what they think about machines that think. This quote from Ziyad Marar struck a chord with me:

“If the welter of prognostications about AI and machine learning tell us anything, I don’t think it is about how a machine will emulate a human mind any time soon. We can do that easily enough just by having more children and educating them. Rather it tells us that our appetites are shifting. We are understandably awed by what sheer computation has achieved and will achieve (I’m happy to jump on the driverless, virtual reality bandwagon that careens off into that over-predicted future). But this awe is leading to a tilt in our culture. The digital republic of letters is yielding up engineering as the thinking metaphor of our time. In its wake lies the once complacent, now anxious, figure with a more literary, less literal, cast of mind.”

Most discussion and design of AI is predicated on the assumption that, somehow, as we reconstruct the world, we won’t reconstruct humanity. In fact, as software eats the world, that includes us. I was thinking about this the other weekend when I found myself describing the myth of Icarus to my 10-year-old son. I was moved to re-read the whole story. Icarus was the son of the master craftsman Daedalus, creator of the Labyrinth. Daedalus was imprisoned within the walls of his own Labyrinth, so he made wings of feathers and wax so that his son could escape. As Icarus prepared to fly out of the prison, his father warned him of two dangers: complacency and hubris.

Fly neither too low nor too high. Fly too low, and the sea’s moisture would dampen his wings; fly too high, and the sun would melt them. Complacency and hubris.

I see complacency as akin to subjugation to technology. This new world of software and hardware is being created by us in small and large ways — the products we make, the ones we use, the companies we support, the research we invest in. These are all decisions we make. Hubris is different but equally constraining: it assumes that the inevitable outcome will be positive. Our future lies in the flight path between the two — in the many small decisions we make as we take that journey and create the things and tools that enable it.

In conclusion let me offer up a list of things I’m thinking about—more questions than answers. But we’ll need the answers as we integrate computing into our world and ourselves:

Augmenting intelligence and humanity: How can we design products, hardware and software in a manner that augments us as people? There are many examples of narrow AI making us smarter; technology needs to work for us, not us for it.

Autonomy: What does it mean to be an autonomous being in the world we are crafting?

Memory and trust: As human memory is augmented by the network, what do memories and trust mean? Watch the Black Mirror episode “The Entire History of You” to see what “perfect” memory can look like.

Morality: How do we bound decision-making and construct a moral framework for autonomous machines? If you go back to Asimov’s Laws of Robotics, autonomous drones are already in violation of Laws 0 and 1.

Complexity: How do we learn to live with the complexity we are creating in the network? Joi Ito summed up this question in his Edge post:

“We are descending not into chaos, as many believe, but into complexity. At the same time that the Internet connects everything outside of us into a vast, seemingly unmanageable system, we find an almost infinite amount of complexity as we dig deeper inside our own biology. Much as we’re convinced that our brains run the show, all while our microbiomes alter our drives, desires, and behaviors to support their own reproduction and evolution, it may never be clear who’s in charge — us, or our machines. But maybe we’ve done more damage by believing that humans are special than we possibly could by embracing a more humble relationship with the other creatures, objects, and machines around us …”

Language: We need words to describe this transition. For example, I find it fascinating to see the word “glitch” enter the lexicon. It’s a perfect word for something that isn’t necessarily a bug, something we don’t and won’t understand the cause of. We need more words like that.

The humanities: While the humanities have so often led the evolution of culture, today they trail it. The quote from Marar above outlines the issue. I think it’s vital to bring the humanities into the discussion and the design of technology.

This essay is adapted from the Betaworks 2015 Shareholder Book. Each year we work with a different artist to illustrate the book — this year we had the pleasure of working with Henry McCausland. We do this because much of what we do at betaworks falls between technology and art.

I believe that at the fault line between these two disciplines lies the flight path Icarus needed to take. Formative technologies—gunpowder, the internal combustion engine, nuclear power—have all come laden with dangers and risks; it’s in the details of integrating them into our lives and culture that we have figured out how to make them serve us. It’s remarkable how these myths still serve us—they continue to tell the story of us and act as guideposts as we recreate our world.

Let us never forget: Icarus flew too close to the sun. Hubris was his downfall. May the same not happen to us as we augment and integrate.

John Borthwick is CEO of betaworks

Illustrations by Henry McCausland, as part of the Betaworks 2015 Shareholder Book. Photos of Kasparov by Tom Degremont. Thanks to Steven Levy for edits.

Note: I spent the last few months reading about the current state of AI. If you are interested, you can find the background reading in an Instapaper folder (http://bit.ly/AI_read).

