Photo: Victor Bezrukov, "Hard Decisions"

Decide for Me

How privileged AI might save our brains

Claire Willett · Published in I. M. H. O. · 3 min read · Oct 21, 2013


The other day, I read a wonderful piece in Nautilus magazine about the teacher-student relationship between a handwriting recognition scientist, Vladimir Vapnik, and the algorithms he was training to do the recognizing. In general, data points are technical measurements; in Vapnik’s approach, they are experiential metaphors. In general, a would-be AI is force-fed somewhere between hundreds and millions of these points. Vapnik gave his algorithms 100 poems, each describing a different handwritten 5 or 8.

We humans use, create, and pass on metaphorical concepts (time is money, communication is sending, a right-slanted 5 is dangerous) to make quicker sense of our worlds. Vapnik’s results indicate that a computer trained on this type of experiential information can learn far faster, with far less data, than one trained gavage-style. In the (perhaps very near) future, metaphor might be the de facto conduit for artificial intelligence. And if it is, I suspect that rather than rule the world, robots will help us rule our own.
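Vapnik’s real framework (he calls it learning using privileged information) is far more formal than anything I could fit here, but the intuition — that a few human-meaningful descriptors can stand in for piles of raw measurements — can be sketched in a few lines of Python. Everything below (the crude `describe` features, the 20-example budget) is invented for illustration, not taken from the Nautilus piece:

```python
# Toy illustration only, not Vapnik's actual method: compare a classifier
# trained on raw pixels with one trained on a few human-style descriptors,
# both given just 20 labelled examples of handwritten 5s and 8s.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
keep = np.isin(digits.target, [5, 8])
X_raw, y = digits.data[keep], digits.target[keep]

def describe(flat):
    """Crude 'experiential' descriptors of an 8x8 digit image (made up for this sketch)."""
    img = flat.reshape(8, 8)
    return [
        img[:4].sum() - img[4:].sum(),        # top-heavy vs bottom-heavy
        img[:, :4].sum() - img[:, 4:].sum(),  # leaning left vs leaning right
        (img > 8).sum(),                      # how much heavy ink
    ]

X_desc = np.array([describe(x) for x in X_raw])

for name, X in (("raw pixels", X_raw), ("descriptors", X_desc)):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=20, stratify=y, random_state=0
    )
    score = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy from 20 examples = {score:.2f}")
```

The three descriptors here are deliberately crude; the poems Vapnik used carried far richer experiential information, which is the whole point.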

How? By taking over part of our decision process.

In a New York Times Magazine feature on decision fatigue, John Tierney wrote about prisoners who appeared before a parole board on the same day. The first man, who spoke to the board at 8:50 am, and the third man, who spoke to the board at 4:25 pm, had the same sentence (30 months) for the same crime (fraud). Yet only the first man was granted parole.

Sure, it could have been because the first man was more penitent, but Tierney points to a different reason: by the late afternoon, the board was suffering from decision fatigue. “No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price,” he wrote.

But you know who can? Computers. They already make decision after decision in finance and ad tech. Doing so for humans just requires a different type of training data.

Skimbox is one early example of this. Much as one poet’s descriptions taught a computer to recognize handwriting, your past and current actions teach Skimbox’s neural nets about relevance, immediacy, and never-need-to-see-thanks. Thus equipped, Skimbox can presort new messages in order to offload the “do I need to read this?” decision.
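Skimbox hasn’t published how its models actually work, and I’m sketching with a plain text classifier rather than the neural nets mentioned above, but the basic pattern — your past triage actions become labels that presort the next message — looks roughly like this (the messages and action names are invented):

```python
# Hypothetical sketch, not Skimbox's actual pipeline: past triage actions
# become labels, and a simple text classifier presorts incoming mail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented history of (sender + subject, what you actually did with it).
history = [
    ("boss weekly metrics review", "read_now"),
    ("teammate re: bug in checkout flow", "read_now"),
    ("airline your itinerary has changed", "read_now"),
    ("newsletter ten growth hacks you missed", "skim_later"),
    ("conference early-bird tickets closing", "skim_later"),
    ("retailer flash sale ends tonight", "never_need_to_see"),
    ("social network someone liked your post", "never_need_to_see"),
]
texts, actions = zip(*history)

triage = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage.fit(texts, actions)

# Presort a new message before you ever have to decide about it.
print(triage.predict(["boss re: checkout bug numbers"]))
```

That’s the offloading: the “do I need to read this?” decision gets made, provisionally, before you ever open the app.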

And there are other apps that either make or cue up other decisions. Google Now uses your Google account data to offer pre-emptive suggestions and alerts: leave in 14 minutes to arrive at the Father John Misty concert on time; go here to park your car, oh, and here’s your ticket.

The sleek thermostat Nest learns your temperature preferences from your initial adjustments, gives you time-based temperature recommendations, and notifies you of energy savings. And then there’s LivesOn, an AI that studies your language and tastes so that it can tweet as you after you pass away. It’s a morbid and silly concept, but then, there’s no reason why you’d have to die for LivesOn to start tweeting.

Now, let’s say you used all of these products in conjunction. Imagine the time and brain power that would save! And these are only the start of what privileged AI could do.

To quote Styx: “Domo arigato, Mr. Roboto.”
