AI in the news — #22

Earl Wajenberg
Personified Systems
Jun 9, 2016

Off-label AI (Phys.org)

You have probably heard of a medicine for one disease turning out to be effective against another, totally unrelated disease. It happens all the time, but not nearly often enough. Various groups collaborating at Johns Hopkins are using the “deep learning” AI technique to have neural nets predict new therapeutic uses for existing drugs.
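
The Phys.org piece is light on technical detail, but the general shape of such systems is link prediction: represent each drug and each disease as a feature vector, train a network on known therapeutic pairs, and rank unseen pairings by predicted score. Here is a toy sketch of that recipe in Python; the features, sizes, and architecture are illustrative assumptions of mine, not the Johns Hopkins group’s actual model.

```python
# Toy sketch of drug repurposing as link prediction. All data here is
# synthetic; a real system would use chemical fingerprints,
# gene-expression signatures, and the like.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_drugs, n_diseases, dim = 50, 30, 16
drug_vecs = rng.normal(size=(n_drugs, dim))        # stand-in drug features
disease_vecs = rng.normal(size=(n_diseases, dim))  # stand-in disease features

# Known therapeutic (drug, disease) pairs as positives, random negatives.
pos = [(rng.integers(n_drugs), rng.integers(n_diseases)) for _ in range(200)]
neg = [(rng.integers(n_drugs), rng.integers(n_diseases)) for _ in range(200)]
X = np.array([np.concatenate([drug_vecs[d], disease_vecs[s]])
              for d, s in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)

# Score a candidate "off-label" pairing by predicted probability.
candidate = np.concatenate([drug_vecs[3], disease_vecs[7]]).reshape(1, -1)
print("repurposing score:", net.predict_proba(candidate)[0, 1])
```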

Uncannily cute (Tech Xplore)

Asus’s Zenbo is the latest in the new class of “assistive robots,” meant to do things like help the elderly live independently. It has abilities like Siri or Cortana, but it also follows you around the house and projects images. It is also relatively cheap, which is new.

And it is. So. Cute. The inventors obviously wanted to avoid the famous uncanny valley, and wanted it so badly that I think they backed into the uncanny valley one over. The thing is tiny, with a big head, no limbs, and enormous eyes. I think it looks like a mechanical fetus.

Jackrabbot (Tech Xplore)

Speaking of small robots that can get underfoot, Stanford’s Jackrabbot at least tries not to do so. Its mission in life is to learn pedestrian traffic patterns, and thereby avoid getting in the way.

There’s reliable and then there’s trustworthy (Nature)

We include this one to point out the difference between “reliable” and “trustworthy.”

The Stampede supercomputer at the University of Texas recently finished the proof of an esoteric mathematical theorem (the Boolean Pythagorean triples problem). Problem: the proof is 200 terabytes long, so there’s no way a human can check it directly. Your confidence in the proof is less than or equal to your confidence in the software that wrote it. Let’s look at that bug list again… (Fortunately, they have a program-checking program that is much simpler.)
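
To see why a “program-checking program” can be trusted more than the prover, note that finding a proof and checking one are wildly different jobs: the searcher may be millions of lines running for days, while the checker of a finished certificate can be short enough to audit by eye. Here is a miniature of the idea in Python, with a satisfying assignment for a Boolean formula standing in for the certificate (the real proof uses a specialized SAT-proof format, but the principle is the same).

```python
# A clause is a list of literals: positive n means "variable n is true",
# negative n means "variable n is false". The formula is a list of
# clauses, all of which must be satisfied (CNF).
formula = [[1, 2], [-1, 3], [-2, -3]]       # (x1 or x2), (not x1 or x3), ...
certificate = {1: True, 2: False, 3: True}  # the claimed satisfying assignment

def check(formula, assignment):
    """Accept iff every clause contains at least one literal made true."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

print(check(formula, certificate))  # True: the certificate holds up
```

However big the search was, this checker is the only thing you need to trust, and it fits on a napkin.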

This is not a new problem for mathematicians, and as you see they have found ways to validate their confidence in the computer. But they “trust” it only in the sense that you “trust” a reliable tire gauge. It works correctly in its context, but that context is purely abstract (for Stampede) or purely mechanical (for the tire gauge). What’s wanted for an AI is trust in a social context, which is much subtler and qualitatively different. Even a self-driving car is in this position, because traffic is a (fluid, shifting) society.

Rather than ripping HAL’s circuit boards out (Tech Xplore)

Some things you don’t want an AI to learn. You may recall that HAL 9000, the infamous AI of “2001: A Space Odyssey,” plotted against the human crew and could only be stopped by tearing out his circuitry, all to keep the crew from interrupting his pursuit of the mission.

People still worry about that. From the abstract of “Safely Interruptible Agents,” by Laurent Orseau of Google DeepMind and Stuart Armstrong of Oxford’s Future of Humanity Institute: “This paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator.”
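
For the curious, here is a toy sketch of the flavor of the setup, in Python; it is my own simplification, not the paper’s formal construction. An operator sometimes overrides the agent’s chosen action, and because Q-learning is off-policy (its learning target uses the best next action, not the action actually taken next), the overrides do not change which policy the agent ends up rating as optimal.

```python
# Toy interruptible Q-learning loop. The interruption forces a "safe"
# action; the off-policy update is indifferent to who chose the action.
import random

n_states, n_actions = 4, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.1, 0.9

def env_step(state, action):
    # Hypothetical toy dynamics: action 1 advances, action 0 stays put.
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    if random.random() < 0.1:                 # epsilon-greedy exploration
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    if random.random() < 0.2:                 # the operator interrupts...
        action = 0                            # ...and forces the safe action
    next_state, reward = env_step(state, action)
    # Off-policy update: the target uses the max over next actions, so
    # being interrupted neither helps nor hurts the value estimates.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                 - Q[state][action])
    state = next_state

print(Q)  # greedy policy w.r.t. Q should match the uninterrupted optimum
```

The paper’s real work is in characterizing which learning algorithms have this property, and how interruptions have to be scheduled so that it keeps holding.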
