We are the people they warned you about (part 2)

Chris Anderson
4 min read · Nov 25, 2017

A few years ago I was on a flight, working on my laptop, when the flight attendants came by to serve dinner. I closed my computer and picked up the book I had been reading before. It was Kill Decision, a great sci-fi novel about how robots, including swarming drones, run amok. In the book, the drone developers had modeled their swarming algorithms after weaver ant behavior, which turned out to be a bad idea because, well, ants. It’s not giving too much away to say that the swarming turns into killing pretty quickly.

Then they cleared the meal and I opened my laptop and returned to work, which happened to be writing the code for, yes, swarming drones. I paused for a moment and wryly thought “I guess that’s how it happens” and then went right back to coding.

What else could I have done?

Choose not to write swarming code at all because it might someday kill us? But if I stopped writing the code, someone else would just do it instead, or I’d have to do something else less interesting on the flight.

Somehow write better code that was less likely to run amok? I don’t even know what that means, but suffice it to say that at this stage I’d be lucky if the code ran at all. Even if it did run and the drones “swarmed”, the reality is that the actual interaction rules between the drones are just basic if-then logic and algebra. Aside from standard code-quality checks, I don’t know how to insert “anti-emergence” protections into software, to say nothing of ethics.
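To give a sense of what I mean by “basic if-then rules and algebra,” here is a minimal, illustrative sketch of a swarm update step in the spirit of the classic boids model. It is not my actual code (which, again, never worked), and the gains and radii are made-up numbers, but it is roughly the level of complexity we’re talking about:

```python
# Illustrative boids-style swarm step (a sketch, not the author's actual code).
# Each drone follows three if-then rules using only basic algebra:
#   1. Separation: steer away from drones that are too close.
#   2. Alignment:  nudge velocity toward the neighbors' average velocity.
#   3. Cohesion:   nudge position toward the neighbors' average position.

import numpy as np

def swarm_step(positions, velocities, dt=0.1,
               sep_radius=1.0, sep_gain=0.5, align_gain=0.05, cohere_gain=0.01):
    """Advance every drone one time step. positions, velocities: (N, 2) arrays."""
    n = len(positions)
    new_velocities = velocities.copy()
    for i in range(n):
        others = np.arange(n) != i
        offsets = positions[others] - positions[i]
        dists = np.linalg.norm(offsets, axis=1)

        # Rule 1: if a neighbor is too close, steer away from it.
        too_close = dists < sep_radius
        if too_close.any():
            new_velocities[i] -= sep_gain * offsets[too_close].sum(axis=0)

        # Rule 2: match the average velocity of the rest of the flock.
        new_velocities[i] += align_gain * (velocities[others].mean(axis=0) - velocities[i])

        # Rule 3: drift toward the flock's center of mass.
        new_velocities[i] += cohere_gain * (positions[others].mean(axis=0) - positions[i])

    return positions + new_velocities * dt, new_velocities

# Usage: ten drones, random start, 200 steps.
rng = np.random.default_rng(0)
pos, vel = rng.random((10, 2)) * 10, rng.random((10, 2)) - 0.5
for _ in range(200):
    pos, vel = swarm_step(pos, vel)
```

Run that loop from random starting positions and the drones clump up and move together. Whatever “emergence” you get comes from iterating those three rules; there is no line you could point to and label “intent.”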

Better that I keep writing the swarming code, not because I was necessarily more trustworthy or responsible or even better at coding than anyone else. It was simply what I was interested in doing that day, and the in-flight movies sucked. And so I kept coding.

Needless to say, my particular swarming code was never finished, and even if it had been, other people’s code was much better. There are now dozens of open source versions out there you can use. None of them show any signs of emergent weaver ant behavior, but I guess that’s exactly what the narrator in the sci-fi book would say at this point, right before the plot turns.

I’ve been thinking about this moment as I watch the debate on regulating AI. On one hand, very smart people like Elon Musk and Sam Altman are warning that “General AI” is on an exponential improvement track that may quickly exceed our ability to understand, predict or control it.

On the other hand, all the proposals to regulate it seem as nonsensical as me stopping my airplane coding project because I read a scary science fiction book.

Ben Hamner, the Kaggle CTO, recently put it well in a tweet:

Replace “AI” with “matrix multiplication & gradient descent” in the calls for “government regulation of AI” to see just how absurd they are

To which Dave Venable, a security expert, replied with proposed legislation to show how absurd that would be:

Congress shall in no way restrict the size of numbers which may be multiplied together or the modulus by which they may be reduced.

They’re right that, on one level, AI is just math: multiplying numbers together in ways that progressively give better solutions (“gradient descent”). And unlike, say, nuclear physics, which is also mostly math, there isn’t even necessarily a physical element of AI to regulate. It’s just software running on the same computers we use to run our day-to-day lives.
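To make the “just math” point concrete, here is a toy example (mine, not from either tweet): a few lines of gradient descent fitting a straight line to five points. Scale the same loop up to billions of parameters and matrices instead of scalars and you have the core of what we’re calling AI:

```python
# A minimal gradient-descent loop: fit y = w*x + b to a handful of points.
# This is the "multiplying numbers together in ways that progressively give
# better solutions" from the tweets, shrunk to a toy problem.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.1, 4.9, 7.2, 8.8])   # roughly y = 2x + 1

w, b = 0.0, 0.0        # parameters we want to learn
lr = 0.01              # learning rate (step size)

for step in range(2000):
    pred = w * x + b                   # multiply and add
    error = pred - y
    grad_w = 2 * np.mean(error * x)    # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(error)        # ... and w.r.t. b
    w -= lr * grad_w                   # descend: step against the gradient
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")     # converges near w=2, b=1
```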

This suggests two things:

1) The people who practice a potentially dangerous craft are too close to it to effectively regulate it. Yes, Ben’s phrasing is reductive, but it is also the way AI practitioners think. Zoom in on AI and there is no “AI” — just math.

2) The term “AI” is too vague and sweeping to be a meaningful dimension of regulation. It would be like regulating nuclear weapons by regulating “physics”. You need a regulatory unit closer to outcomes, such as the computational power used by AI, much the way crypto regulation didn’t ban math but instead targeted cipher-cracking computing power and encryption key length, and nuclear regulation ended up focusing on plutonium production and uranium enrichment facilities.

When smart people say things that don’t seem to make sense, it’s normally a sign of a dimensional problem. They’re simply thinking at a different level of abstraction, or using words in different ways than we are. Once you agree on terms and focus on the same dimensions, you can normally reduce these arguments to something that can at least be discussed, like math.

By the way, I feel the same way about Asimov’s Three Laws of Robotics. Any robot smart enough to properly interpret a vague imperative like “A robot may not injure a human being or, through inaction, allow a human being to come to harm” is too smart to control. It will have passed us long before we come close to knowing how to code that.


Chris Anderson

CEO of 3DR. Founder of DIY Robocars, DIY Drones, Dronecode, ArduPilot. Formerly Editor of Wired. Author of Long Tail, FREE, Makers. Father of five.