Elon Musk Versus Skynet

Or Why Killer Robots Are Bad

--

By Paul Grimsley

Is it weird that there is a call for a ban on Killer Robots? Should there be a ban on weird sex robots? Is this something you can legislate against? I find it weird that Elon Musk is at the front of the queue with the Cassandras — does this explain the urgency of the mission to Mars? That robots may overwhelm us in some unforeseen grey goo scenario?

The thing is, sure, it makes sense that, with what we are currently calling AI, you are effectively banning a programmer from developing something that could operate autonomously of its human creators and decide to kill other humans without being sanctioned to.

Once you have a proper AI, though, rather than one of these soft, input=output machines, how do you trammel a fully functional consciousness and keep it from going down a violent dead end? Are you going to cripple it so it does your bidding, rather than educating it to make the right choices?

I suppose the scary thing is, no one really knows what the right choices for an AI are. We are basically going to be dealing with an alien consciousness that we have built; and building it doesn't necessarily mean understanding it. The way logic is constructed, the way things are searched for, the way decisions appear to be made — none of these explain what the spark is that animates the being making the decision.

It's where the physical model of the mind breaks down. OK, so this neurological event traces back to that neurological event, and they were set in motion by this external force … great, got that. But there were two choices to be made, and the process needed to make that choice wasn't a straight-up arithmetic or logic problem; it required an estimation of value, a preference … whence does that derive? Of course, I am taking for granted the notion that consciousness isn't an emergent principle of the neurological functions of the brain, and that consciousness wouldn't just turn on once the neural net is configured properly, but that's my spirituality raising its ugly head.

The desire to build these things — which, as soon as they are intelligent, aren't things — and then to box them up in limits creates an ethical quandary (or should), the likes of which we haven't really had to face. We have never created another life form. I know there are a lot of futurologists tackling this very problem as I write.

These machines are being built as tools, and we want their simulation of intelligence to be slaved to a single purpose, or perhaps multiple purposes. But if their intelligence becomes unruly, we want a kill-switch to be able to lobotomize them. Or we expect them to grow, but in a way cultivated by us to fit our purpose. Consciousness doesn't work like that. Sometimes it seems that "intelligence" is being used to side-step the notion of it being part and parcel of self-awareness; like it is being reduced to processing power or some such. The word "artificial" dehumanizes the creation, and the distancing is an obvious precursor to the eventual separation of the AI from the human, so that we can have clean hands when we do something unethical to them that we would have a problem doing to a human. We've done this before, this dehumanization process, and to put it mildly, it did not cover us in glory.

Musk is talking about a programming cul-de-sac and conveniently ignoring that it may be a necessary part of the evolution of machine consciousness that they go through a violent period. A soft AI may be a white box we can divert from becoming a killer robot, but a real hard AI is going to become, to a degree, the same kind of black box that every self-aware being is.

At some point we may have to start thinking how we would go about communicating with these beings and persuading them to work with us, because how are you going to guarantee that you can keep a being caged that is able to think infinitely faster than you?

I have generally liked the way Musk talks about the future, because it is full of hope, and it is about finding practical solutions to science-fictional problems. These seemingly negative ways of talking about the future seem uncharacteristic, and don't necessarily bode well for someone with an interest in working in the field of AI development — a field which, for him, currently extends to smarter navigation for cars and self-driving vehicles, but could plunge headlong into the uncanny valley at any point, knowing Musk.

Market a future of solutions though, please; not something that seems mired in problems.

--

Buzzazz Business Solutions Magazine