What’s The Deal With Robots?

Before I go on with my relatively detailed banter on the good and the bad of Artificial Intelligence, along with all of its lovers and its haters, I figured I'd bring up a recent news article to put things into perspective:

A couple of weeks ago, a group of Canadian students from Ryerson University in Ontario built a robot out of an iPhone and spare parts. The intention was to release it into the world and see how far it could travel with the help of travelers. And it got pretty far, considering it needed to be handed from person to person, with its exchanges arranged over Twitter. Hitchbot made it across Canada, toured the Netherlands and Germany, and eventually made its way to the United States. Two weeks into its American tour, the robot arrived in Philadelphia, which gave Hitchbot a dose of what it means to be in the City of Brotherly Love.

So a trashcan got decapitated. What's the deal with that? Did anyone really care what happened to Hitchbot? By the end of its trip, it had garnered over 43,000 followers (as of August 6, 2015, it was about to break 65k) and gotten a lot of love from users and the media alike. The real question is: should we be anthropomorphizing an inanimate object incapable of feeling or understanding emotion and pain?

With current efforts at creating self-correcting algorithms ("machine learning"), the field of Artificial Intelligence is getting a lot of love, and a lot of hate, from a lot of people. Take the opinions of Elon Musk, Stephen Hawking and a couple hundred other of the world's leading authorities in technology, robotics and roboethics, and you can summarize their position with a tweet from Musk himself:

So robots are dangerous. Potentially. What makes AI so scary that some of the greatest innovators and minds of our time are putting time and money into stopping it?

Beyond the idea of a tireless workforce that'll steal everyone's jobs, the biggest fear about AI is what happens when its intelligence and capacity let it reach The Singularity, a level far beyond our feeble imaginations at which, as Wikipedia puts it, a machine would "theoretically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself." In other words, once an AI makes the jump from version 1.0 to 2.0, what's to stop it from getting to version N.0? At some point it may begin to compete with humans for resources, overthrow us and, very shortly after, eliminate us from the face of the earth. And people say we should fear global warming.

On the dark side of the moon, there's a growing movement to see Artificial Intelligence come to fruition, since such advances could give us, among other things, home care aides, mental health care providers and a hand-in-hand collaboration between us carbon-based life forms and silicon-based "beings"… if you want to call them that. This leads us to our next couple of problems…

What happens if robots don't decide to kill us all immediately? And how do we keep them from doing so?

There are generally two schools of thought on whether we should program AI to have morals or "teach" them morals instead. Take, for example, the well-known IBM Watson, which "learns" by interpreting, evaluating and deciding on anything and everything fed into its databases. In the event we do create robot infants, ones that will surpass us in intelligence within a day anyway, do we sit them in front of a TV and have them watch Sesame Street?

Can Muppets stop robots from turning into our silicon-based overlords?
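To make the "teach them" idea a little more concrete, here's a toy sketch of what learning from examples, rather than from hard-coded rules, looks like at its most basic. The training examples, the word-count scoring and the judge function are all made up for illustration; real systems like Watson are far more sophisticated than counting words.

```python
# A toy illustration of the "teach them" school: instead of hard-coding rules,
# the agent generalizes from labeled examples. Everything here is made up.
from collections import Counter

# Hypothetical training data: (description of an action, was it acceptable?)
EXAMPLES = [
    ("share your cookies with a friend", True),
    ("help an elder cross the street", True),
    ("take a toy away from a crying child", False),
    ("push someone out of the way", False),
]

def word_scores(examples):
    """Count how often each word shows up in acceptable vs. unacceptable actions."""
    good, bad = Counter(), Counter()
    for text, acceptable in examples:
        (good if acceptable else bad).update(text.lower().split())
    return good, bad

def judge(action, good, bad):
    """Guess whether a new action is acceptable by word overlap with past examples."""
    score = sum(good[w] - bad[w] for w in action.lower().split())
    return score >= 0

good, bad = word_scores(EXAMPLES)
print(judge("share your lunch", good, bad))     # True: resembles the "good" examples
print(judge("push a crying child", good, bad))  # False: resembles the "bad" examples
```

The point of the toy isn't the word counting; it's that the verdicts come from whatever examples the system happened to be shown, Sesame Street or otherwise.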

And once they do leave robot school, what happens when they begin walking among us? Artificial intelligence would at some point need to be regarded as less of a tool and more of a collaborator, and it could quite possibly require all sorts of new laws and rights to accommodate it in our society.

The other school of thought is that we should program them with all sorts of laws that'll stop them from destroying us (e.g., Asimov's Three Laws of Robotics). These hard-coded rules might work in the short term, but they wouldn't allow robots to make judgment calls based on the circumstances of a given situation.

Say you're walking down the street, find yourself in a life-threatening situation and need to get to a hospital as soon as possible. The closest help is Google's self-driving car, which is programmed to obey all traffic laws. While you're in the backseat bleeding out and pleading for the Google taxi to step on it, the car is only going to do the city's top speed of 30 miles per hour. And then you enter a school zone.

This is a situation in which a program would not be able to take in circumstantial events and judge when it's necessary to break certain rules for the greater good.
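For contrast, here's a toy sketch of the hard-coded-rules school applied to that taxi ride. The function names, the speed limits and the emergency flag are all hypothetical, and this is obviously not how any real self-driving car is programmed; the point is simply that a rigid rule receives the emergency but has nowhere to put it.

```python
# A toy sketch of the hard-coded-rules school: the speed limit is absolute,
# and there is no concept of an emergency that might justify breaking it.
# All names and numbers are hypothetical.

CITY_LIMIT_MPH = 30
SCHOOL_ZONE_LIMIT_MPH = 15

def allowed_speed(requested_mph: float, in_school_zone: bool, emergency: bool) -> float:
    """Clamp the requested speed to the posted limit, no matter what."""
    limit = SCHOOL_ZONE_LIMIT_MPH if in_school_zone else CITY_LIMIT_MPH
    # The `emergency` flag is received but never consulted: a rigid rule
    # has no way to weigh a passenger bleeding out against a speed limit.
    return min(requested_mph, limit)

# The passenger pleads for 80 mph; the car does 30, then 15 in the school zone.
print(allowed_speed(80, in_school_zone=False, emergency=True))  # 30
print(allowed_speed(80, in_school_zone=True, emergency=True))   # 15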

So what’s the deal with robots? I guess it’s up to you to decide.