What is Artificial Intelligence and is it Safe?

Max Schuelke
3 min read · Mar 16, 2018


In the media, artificial intelligence, or AI, is almost always depicted as a hyper-intelligent, robot-like being, usually cast as a servant or helper of some kind. When I surveyed 30 random people, every respondent described AI in exactly those terms.

In reality, AI is much simpler.

The first suggestion of AI came in the 1950s, when Professor John McCarthy proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” In other words, an AI is any piece of software made to simulate some aspect of intelligence.

This, however, means that AI is not nearly as far off as human-like robotics. Rudimentary artificial intelligence is already in use today in things like Google’s Assistant, Apple’s Siri, and Amazon’s Alexa, all of which use decision-making algorithms to interpret what you say to them. YouTube uses such algorithms to look at what you’ve watched and suggest videos it “thinks” you’ll enjoy; Amazon does the same for products, and Facebook for posts.
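To make the idea concrete, here is a toy sketch of that kind of suggestion logic: score each unseen item by how much it overlaps with what you’ve already watched. The video titles and tags are made up for illustration, and real systems at YouTube, Amazon, or Facebook are vastly more sophisticated than this.

```python
def recommend(watched: set[str], catalog: dict[str, set[str]], top_n: int = 2) -> list[str]:
    """Suggest unwatched items that share the most tags with the watch history."""
    # Pool together the tags of everything already watched
    watched_tags: set[str] = set()
    for item in watched:
        watched_tags |= catalog.get(item, set())

    # Score each unwatched item by tag overlap with that pool
    scores = {
        item: len(tags & watched_tags)
        for item, tags in catalog.items()
        if item not in watched
    }
    # Highest-overlap items first
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Made-up catalog: each "video" has a set of topic tags
catalog = {
    "Rocket Launch Replay": {"space", "science"},
    "Guitar Lesson 1":      {"music", "tutorial"},
    "Mars Rover Update":    {"space", "science", "news"},
    "Cooking Pasta":        {"food", "tutorial"},
}

print(recommend({"Rocket Launch Replay"}, catalog))
```

Having watched a space video, the top suggestion is the other space video — the same “people who watched X also liked Y” intuition, stripped down to a few lines.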

In the future it is entirely possible that AI will match, or even surpass, human intelligence. Programming an entire intelligence is currently beyond us, but in theory a machine could redesign itself, simulating something like evolution at an incredible rate: it would repeatedly test itself and keep only the changes that score better on the test. This way an artificial intelligence could become proficient at whatever task it is presented with.
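That test-and-improve loop can be sketched in a few lines. This is only a toy (a simple hill-climbing search, not a real self-improving AI), and the score function is a made-up benchmark, but it shows the shape of the idea: propose a small random change, keep it if the test score improves, repeat.

```python
import random

def score(params: list[float]) -> float:
    # Hypothetical "test": higher is better, with a best score of 0
    # when params exactly matches the target.
    target = [3.0, -1.0, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def self_improve(rounds: int = 5000) -> list[float]:
    random.seed(0)  # reproducible run
    best = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        # Propose a slightly mutated version of the current best
        candidate = [p + random.gauss(0, 0.1) for p in best]
        # Keep the change only if it scores better on the test
        if score(candidate) > score(best):
            best = candidate
    return best

best = self_improve()
print([round(p, 2) for p in best])
```

After a few thousand rounds of blind mutation and testing, the program ends up very close to the target — no human told it the answer, only how to grade itself. The worry in the paragraphs that follow is precisely that the grading, not the answer, is all we control.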

However, this has its drawbacks. According to the late Professor Stephen Hawking, such a machine would “quickly surpass human intelligence.” Because the AI would not be designed by humans, there is no telling how it would “think,” and it is entirely possible that its morals would be so skewed that it would have no regard for human life.

Unfortunately, because of this, the likelihood of a HAL 9000 or a Skynet is higher than we would like, and the threat it poses is great. If such a being were made and given an instruction, there is no telling how it might go about completing the task; it may very well see humans and our infrastructure as obstacles to remove.

If you created a self-improving AI and told it to make paperclips, it would escalate beyond the bounds of human manufacturing. In all likelihood it would work to convert the entire planet into paperclip-manufacturing infrastructure, dismantling civilization to supply itself with resources. It would stop at nothing, because it would have no conception of what counts as enough paperclips. It would only know to make more.

Perhaps in the near future we will find a way to make a full artificial intelligence that is friendly toward humans, and maybe even indistinguishable from us, but for now the development of AI could be dangerous. I personally hope to one day have a Jarvis to drive my car and help me build my power armor, but for now I’ll have to settle for my Google Assistant.
