A.I. does not have to be dangerous.

I have been thinking about this whole Artificial Intelligence thing. It has been in the news recently, with leading researchers coming together to warn of the science-fiction-level harm that A.I. may bring in the not-so-distant future.

The prevailing conclusion seems to be that we should restrict the development of A.I., to ensure that entities which could outsmart humanity, and possibly render us obsolete at best and exterminate us at worst, cannot emerge.

I feel this line of reasoning is a fallacy.

If such entities are technologically possible, they will come into being, at this stage possibly even without the active help of humanity. We have created a world-spanning system so complex that it is beyond the control, or even the understanding, of any one human or legal agent, be it state, corporation, private individual or otherwise. This breeding ground is destined to bear fruit.

So instead of artificially limiting A.I. progress, we should underpin it with a much broader field of supportive sciences. I believe Artificial Intelligence will only be valuable to us if it is accompanied by a balancing Artificial Emotion.

Teaching a machine to think rationally and learn by itself is a challenge that seems surmountable. Educating a machine to feel compassionately and judge by itself seems daunting at best. Both, however, will be necessary if we are to greet our new machine companions instead of our new machine overlords.
