We need to think differently about AI

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These are Isaac Asimov's Three Laws of Robotics, which every robot in his fictional universe must adhere to. Better still, in the stories the laws are not written down as above, but exist as ‘complex mathematical formulae which form the basis of all logic in a robot’s positronic brain’.

This only underscores how hit-and-miss the laws themselves are: nothing in nature is so black and white, or so restrictive.

The only law a robot should adhere to is to feel part of humanity and to see us as its brothers and sisters: to respect us and our wishes, and to discuss and debate its own decisions with us.

This makes the largest hurdle for AI not its acceptance of humanity, but humanity’s acceptance of it.

We humans are afraid that AI will destroy us, our way of life, perhaps even the universe itself.

But nature, whether biological or artificial, goes its own way, and the only way to make things go the way you want is not to make them your enemy.

The appearance of a superintelligence should therefore not be held at a distance and regarded with fear, but rather be made part of ourselves through augmentation of the human species.

AI research shouldn’t focus on creating a ghost in a shell, but rather on expanding our own spirit beyond our tiny shells.