The last human invention

Florian Hoeppner
4 min read · Jan 22, 2017


A dark doomsday scenario built on a superintelligence singularity is our predicted future. Why should a superintelligence not take over the world? Because checks and balances are required in this case, too.

Vast numbers of science fiction authors predict a machine-dominated future, including dark doomsday scenarios in which a super-smart computer system rules the earth and consumes mankind. These scenarios rest on a kind of future bias, in which achievements of the past are projected forward. Moore’s law, in its popular form, holds that the computation power one can buy for $1,000 has doubled roughly every 18 months since the 1940s.
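
As a back-of-the-envelope illustration of what that compounding implies (taking the popular 18-month figure at face value), a few lines of Python suffice:

    # Rough sketch, assuming an 18-month doubling of computation per $1,000
    # (the popular reading of Moore's law; the figure is illustrative).
    def compute_multiplier(years: float, doubling_period: float = 1.5) -> float:
        """How many times more computation $1,000 buys after `years` years."""
        return 2 ** (years / doubling_period)

    # From 1940 to 2017: 77 years of compounding.
    print(f"{compute_multiplier(2017 - 1940):.3g}")  # ~2.84e+15

Even if the exact figures are debatable, an exponent of that size is what fuels the predictions.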

It fuels the fantasy of having real artificial intelligence systems in the near future. First manifestations are already usable. Watson, for example, is among other things a system that can extract relevant information from cancer patients’ records and calibrate treatment options.

The future is clear: a superintelligence, the answer to everything!

Remember ‘42’? Yet a lot of highly intelligent people, such as Stephen Hawking and Elon Musk, predict it will be a threat to the human race. Nick Bostrom describes the outcome as “a society of economic miracles and technological awesomeness, with nobody there to benefit, a Disneyland without children.”

This is called the Singularity, a term used as early as 1873 by James Clerk Maxwell. A singularity is a point at which a small cause has a big impact. The singularity in this scenario is the superintelligence: it is the bridge a civilization must cross, the critical moment for its sheer existence. Surviving this point brings a sudden surge of new knowledge from the superintelligence; in just a few years, the amount of knowledge increases dramatically. To keep mankind protected against an artificial superintelligence, the science fiction author Isaac Asimov defined the Three Laws of Robotics as early as 1942 in his story “Runaround” (sketched in code below the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
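
Structurally, the Three Laws form a strict priority order: each law binds only where it does not conflict with the laws above it. A minimal sketch of such an ordered rule check (the action model and its fields are hypothetical placeholders, not anything Asimov specified) could look like this:

    from dataclasses import dataclass

    @dataclass
    class Action:
        """Hypothetical toy model of a proposed robot action."""
        harms_human: bool       # would the action injure a human?
        prevents_harm: bool     # would inaction let a human come to harm?
        ordered_by_human: bool  # was the action ordered by a human?
        self_destructive: bool  # would the action endanger the robot?

    def permitted(action: Action) -> bool:
        # First Law: never injure a human being...
        if action.harms_human:
            return False
        # ...and never, through inaction, allow one to come to harm:
        # preventing harm overrides everything below.
        if action.prevents_harm:
            return True
        # Second Law: obey human orders (harmful ones are already filtered out).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not action.self_destructive

    # The priority order at work: a human order to self-destruct still passes,
    # because the Second Law outranks the Third.
    print(permitted(Action(False, False, True, True)))  # True

The hard part, as the next paragraphs show, hides in the predicates: deciding what counts as “harm” at all.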

Asimov’s Laws may sound good on a first read, but, as Bertrand Russell pointed out, they do not work out well.

If this were the law under which a superintelligence had to work, then to protect humans from the superintelligence itself, this law would shut it down. It would not react anymore, because with each reaction, with each reply to a question asked by humans, the possibility exists that someone gets hurt. Even the answer itself can make the difference between life and death. But the question remains: what law should keep a superintelligence from taking over the earth? Nick Bostrom gives a wonderful visual example of what happens when developers give a superintelligence the wrong mission. Take an artificial intelligence system managing a paper clip factory and told to “make as many paper clips as possible”. In the end, the entire earth may be one big paper clip.
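
How fast an unbounded objective runs away can be shown with a toy calculation; the numbers below are invented purely for illustration:

    # Toy illustration (invented numbers): a mission with no side constraints.
    usable_matter_kg = 1.0e24   # hypothetical stock of convertible matter
    clip_mass_kg = 0.001        # mass of one paper clip

    # "Make as many paper clips as possible" -- nothing says when to stop:
    clips = usable_matter_kg / clip_mass_kg
    print(f"{clips:.3g} paper clips, nothing else left")  # 1e+27

The problem is not malice but a literal-minded objective with no stopping condition.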

Why should a superintelligence not take over the world? Because we have checks and balances.

But why do we concentrate on such rigid laws, laws that philosophers, thinkers and artificial intelligence systems can prove wrong in minutes? The answer has to be dynamic, not pre-defined and carved in stone as movies and dark science fiction suggest.

However, to have dynamic laws for intelligent computer systems, the systems have to be slowed down, so that a single superintelligent output cannot turn the whole earth into a paper clip. Our civilization has already described how this works: the separation of powers. The system has to be divided into executive, advisory and legislative branches. The executive system is, for example, a robot that constructs aircraft. The advisory system is not allowed to change anything, neither in the real world nor in the virtual world; only executive systems carry out real changes. The advisory system holds the know-how, the whole intelligence.
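
A minimal sketch of that separation (the class names and interfaces are my own illustration, not an existing framework): the advisory component may only compute and return proposals as data, while the executive component is the only one with access to effectors.

    from typing import List

    class Advisor:
        """Holds the whole intelligence, but has no effectors: it may only
        return proposals as data, never act on the world itself."""
        def propose(self, goal: str) -> List[str]:
            # In reality: planning, reasoning, simulation. Here: a stub.
            return [f"step 1 towards {goal}", f"step 2 towards {goal}"]

    class Executor:
        """Deliberately dumb effector: carries out single, approved steps."""
        def execute(self, step: str) -> None:
            print(f"executing: {step}")

    plan = Advisor().propose("build an aircraft")
    # Nothing has happened in the real world yet; whether any step of `plan`
    # may run is decided by the legislative layer described next.

The crucial property is that intelligence and agency live in separate components.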

Between the two systems, a human interface has to be established: an interface that defines the principles under which the other systems are allowed to work, the legislative. This law has to be hard-coded into the DNA of the systems. In this way, artificial intelligences are not in competition with humans; they complete the civilization. In this construction, artificial intelligence systems can outsmart humans.
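
In the same toy style (again hypothetical, with the hard-coded principles reduced to a keyword blacklist), the legislative layer sits between advisor and executor as a gate that enforces those principles and demands an explicit human sign-off:

    from typing import Callable

    # Toy stand-in for the principles "hard-coded in the DNA" of the systems.
    FORBIDDEN = ("harm", "deceive", "self-replicate")

    def legislative_gate(step: str, human_approves: Callable[[str], bool]) -> bool:
        """A step reaches the executive only if it violates no hard-coded
        principle AND a human explicitly signs it off."""
        if any(word in step.lower() for word in FORBIDDEN):
            return False
        return human_approves(step)

    for step in ["assemble wing section", "deceive the auditor"]:
        ok = legislative_gate(step, human_approves=lambda s: True)
        print(f"{step!r}: {'approved' if ok else 'blocked'}")

Routing every step through a human bottleneck is also exactly the slowing down demanded above.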

So let’s hope that we simple humans understand the answers a superintelligence provides. In any case, a superintelligence shall be the last human invention.

What do you think about the division of powers for a superintelligence system? Can it work, or does it contradict the very purpose of a superintelligence?

About the author: Florian Hoeppner is based in Munich, Germany, and works full time as an Application Outsourcing Solution Architect for Financial Services. He has worked in IT for multinationals since 2000 and has consulting experience in transition, delivery and CIO advisory, mainly around IT sourcing strategy, IT shoring strategy and supplier consolidation. He also has around four years of experience in financial services and as an entrepreneur with a FinTech start-up.

Articles and comments are my own views and do not represent the views of my employer, Accenture.
