One AI to rule us all?

Nothing and nobody can change the past. But everything and everyone can change the future.

It’s for this simple reason that I have always been more interested in futurology than in history. Especially after reading books by Ray Kurzweil and Michio Kaku in the early 2000s, I became fascinated by the effects that technological advances can and will have on the future, not just on humanity but on “the universe” in general.

The recent posts on waitbutwhy.com about Artificial Intelligence and its potential impact on the future resonated strongly with me, triggering a series of thoughts I would like to share with you. First up:

Will there really be only one Artificial Super Intelligence?

The assumption that the first artificial intelligence (AI) to reach the level of an artificial super intelligence (ASI) will seek the strategic advantage of being the one and only ASI in existence (and therefore suppress the development of any other ASI) seems plausible. Allowing other AIs to reach the level of super intelligence would pose the biggest threat to its existence, purely because of the high level of intelligence, and thus power, of rival ASIs.

However, an intelligence that correctly identifies the strategic benefit of being the single ASI in existence (the so-called Singleton) would most likely also recognize the huge risk of such a scenario over the long run: the ASI would be the single point of failure for its entire species (currently consisting of only one specimen, namely itself).

Remember: while an ASI would become more intelligent than any other being on this planet and would begin to find answers to questions that aren’t even questions to humankind yet, it will not be omniscient from the very start. Its knowledge and power will still be limited, leaving it vulnerable to many kinds of existential risks as it learns over time to mitigate them.

In my opinion, a newborn ASI (as such only a little more intelligent than humans, and therefore still close enough for human reasoning to apply to its situation) would soon recognize this inherent existential risk and take advantage of a well-tested method of protecting a species from inadvertent extinction: evolution. If not, I put the term “Super Intelligence” up for debate.

Copy, paste, and copy-paste errors

Evolution consists of replication and mutation, a process that would also work in this case: the ASI could copy itself many times and run its inherent self-improvement program with small mutations on each independent copy, letting copies whose mutations prove beneficial replicate further. This recursive method of self-improvement would serve two purposes: it massively speeds up the advancement of the ASI (giving it a decisive competitive advantage over any rival ASI that might emerge), and it provides more and more fail-safety.
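To make this replicate-mutate-select loop a little more concrete, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder: a copy’s “capability” is reduced to a single number, and mutate() is a toy stand-in for an ASI modifying its own code.

```python
import random

# Toy parameters; all values are arbitrary placeholders.
POPULATION_SIZE = 8
GENERATIONS = 20
MUTATION_SCALE = 0.1


def mutate(capability: float) -> float:
    """Apply a small random change to a copy's capability
    (a stand-in for a copy modifying its own code)."""
    return capability + random.gauss(0, MUTATION_SCALE)


def evolve(initial_capability: float = 1.0) -> list[float]:
    # Start with identical copies of the "original" ASI.
    population = [initial_capability] * POPULATION_SIZE
    for _ in range(GENERATIONS):
        # Each copy runs its self-improvement step with a small mutation.
        mutated = [mutate(c) for c in population]
        # Only the more capable copies survive; harmful mutations die out.
        survivors = sorted(mutated, reverse=True)[: POPULATION_SIZE // 2]
        # The surviving copies replicate further to refill the population.
        population = survivors + [mutate(c) for c in survivors]
    return population


if __name__ == "__main__":
    final = evolve()
    print(f"best copy capability: {max(final):.3f}")
```

The point is only the shape of the process: many copies, small variations on each, and selection of the variations that turn out to be beneficial.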

All this at a certain price, of course: as evolution goes, some mutations or errors during this process will result in regressions or even effects harmful to the ASI’s development. The ASI will be able to handle some of those “defects”, but others will prove resistant, take the species in different directions, or even become mainstream. Eventually there might exist several versions or generations of the original ASI, forming some sort of collective composed of different, individual ASIs. The original ASI will have to lose full control of its copies at some point, because otherwise it remains the single point of failure for its entire species. It has to give up control and allow for individual, independent copies in return for the best shot at survival as a species.

Welcome to the family

My conclusion is that, given enough time, there will be more than one ASI: ASIs that share the same root but have become individuals after all. A “family”, a collective of individuals, still gives a species its best chance of survival, whether microbe, human, or ASI.

So, to answer the question above of whether there will be a Singleton ASI: no, but there will be a singleton ASI species.

Actually, we could go even one step further: based on the above assumption of a collective of individual ASIs, a whole set of questions arises. How do they best interact among themselves? How do they resolve conflicts stemming from “defective” or “abnormal” members of the species that result from their evolution? How do they prioritize their objectives, and how do they allocate their resources? How do they agree on the bazillion things that become relevant to ASIs but are out of intellectual reach for us humans?

Before you know it, the ASIs will need to establish norms and rules in order to function and survive as a group — or you could say: as a society.