The AI Singularity

The AI singularity is a speculated event in which the creation of an artificial super-intelligence triggers runaway growth, rapidly advancing (possibly autonomous) AI far beyond human intelligence. The topic remains hotly debated today: some argue it is the natural course of computer science and could occur as early as 2040, while others argue that sheer processing power alone cannot produce super-intelligence without near-impossible strides in software as well.

To create an AI super-intelligence while still retaining control over it, without the risk of it developing “hostile goals”, two methods have been proposed: boxing and instilling moral or positive values. With boxing, the creator simply limits what the AI is able to do. This raises ethical debates and also limits the AI’s ability to grow. Instilling positive values would impose fewer limits on the AI’s growth, but it is still unclear even today how such a method could be implemented and properly tested. Additionally, it raises the ethical question of what counts as a positive or moral value.

Machine learning is an integral part of AI today. But given its current theoretical limitations, such as the lack of an “ultimate goal” and the restriction to data-set input (often requiring review by data scientists beforehand), it is unlikely that machine learning as we currently understand it will be a central piece of this possible super-intelligence. Current uses of machine learning already deliver results far more quickly and cheaply than human analysis, without needing to be super-intelligent.

If machine learning were to overcome these limitations, independent component analysis, or ICA, would likely play a key role. ICA is used to find hidden relationships between random variables, measurements, and signals. This could be a very important algorithm, as an AI super-intelligence would be required to operate outside strict data parameters, where many variables and relationships can appear random to a computer system.
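To make the idea of ICA concrete, here is a minimal sketch of the FastICA algorithm recovering two independent source signals from mixed observations. The sources, the mixing matrix, and all parameter choices below are illustrative assumptions, not anything from a specific system; the point is only to show how ICA can separate signals whose mixture looks like noise.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent, non-Gaussian source signals (hypothetical examples).
s1 = np.sign(np.sin(3 * t))   # square wave
s2 = ((2 * t) % 2) - 1        # sawtooth wave
S = np.c_[s1, s2]

# Mix them with an arbitrary (assumed unknown) mixing matrix A.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T  # observed mixtures

# Center and whiten the observations (standard ICA preprocessing).
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Xw = X @ E @ np.diag(d ** -0.5) @ E.T

def fastica(Xw, n_components=2, iters=200):
    """FastICA with a tanh contrast function, deflation scheme."""
    dim = Xw.shape[1]
    W = np.zeros((n_components, dim))
    for i in range(n_components):
        w = rng.normal(size=dim)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            wx = Xw @ w
            g = np.tanh(wx)
            g_prime = 1.0 - g ** 2
            w_new = (Xw * g[:, None]).mean(axis=0) - g_prime.mean() * w
            # Deflate: stay orthogonal to components already found.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-9
            w = w_new
            if converged:
                break
        W[i] = w
    return W

W = fastica(Xw)
recovered = Xw @ W.T  # each column should match one original source
```

Each recovered column correlates strongly (up to sign and order, which ICA cannot determine) with one of the original sources, even though neither mixture alone resembles either source.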

At this point in time, no one can truly say whether the AI singularity will occur. It will require enormous advances in both computer hardware and software, with a heavy focus on new algorithms and software innovations. If such a super-intelligent AI is to be possible, serious research will also have to go into retaining a super-intelligence without the risk of hostile goals, as the boxing method and instilling positive values are likely not enough on their own.

