Artificial Intelligence: Just How Close Are We?

An insight into the driving forces, factors and considerations that shape how close we are to creating strong AI.

There are many influential figures in the artificial intelligence (AI) realm who believe we are getting closer to creating the first artificial general intelligence (AGI). However, others of a similar calibre argue that we are a significantly long way from the point where computers can ‘think’ like humans. Perhaps the biggest reason these two groups are unable to agree comes down to Moore’s law, an observation set out in 1965 by Gordon Moore, co-founder of Intel and Fairchild Semiconductor. Combining Moore’s law with the visible development in the field leads some experts to conclude that AGI may in fact arrive as early as the late 2020s.

Realistically, though, just how close are we to a point where we can rely on computers to carry out everyday actions and behave in the ways we would expect of a human? The honest answer is that nobody actually knows, but I’m going to outline some of the key features, theories and driving forces behind what’s bringing us that bit closer to creating a machine that can actively think and act without pre-set parameters placed upon it by its creators. I’ll also look at the barriers we may face in the future, which strengthen the argument that AGI may in fact be centuries or millennia away.

Moore’s Law

As already mentioned, Moore’s law went a long way towards accurately predicting technological advance between 1975 and 2004, and many argue that it helps us understand just how quickly we are approaching what some describe as the ‘singularity’. The law itself is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. Today, many argue that this prediction is no longer relevant because of the growing difficulty and cost of manufacturing ever-smaller transistors. However, the two-year doubling has been linked to a number of related technological advances, such as growth in memory capacity and in the pixel counts of digital cameras. The reason this is so significant for the field of AI is the increase in computer processing power that Moore’s law implies. At first the increase may not seem significant, but exponential growth quickly reaches a tipping point, where what was initially a small increase suddenly becomes enormous. The same arithmetic lies behind the claim that a piece of paper, folded enough times, could reach the moon: the difference between the first and second folds is tiny, but by the 42nd fold you would have a stack that effectively stretches to the moon. This illustration of exponential growth, popularised by Nikola Slavkovic, ties in nicely with Moore’s law. An example of exactly how such doubling produces exponential growth can be seen below, in an image taken from the upcoming part 4 of AI Revolution.

Exponential Growth: Part 4 of AI Revolution from Pawel Sysiak
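To make the doubling arithmetic concrete, here is a minimal Python sketch. The 0.1 mm sheet thickness and the 384,400 km Earth-moon distance are standard reference figures assumed here rather than taken from the article, and the transistor example simply applies the two-year doubling described above.

```python
# Exponential growth: folding paper and Moore's law rest on the same doubling arithmetic.

PAPER_THICKNESS_MM = 0.1      # typical sheet of paper, ~0.1 mm thick (assumed figure)
MOON_DISTANCE_KM = 384_400    # average Earth-moon distance (assumed figure)

def thickness_after_folds(folds: int) -> float:
    """Thickness in km after folding a 0.1 mm sheet `folds` times (doubling each fold)."""
    return PAPER_THICKNESS_MM * (2 ** folds) / 1_000_000  # mm -> km

def transistors_after_years(start: int, years: int) -> int:
    """Transistor count after `years`, doubling every two years per Moore's law."""
    return start * 2 ** (years // 2)

if __name__ == "__main__":
    print(f"After 1 fold:   {thickness_after_folds(1):.7f} km")
    print(f"After 42 folds: {thickness_after_folds(42):,.0f} km "
          f"(the moon is {MOON_DISTANCE_KM:,} km away)")
    # A chip with 2,300 transistors (the Intel 4004 of 1971) doubling every two years:
    print(f"After 40 years of doubling: {transistors_after_years(2_300, 40):,} transistors")
```

Running the sketch shows the tipping point the paragraph describes: one fold adds a fraction of a millimetre, yet 42 doublings produce a stack of roughly 440,000 km, comfortably past the moon.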

Quantum Computing

Some of the early hurdles that quantum computing faced have been overcome, but we are still at a stage where it is difficult to predict whether large-scale quantum computing will be realistically achievable, because of the unpredictable research breakthroughs that would be required to make it a reality. However, should we be able to create a ‘quantum computer’, there would be dramatic increases in processing speed for specific applications such as searching unsorted databases, which would bring significant advances for big data. It should be pointed out, however, that Google and NASA announced back in December 2015 that they were operating a quantum computer, the D-Wave 2X, claimed to solve certain optimisation problems up to 100 million times faster than a conventional single-core machine using a process known as quantum annealing. The reason this is so important for AI comes down to the fact that such increased processing speeds would fundamentally accelerate the rate at which a computer can carry out complex tasks such as face recognition, and would thus, in theory, take us one step closer to creating super-intelligence, thanks to processing power incomparable to that of a human mind. In short, quantum computers would be very fruitful for AI research because of the sheer processing power and capability they would provide. A photo of the D-Wave 2X, which is part of Google’s goal of creating Quantum Artificial Intelligence, can be seen below.

Google X NASA: D-WAVE 2X
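The article doesn’t name a particular algorithm, but the canonical example of a quantum speed-up for searching an unsorted database is Grover’s algorithm, which needs on the order of √N queries where a classical search needs on the order of N. The Python sketch below is a back-of-the-envelope comparison of those query counts and is purely illustrative; the D-Wave 2X is an annealer aimed at optimisation problems, not a general-purpose machine running Grover’s algorithm.

```python
# Rough comparison of query counts for unstructured search:
# a classical linear search inspects ~N/2 items on average, while
# Grover's algorithm needs roughly (pi/4) * sqrt(N) oracle queries.
import math

def classical_queries(n: int) -> float:
    """Expected number of items a classical linear search inspects."""
    return n / 2

def grover_queries(n: int) -> float:
    """Approximate oracle queries for Grover's algorithm over n items."""
    return (math.pi / 4) * math.sqrt(n)

if __name__ == "__main__":
    for n in (10**6, 10**9, 10**12):
        c, g = classical_queries(n), grover_queries(n)
        print(f"N = {n:>15,}: classical ≈ {c:,.0f} queries, "
              f"Grover ≈ {g:,.0f} queries ({c / g:,.0f}x fewer)")
```

The gap widens as the database grows: at a trillion items the quadratic speed-up already amounts to hundreds of thousands of times fewer queries, which is the kind of advantage the paragraph above has in mind for big-data workloads.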

Tipping Point Incentives

The third factor that could increase the speed at which we attempt to create machine intelligence comes down to human nature and our drive to create something better than competing parties. As AI becomes a topic of hot discussion, many see the potential reward (and risks) of creating such a machine. It was this kind of potential gain, combined with the bragging rights awaiting the country that landed a person on the moon first, that drove the space race between the U.S. and the Soviet Union. It was a display of economic prosperity, strength and power, and it produced advances in space travel far greater than any we have seen since. Although this may arguably be down to the subsequent depletion of ‘low-hanging fruit’, the space race stimulated significant technological progress as a result of the number of people working in the field and the capital devoted to the cause. Should something similar occur in the field of AI, we may once again find ourselves in a position whereby certain governments, or perhaps organisations, race to create the world’s first generally intelligent machine.

Naturally, we must also point out some of the barriers that AI is likely to face over the next decade, which brings us back to the longer view that many industry experts take on AI and when it will actually become a reality.

Depletion of ‘Low-Hanging Fruit’

The already mentioned depletion of ‘low-hanging fruit’ describes how a field can show rapid initial advancement and then little further progress. The reason comes down to the idea that a field is initially easy to innovate within, using the technologies and tools already available in modern society. We saw the growth of AI stagnate in the 1980s, and some argue that it was this depletion of low-hanging fruit that stifled further progress. Now that we have a wider and more advanced set of tools, in the form of greater processing power and technological innovation, we’ve seen speech recognition come on in leaps and bounds in the form of Cortana, Siri and the forthcoming Facebook M. We’ve also seen increasing use of image recognition, and even the success of Google’s technology in games against world champions, as in the recent AlphaGo matches. However, are we once again approaching a point where we have extracted the maximum utility from the tools available to us today, meaning that further advances in AI will require far more effort to reap any reward? If so, it is unlikely that we will see any form of generally intelligent machine in the near future.

Disinclination

This is perhaps simply a ‘speed bump’ in the development of AGI, but it cannot simply be ignored, as disinclination would have a significant impact on the speed at which we innovate. Today, the public is broadly accepting of the advances occurring within the field, but there may come a point where people are no longer happy with its scientific progress, out of concerns over self-preservation, ethics or other moral issues surrounding AI. Should we reach this point, it is likely that research into AI would continue but simply go underground, which is a risk many experts warn about. If the advancement of AI is driven behind closed doors, it is unlikely that steps would be taken to ensure the safety of the intelligence being created, and safety parameters may never be put in place. Thus disinclination may act either as a brake on progress or, at the public’s expense, as an even greater risk. Experts are already looking to prevent any form of disinclination from taking hold, which is why many are doing their best to explain the stage of development we have reached; I would argue this is another reason for Google’s openness about its progress, in contrast to the comparatively quiet team behind IBM Watson.

Societal Collapse

The final barrier presented to the field for the purposes of this article is the risk of societal change, or collapse, that could stall or prevent any further advancement. Extremist groups have acted in the name of religion, sovereignty and politics, and it would not be unfair to assume that so-called ‘survivalist groups’ in the future may also act to prevent the development of an intelligence greater than our own. Perhaps equally important is the argument levelled at both robotics and AI about the economic disparity they will create around the world. Many argue that as driverless cars replace the need for taxi and haulage drivers, and labour-intensive jobs are taken over by robots, humans will simply no longer be needed to carry out these operations. This is an issue for society, as it creates an ever-growing gap between those in control of the machines (the means of production) and the poorer members of society. It is why the theory behind Marxism is once again being considered, Marx having predicted a necessary revolution, or perhaps in this case a societal collapse. This point cannot be tested, and we cannot predict whether it will ever happen; however, there are growing calls for equality as the gap between the working class and the wealthiest parts of society continues to widen.

After looking at a few of the major factors influencing decision making within the AI sphere, it’s difficult to say whether we’re really close to creating the world’s first general intelligence. One thing is certain though: we are making leaps and bounds in creating algorithms that are able to make decisions without human-set parameters. That’s no guarantee that we’re close to creating an entirely independent machine with a conscience, but it shows that we are moving in the right direction. With notable figures including Elon Musk and Stephen Hawking already warning of the risks associated with a generally intelligent machine, one might conclude that we are in fact close, but some of the factors I’ve addressed may cause the field to slow, just as it did in the 1980s. If, however, we are close to creating AGI, what steps should be taken to ensure that an independent machine acts in the best interests of humanity? That’s something I’m going to discuss in my next article, looking specifically at Friendly Artificial Intelligence (FAI) and why people such as Elon Musk are so worried about the inception of AGI, or perhaps even super-intelligence.