Exponential Gains in AI Progress
This article on the OpenAI blog caught my attention recently, and I got so excited by it that I needed to share it with you. We’re heading into a future where the power of AI will be realised in ways we can’t yet imagine. All the way back in 1965, Moore’s Law forecasted the exponential growth in computing power that would eventually give us the iPhone. What does it predict for the years to come?
Moore’s Law
Gordon Moore, co-founder of Fairchild Semiconductor and Intel, made a simple observation which would consistently define an empirical phenomenon over the next half-century and beyond. Moore’s Law stated that the number of transistors per silicon chip would double every year (Moore later revised this to every two years). His observation was an economic one: Moore wrote that “the cost per component is nearly inversely proportional to the number of components.” He extrapolated that computing power would increase exponentially while costs decreased.
In part, the power of this prediction was a self-fulfilling prophecy: the semiconductor industry took it as a golden rule and a target to strive for. With investment flooding in to sustain the growth and push the boundaries of computing capabilities, US productivity growth surged. Moore’s Law is ultimately an observation and projection of a historical trend: every two years, the capabilities of hardware tend to double.
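To get a feel for how quickly that compounds, here is a minimal sketch in Python (the 1965 starting count is a made-up illustration, not a historical figure):

```python
# Moore's Law as a simple compounding model: capability doubles every
# `doubling_period` years.

def moores_law(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a component count forward assuming exponential doubling."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 64  # hypothetical components per chip in 1965, for illustration only
    for years in (10, 25, 50):
        print(f"After {years} years: {moores_law(start, years):,.0f} components")
    # 50 years at a 2-year doubling period is 2**25, roughly a 33-million-fold gain.
```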
Results
In their paper “Measuring the Algorithmic Efficiency of Neural Networks”, Danny Hernandez and Tom Brown argue that algorithmic efficiency and hardware efficiency, two key driving factors in the advance of AI, both grow exponentially, and that algorithmic efficiency appears to be outpacing Moore’s Law.
In traditional computer science, algorithmic efficiency is measured by how the cost of an algorithm grows with input size, which we describe using Big-O notation. For example, we can measure the efficiency of a sorting algorithm by counting the number of operations required to find a solution as the number of items to be sorted increases.
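As a toy illustration (my own sketch, not from the paper), you can count the comparisons a simple sort performs as the input grows; this operation count is exactly what Big-O notation summarises:

```python
def bubble_sort_comparisons(items: list) -> int:
    """Sort a copy of `items` with bubble sort, returning the number of comparisons made."""
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

# Doubling the input size roughly quadruples the work, the signature of O(n^2) growth.
for n in (100, 200, 400):
    print(n, bubble_sort_comparisons(list(range(n, 0, -1))))
```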
For machine learning problems this approach doesn’t work quite as well, as it is harder to define problem difficulty cleanly. The authors instead use a clever trick to measure the efficiency of machine learning algorithms: they analyse training costs (i.e. the amount of computational power required) while holding performance constant, so a newer model that reaches the same benchmark with less compute counts as an efficiency gain.
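Here is a minimal sketch of that trick, with hypothetical `train_step` and `evaluate` callables standing in for a real training loop (the paper itself measures the FLOPs needed to reach a fixed accuracy benchmark):

```python
from typing import Callable

def compute_to_target(train_step: Callable[[], float],
                      evaluate: Callable[[], float],
                      target_accuracy: float,
                      max_steps: int = 10_000) -> float:
    """Run training until `evaluate()` first reaches `target_accuracy`,
    returning the total compute spent. A more efficient algorithm reaches
    the same target with a smaller total."""
    total_flops = 0.0
    for _ in range(max_steps):
        total_flops += train_step()  # each step reports the FLOPs it used
        if evaluate() >= target_accuracy:
            return total_flops
    raise RuntimeError("target accuracy not reached within max_steps")

# Toy usage: a simulated learner whose accuracy climbs by 1% per step.
state = {"steps": 0}
def fake_train_step() -> float:
    state["steps"] += 1
    return 1e12  # pretend each step costs a teraFLOP of compute
print(compute_to_target(fake_train_step, lambda: state["steps"] / 100, 0.75))
```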
Using this measure, the authors found that the compute required to train a neural network to AlexNet-level performance on ImageNet decreased by 44x between 2012 and 2019; Moore’s Law would suggest only an 11x cost improvement over the same period. In other words, over this period neural net architectures made progress faster than the original Moore’s Law.
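A quick back-of-the-envelope check of what those two figures imply over the paper’s 2012–2019 window:

```python
import math

years = 7          # 2012 to 2019
algo_gain = 44     # measured reduction in training compute

# Doubling time implied by a 44x gain over 7 years: years / log2(gain).
doubling_months = 12 * years / math.log2(algo_gain)
print(f"Algorithmic efficiency doubles roughly every {doubling_months:.0f} months")

# Moore's Law, doubling every 2 years, compounds to about 11x over the same window.
moore_gain = 2 ** (years / 2)
print(f"Moore's Law gain over {years} years: about {moore_gain:.0f}x")
```

That works out to a doubling time of roughly 15–16 months for algorithmic efficiency, versus Moore’s Law’s 24.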
Imagine what the future could look like after 50 years of exponential gains in algorithmic efficiency. It would be like advancing from the early minicomputers of 1965 to the powerful iPhone in your pocket in 2015. I don’t think it’s unreasonable to say that we could see something on the scale of the Industrial Revolution.
Today, you can speak with a chat AI which is nearly indistinguishable from a human. Where will these advances in capabilities take us next?