A Hardware Speed Bump for Neural Networks

Santosh Rao
Manhattan Venture Partners
3 min read · Mar 6, 2019

Do we have the hardware for Artificial Intelligence (AI)? That is a pivotal question that needs to be addressed before we talk about the wonders that AI will provide.

It is generally believed that the next leg up in technological innovation is going to be powered by AI. But one factor that is being ignored, or not talked about enough, is that the speed of AI adoption is going to be a function of computer processing speeds. At this point, we don’t have computers fast enough to realize the full potential of AI.

Obstacles to implementing AI and neural networks

We are at a point in time, as in the 1960s, where we know the full potential of certain technologies but don’t have the processing power to realize it. The most advanced category of AI is deep learning, which requires deep neural networks to process the given inputs. Neural networks require far more processing power than conventional algorithms, and the current state of computing technology is expected to reach its limit soon.

The current AI optimization process is slow, limiting the full development of the technology. Training a deep neural network is mathematically rigorous and time-consuming, which prevents deep learning from achieving its true potential. According to Digital Catapult, a UK-government-funded agency, a single training run for a deep neural network can cost up to $10,000, which can be prohibitively expensive for startups. The promises of many AI startups are theoretically attainable in the near term, though in practice this may take longer than the industry suggests, given the cost-, time-, and labor-intensive nature of training neural networks.

The current technology is not yet ready for wide deployment, as it draws too much power and has extended calculation times. According to a paper published by Nvidia’s AV (Autonomous Vehicles) team in 2016, a typical neural network has about 27 million connections and 250,000 parameters, so it requires billions of calculations for a single run; with algorithmic complexity increasing year over year, considerable advances in technology are required to increase efficiency. A computer with more semiconductors would operate at unsustainable temperatures, and transistors are still too big to stack densely in a computer core. For this reason, training highly complex neural networks, like those used in AVs, takes years to accomplish.
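To make the scale concrete, here is an illustrative back-of-envelope sketch (my own arithmetic, not figures from the Nvidia paper): if a forward pass evaluates each of the network’s roughly 27 million connections once, real-time inference and especially training quickly run into billions, then trillions, of operations. The frame rate, dataset size, and epoch count below are assumed for illustration only.

```python
# Back-of-envelope operation counts for a network the size of
# Nvidia's 2016 AV model (~27 million connections, ~250K parameters).
CONNECTIONS = 27_000_000       # multiply-accumulates per forward pass (approx.)
FRAMES_PER_SECOND = 30         # assumed camera frame rate

# Inference: one forward pass per camera frame.
ops_per_second = CONNECTIONS * FRAMES_PER_SECOND
print(f"Inference: {ops_per_second / 1e9:.2f} billion ops/second")

# Training: a backward pass roughly doubles the work per example,
# and many examples over many epochs multiply it further.
TRAINING_EXAMPLES = 1_000_000  # assumed dataset size
EPOCHS = 30                    # assumed number of passes over the data
training_ops = CONNECTIONS * 2 * TRAINING_EXAMPLES * EPOCHS
print(f"Training: {training_ops / 1e12:.0f} trillion ops total")
```

Even under these conservative assumptions, a single training run demands on the order of a quadrillion operations, which is why processor speed, not algorithm design, is the binding constraint.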

Hardware limitations

AI will not have the computing power to be fully implemented until a new processor technology is developed. Many startups, like Wave Computing and Graphcore, are taking innovative approaches to developing new processing units specific to deep learning that would overcome the weaknesses of current hardware. These AI semiconductor startups will be able to compete with large companies like Nvidia, which are more focused on improving current processing units. Recently, Intel launched a project to develop an AI processor to meet AI demands, but its CTO of AI, Amir Khosrowshahi, cautions that it will take ten years to get to market. According to Gartner, the AI chip industry could grow to $34 billion in the next four years, making it a lucrative investment opportunity. With the developments that these AI-chip startups are already bringing, the whole AI industry would be able to accelerate and permeate even more of everyday life than it currently does.

We are headed for a speed bump on the implementation of AI that only a new semiconductor technology is going to be able to overcome.

Deep learning needs to mimic the human brain’s powerful decision-making, and that requires enormous numbers of calculations. Our current computers do not have enough processing power to keep up with increasingly demanding AI computations. The era in which AI becomes the norm across technology will need faster computing, and these new AI startups stand as the necessary catalyst for change.


Head of Research at Manhattan Venture Partners, Chief Editor of VentureBytes