The Case For and Against Deep Learning Chips

Deep learning has become one of the most relevant trends in modern software technology. From a conceptual standpoint, deep learning is a discipline of machine learning that focuses on modeling data using connected graphs with multiple processing layers. In the last few years, deep learning has become a pivotal technology powering use cases such as image recognition, natural language processing, and even some of the capabilities of self-driving vehicles. The popularity of deep learning has expanded beyond software, and the industry is now starting to talk about the first generation of hardware with deep learning capabilities: the deep learning chip.
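
To make the idea of multiple processing layers concrete, here is a minimal sketch in Python with NumPy. It is purely illustrative: the layer sizes, the ReLU activation, and the random weights are assumptions rather than any particular model.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common nonlinearity applied between layers

rng = np.random.default_rng(0)
W1 = rng.standard_normal((784, 128))  # first layer weights (e.g., a 28x28 image, flattened)
W2 = rng.standard_normal((128, 10))   # second layer weights (e.g., 10 output classes)

def forward(x):
    h = relu(x @ W1)  # first processing layer transforms the input
    return h @ W2     # second processing layer produces the output scores

scores = forward(rng.standard_normal(784))
print(scores.shape)  # (10,)
```

Each layer feeds the next, which is exactly the "connected graph with multiple processing layers" that frameworks like TensorFlow express at much larger scale.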

A few months ago, at its I/O conference, Google announced the design of an application-specific integrated circuit (ASIC) focused on deep learning and neural networks. Google called the chip the Tensor Processing Unit (TPU) because it underpins TensorFlow, Google’s open source deep learning framework. While Google’s TPU is not the industry’s first attempt to create a deep learning chip, it is certainly the most famous one. But is a deep learning chip a good idea?

The answer depends on where we are in the evolution of deep learning technologies. While moving deep learning capabilities into hardware is certainly a great concept, there are doubts about whether this is the right time in the evolution of deep learning to pursue such an endeavor. Looking beyond the hype, we can identify solid arguments both for and against creating a deep learning chip at this moment in the industry.

Deep Learning is Possible Because of Hardware

The explosion of deep learning technologies has been possible in part because of the breakthroughs in GPU technologies over the last decade. From an execution standpoint, deep learning is an intrinsically parallel model in which algorithms are based on the concurrent execution of many independent tasks. Before GPUs, it was almost impossible to efficiently execute complex deep learning algorithms on mainstream hardware. GPUs made highly parallelizable workloads practical and opened the door to the evolution of deep learning.
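
A hedged illustration of why this maps so well to GPUs: the dominant operation inside most deep learning layers is a matrix multiplication, and every element of the result is an independent dot product that can run concurrently. The Python/NumPy sketch below (with arbitrary sizes) contrasts the sequential view with the parallel-friendly one; the second form is what a GPU kernel, or a multithreaded BLAS library on a CPU, spreads across many cores at once.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((128, 512))  # a batch of 128 inputs (sizes are arbitrary)
W = rng.standard_normal((512, 256))  # one layer's weight matrix

# Sequential view: compute each of the 128 * 256 outputs one at a time.
def matmul_sequential(X, W):
    out = np.empty((X.shape[0], W.shape[1]))
    for i in range(X.shape[0]):
        for j in range(W.shape[1]):
            out[i, j] = X[i] @ W[:, j]  # each output is an independent dot product
    return out

# Parallel-friendly view: the same work expressed as one matrix
# multiplication, which parallel hardware can distribute across
# thousands of threads because no output depends on another.
Y = X @ W
print(np.allclose(matmul_sequential(X, W), Y))  # True: identical results
```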

A great example of how GPUs have helped deep learning can be found in the ImageNet Large Scale Visual Recognition Challenge, which has been running since 2010 and which has seen the error rate for image classification drop dramatically as the use of GPUs has risen.

3 Arguments for a Deep Learning Chip

As mentioned before, few doubt that deep learning chips will eventually become a trend, but the question remains whether this is the right time in the evolution of deep learning technologies to make that transition. Some of the arguments in favor of a deep learning chip include:

· Everything works faster in silicon: A deep learning chip can dramatically speed up the execution of deep learning algorithms and tailor that execution to specific devices.

· Eventually we want deep learning capabilities in our smartphones: A deep learning chip would be a catalyst for executing deep learning algorithms directly on mobile phones, which opens the door to many interesting applications.

· Powering the next generation of deep learning hardware: We have to assume that in the future there will be entire hardware infrastructures focused on executing deep learning processes. If that’s the case, a deep learning chip can be a key component of those infrastructures.

3 Arguments Against a Deep Learning Chip

A segment of the deep learning community believes deep learning chips are a bit ahead of their time. Some of the most common arguments against this approach include:

· Limited support for unsupervised machine learning: Most deep learning models still rely on supervised techniques that need to be trained on labeled data. A deep learning chip is better suited to unsupervised models that can learn independently, and those are not yet the norm.

· Algorithms are changing too fast: The rapid pace of evolution of deep learning technologies poses a challenge for a deep learning chip, as hardware baked in today might not be optimized for tomorrow’s algorithms.

· We don’t know the winning algorithms yet: Complementing the previous point, the deep learning industry is still in a relatively nascent state in which there are no clear winners that could benefit from hardware-level optimizations. From that perspective, creating a deep learning chip feels like optimizing for problems we don’t yet know need optimizing.
