Why Graphics Cards are Hacking the Future

In 2013, years before DeepMind would unveil an AI that defeated one of the world’s best Go players, the company published an influential paper showing how “deep reinforcement learning” could be used to teach computers to play Atari 2600 video games.

Computers are playing these games

In their paper, the DeepMind researchers mentioned that they used graphics processing units (GPUs), the chips at the heart of video game graphics cards, to train their algorithm:

The final cropping stage is only required because we use the GPU implementation of 2D convolutions…

GPUs were originally designed as specialized processors for rendering the millions of pixels required to simulate 3D environments, but repurposing them to train artificial intelligence algorithms has been commonplace for a while.

But it wasn’t until I read Andrej Karpathy’s recent post on reinforcement learning that something clicked about just how interesting this is:

Graphics cards, originally designed for human vision of video games, are now being used for computer “vision” of video games.

When I was growing up, getting a graphics card was kind of a Big Deal. The first one I got for Christmas in 1997 was a Canopus Pure3D, a Voodoo card based on the 3dfx chipset. It let me run Quake smoothly on my Pentium Compaq, which was a top priority in my life at the time.

Ever since then, I’ve thought of GPUs as an interesting innovation driven by gamers. And it turns out I wasn’t alone: everyone from Bitcoin miners to artificial intelligence programmers to medical researchers has repurposed them for novel applications.

As for the neural networks themselves, Karpathy points out something important: they aren’t really seeing or behaving like a human (hence the scare quotes), since they don’t actually understand the game. They’re merely brute-forcing a gameplay strategy by tracking the pixels that represent the ball and the player’s score going up.
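To make that concrete, here is a minimal sketch of the loop such an agent runs. The `env` object and the `policy` function are hypothetical stand-ins, not code from DeepMind’s paper or Karpathy’s post; the point is that the agent only ever sees arrays of pixel values and a scalar reward (the change in score).

```python
import numpy as np

def preprocess(frame):
    """Crop and downsample a raw RGB frame into a small grayscale array."""
    gray = np.mean(frame, axis=2)   # collapse the color channels
    return gray[::4, ::4] / 255.0   # crude 4x downsampling

def play_episode(env, policy):
    """Play one game, recording (pixels, action, reward) for training later.

    `env` and `policy` are hypothetical: env.reset() returns a raw frame,
    env.step(action) returns (frame, reward, done), and policy(state)
    picks a joystick action from the downsampled pixels.
    """
    frame, done, history = env.reset(), False, []
    while not done:
        state = preprocess(frame)
        action = policy(state)                  # e.g. move the paddle up or down
        frame, reward, done = env.step(action)  # reward = the change in score
        history.append((state, action, reward))
    return history
```

Notice what’s missing: there is no notion of “ball” or “paddle” anywhere, just pixels and a score.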

And video games are really just meant as toy examples to prove machines can learn to perform in complex environments. If a machine can learn how to play a video game, the thinking goes, then it can learn how to navigate game-like scenarios in the real world, such as picking up objects or behaving in a goal-oriented way to complete a task.

Putting aside the question of whether reinforcement learning could ever achieve human or super-human intelligence in the real world, there’s something poetic about hardware originally designed to optimize a human’s experience being used to optimize a computer’s experience. We have repurposed hardware originally designed for outputting video games into hardware that can supply the input for a video game.

An ungenerous way to characterize this observation is that GPUs are just powerful hardware thrown at a large set of tedious math problems. And indeed, traditional computer processors (also known as CPUs) are perfectly capable of doing the math required to train neural networks. They’re just a lot slower.

To understand why GPUs are so much faster than CPUs for this type of work, it’s worth getting an intuition for how the two differ. In his Book of Shaders, Patricio Gonzalez Vivo has a great explanation of the differences between central processing units (CPUs) and GPUs.

He suggests envisioning a CPU as a big industrial pipe which is capable of only doing one task at a time:

Sketch by Patricio Gonzalez Vivo

… versus how a GPU performs tasks in parallel:

Sketch by Patricio Gonzalez Vivo

Specialized software like Google’s TensorFlow or Theano enables programmers to leverage this massive parallelism for things like training neural networks on batches of data.
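As a rough illustration (a sketch assuming a recent TensorFlow 2.x install, with made-up batch and layer sizes, not a snippet from either library’s documentation), here is the kind of batched operation those frameworks hand off to the GPU: one matrix multiplication computes a layer’s output for every example in the batch at once.

```python
import tensorflow as tf

# A batch of 1,024 flattened "images" with 4,096 features each,
# plus the weights of a single fully connected layer.
batch = tf.random.normal([1024, 4096])
weights = tf.random.normal([4096, 512])

# Run on the GPU if TensorFlow can see one, otherwise fall back to the CPU.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"

with tf.device(device):
    # One matrix multiply produces the layer's output for every
    # example in the batch at the same time.
    activations = tf.nn.relu(batch @ weights)

print(activations.shape)  # (1024, 512)
```

The math is the same on either device; the GPU simply has thousands of small cores to spread it across.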

Play drives innovation

That computer scientists have opportunistically repurposed gaming hardware for artificial intelligence shouldn’t be surprising, as there’s a much richer history of games cultivating innovative technology. Games have influenced everything from button design to social networks, in areas far afield from the game industry where those innovations began. Remember when the Department of Defense built a supercomputer using 2,200 PlayStation 3s?

In the case of GPUs, industry leaders have already seized on this opportunity and invested heavily. NVIDIA, a company originally focused on manufacturing consumer-level 3D graphics cards, developed CUDA, a platform for exploiting “off-label” uses of GPUs. They now sell a line of high-performance, data-focused GPU cards that aren’t even capable of outputting video:

Look ma, no ports!

Facebook has taken NVIDIA’s cards even further and open-sourced a server design called Big Sur, a rack unit that hosts eight GPU cards and is designed specifically for training neural networks:

You can even rent GPU hardware for yourself by the hour on Amazon Web Services: their EC2 GPU instances are designed to run NVIDIA’s CUDA code off the shelf and can be launched as part of a cluster of machines for training neural networks.

So while technology like the GPU was originally seeded by demand from gamers, it now has massive applicability to other fields.

But why did GPUs evolve in such a useful way? It isn’t just that game developers wanted beefier hardware; it’s that they created applications that needed a different kind of hardware. GPUs were designed to tackle complex problems by performing millions of calculations in parallel, something traditional processors weren’t great at.

It’s not a coincidence, then, that GPUs designed for a computationally intensive task like 3D simulation could also be applied to another computationally intensive problem, like training a neural network.
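Both workloads boil down to the same arithmetic: applying one matrix to an enormous pile of independent rows of numbers. A toy numpy sketch (with made-up sizes) makes the resemblance plain:

```python
import numpy as np

# 3D graphics: transform a million vertices by a single 4x4 matrix.
vertices = np.random.rand(1_000_000, 4)       # homogeneous coordinates
model_view = np.random.rand(4, 4)
moved = vertices @ model_view                 # same operation on every vertex

# Neural network: push a batch of 1,024 inputs through a 512-unit layer.
batch = np.random.rand(1024, 4096)
weights = np.random.rand(4096, 512)
activations = np.maximum(batch @ weights, 0)  # matrix multiply plus ReLU

print(moved.shape, activations.shape)         # (1000000, 4) (1024, 512)
```

Each vertex, like each example in the batch, can be processed independently of all the others, which is exactly the shape of problem a GPU’s thousands of cores are built for.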

The complexity and success of GPU hardware are linked to the complexity and success of its software: games.

Powerful GPUs didn’t arrive overnight or for free: there was a huge community of hungry gamers itching to buy the fastest hardware so their games would run even faster. Gamers’ demand for better hardware effectively financed the future of parallel processing power.

But why did that happen? Why have games become such powerful drivers of innovation and culture?

In Homo Ludens, the cultural theorist Johan Huizinga argues that play is fundamental to our humanity: games are necessary to our culture and society.

GPUs represent our collective investment in attending to that necessity: we financed and developed hardware to satisfy our need to play.

And since that need is so fundamental to our humanity, that hardware turns out to be useful for solving other higher-order “human” problems like image recognition, speech recognition, and playing the games themselves.