Deep Learning Rig — Part One

Noah Gundotra
Jan 5, 2018 · 4 min read

Today, I decided to blow a crap ton of money on a deep learning rig.

What is a Deep Learning Rig?

A deep learning rig is a computer that uses graphics cards to parallelize math operations commonly found in deep learning.

To be a little more concrete, I’ll give my personal view of what such a computer looks like:
A computer that runs Linux (because Tensorflow and Windows don’t mix), paired with powerful graphics cards made by NVidia. As of today, Ubuntu seems to be the most prevalent OS distribution that runs the major Deep Learning (DL) frameworks reliably (Tensorflow, CNTK, PyTorch, Keras). And as of today, NVidia GPUs (Graphics Processing Units) are the only graphics cards that have DL code written for them.

Graphics Cards: Usage

Graphics cards do hundreds of thousands of math operations at once. Each operation runs slightly slower than it would on a CPU, but that sacrifice gives GPUs high throughput. GPUs are far more efficient than CPUs when dealing with massive amounts of numerical data that need to be math’d.
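To make that concrete, here’s a minimal sketch (mine, not from the post) of the kind of operation a GPU parallelizes: a matrix multiply, which is just a massive batch of independent multiply-adds. Even on a CPU, handing the whole batch to vectorized code beats doing each multiply-add one at a time in Python — a GPU takes the same idea much further by running thousands of those multiply-adds simultaneously.

```python
import time
import numpy as np

# A 100x100 matrix multiply = one million multiply-add operations.
n = 100
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# One at a time: an explicit triple loop in Python.
start = time.perf_counter()
slow = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            slow[i, j] += a[i, k] * b[k, j]
loop_time = time.perf_counter() - start

# All at once: one vectorized matmul call, which dispatches the whole
# batch of multiply-adds to optimized parallel code.
start = time.perf_counter()
fast = a @ b
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s, vectorized: {vec_time:.5f}s")
```

The two results are identical; only the throughput differs, usually by several orders of magnitude. That gap is the whole pitch for GPUs in deep learning, where training is essentially matrix multiplies all the way down.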

The connection to deep learning lies in that massive amount of numerical data. Ironically, the mining process for blockchain currencies relies on this same principle to generate cryptosecurity, so GPUs are sold out on Amazon and just annoying to find.

Graphics Cards: Politics

Graphics cards are the breath and heartbeat of the deep learning community. They’re the extension of Moore’s Law that allowed for modern neural networks to become trainable in a workable amount of time. Using them is imperative to conducting any sort of research, business practice, or competition (@Kaggle).

This is the gold rush of the deep learning era, where we all charge towards the promise of untold riches by applying math and computer science techniques to problems previously unbreakable. (Side note: we’re not sure these algorithms are guaranteed to keep working, because we don’t understand how or why they’re efficient.) And as with all gold rushes, it’s the clothing, washing, and food stores that make all the money. In the DL era, that’s NVidia & AWS. You just can’t deep learn anything without NVidia, or at scale without AWS.

Basically, AWS charges ~$1/hr for a single GPU-accelerated Ubuntu instance for deep learning. However, they charge another ~$1/hr for persistent storage of large datasets, which are (! surprise !) widely prevalent in the DL community. And that cost quickly racks up as you train models overnight, or if you need to spin up multiple GPU-accelerated instances for different experiments. It’s a great service, don’t get me wrong. So much research, especially at Berkeley’s AI Research (BAIR) lab, depends on it.
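Here’s the back-of-envelope math, using the rough figures above (~$1/hr compute plus ~$1/hr storage — my 2018 ballpark, not official AWS pricing):

```python
# Rough AWS cost estimate. Rates are ballpark figures from this post,
# not actual AWS pricing.
gpu_rate = 1.00      # $/hr, single GPU-accelerated instance
storage_rate = 1.00  # $/hr, persistent storage for large datasets

def monthly_cost(hours_per_day, num_instances=1, days=30):
    """Cost of running GPU instances plus one shared dataset volume."""
    hourly = gpu_rate * num_instances + storage_rate
    return hourly * hours_per_day * days

# Training overnight (8 hrs/day) on one instance for a month:
print(f"${monthly_cost(8):.0f}/month")                    # prints $480/month
# Two instances for parallel experiments:
print(f"${monthly_cost(8, num_instances=2):.0f}/month")   # prints $720/month
```

A few months of that pays for a decent chunk of a dedicated rig, which is the whole economic argument for building one.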

It’s just expensive and a long-term thorn in your ass. Now, the computer.

The Rig: Process

Because I’m stupid, and because I used up all my funds from 3 years of savings. Strangely, I couldn’t find any parts lists for a deep learning computer on Amazon’s built-in “Idea Lists,” or anywhere else I checked. But I didn’t look that hard, so they may be out there. Also, not many popular guides?? Lots of small blog posts, but everyone has a different budget and different GPUs…

My links to a lot of build guide blogs came from Andy Twigg. But the one I followed basically to the “T” was from Slav Ivanov. Slav’s got a really great guide, and I hope that his installation of the drivers will help me when my parts arrive.

As of writing this piece, I believe the last part, the case, will arrive in around 5 days. That gives me 3 days to set the rig up and either leave it running at my house, with a secure SSH workflow to access it from Berkeley, or bring it to my dorm. There’s not much space at my dorm, I’ll definitely need ethernet cables, and since I’ll be running the GPUs day and night, it’ll be very loud. I’ll cross that bridge when I get there. I may regret saying that later, but that’s just where I’m at right now. If I get burned by my poor decision-making habits, then I’ll be well on my way to having learned a valuable lesson.
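For the leave-it-at-home option, the SSH workflow would look something like this sketch of a `~/.ssh/config` entry on my laptop. (Hostname, username, and key path here are placeholders I made up for illustration, not real details of my setup.)

```
# ~/.ssh/config on the laptop, for reaching the rig from campus.
# All names below are placeholders.
Host rig
    HostName home.example.com    # home connection's public IP or a dynamic-DNS name
    User noah
    Port 22
    IdentityFile ~/.ssh/rig_key  # key pair made with: ssh-keygen -t ed25519 -f ~/.ssh/rig_key
```

With the public key installed on the rig (e.g. via `ssh-copy-id`) and password logins turned off in the rig’s `sshd_config` (`PasswordAuthentication no`), connecting from Berkeley is just `ssh rig`. The “secure” part is key-only auth, since the machine would be exposed to the internet.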

The Rig: Parts

While I’d love to spell out all the parts I’ve used, the majority come from Slav’s post, which is linked above. ATM, I’m planning on picking up 2 GeForce 1080 Tis from Frys tomorrow. The only difference between my parts and Slav’s is that I picked a different wifi adapter, because I’m edgy like that.

For my full parts list (minus the Frys GPUs), I created an Amazon “Idea List”.

That’s all for now! I’ll post an update when I actually begin building my new computer and try to install stuff.

Flat ethernet cable for my dorm, so I can slip the cord in between the desks and bed.
Z270 motherboards are the only motherboards compatible with 7th-generation Intel CPUs. Thanks to Mai at Frys who helped me with that.
MSI GeForce GTX 1080s that I’m putting into the TUF motherboard above.

Ciao, mis amigxs!

Imploding Gradients

ML and Data Science Stuff.
