Build a Deep Learning Rig for $800

I was introduced to deep learning as part of Udacity’s Self-Driving Car Nanodegree (SDCND) program, which I started in November. Some of our projects required building deep neural networks for tasks such as classifying traffic signs, and using behavioral cloning to train a car to drive autonomously in a simulator. However, my MacBook Pro was not up to the task of training neural networks. I used AWS for my first deep learning project, and while it’s a viable option, I decided to build my own machine for greater flexibility and convenience. I also plan to do a lot of deep learning outside of the nanodegree, such as Kaggle competitions and side projects, so it should end up being the more cost-effective option as well.

I spent a lot of time reading blogs, scanning reviews, watching YouTube videos, and speaking with other SDCND students on Slack regarding which parts were necessary for a good deep learning machine, and which parts I could sacrifice some performance on to keep costs down. I wanted a budget machine that could be easily upgraded over time. Here is the parts list I ended up with, along with the prices I paid. I’ll give a brief explanation of my choices for each part below.

GPU

I’ll start with the most important piece of a deep learning machine: the GPU. I chose the NVIDIA GTX 1060 6GB manufactured by Gigabyte, which came overclocked with a core clock of 1.53GHz and boost clock up to 1.77GHz. The 1060 6GB is a great bang for the buck GPU at $218, and it fit perfectly into my budget (I should mention that I got this price by taking advantage of the jet.com promotion which gives you 15% off your first three orders). NVIDIA GPUs are a must for deep learning in order to take advantage of their CUDA platform and cuDNN accelerated deep learning library. If you have the money I recommend upgrading to the GTX 1070 or 1080 (if you’re thinking about a Titan X, this is not the build you’re looking for).

CPU

A CPU is much less important for deep learning, so for a budget build it makes sense to spend less money here. I went with the Intel Core i3-6100 due to some great reviews, a respectable 3.7GHz clock speed, and its use of Intel’s Hyper-Threading, which allows each core to handle two processing threads. The CPU is not completely irrelevant, as it’s responsible for several tasks during training: writing and reading variables in the code, executing function calls, creating mini-batches from data, and transferring data to the GPU. So far I’ve been very impressed with this chip for only costing $110.
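To illustrate the mini-batch step mentioned above, here’s a minimal sketch in pure Python (toy data, not from any real project) of the kind of CPU-side slicing and shuffling that happens each epoch before batches are handed off to the GPU:

```python
import random

def mini_batches(data, labels, batch_size, shuffle=True):
    """Yield (batch_data, batch_labels) slices of the dataset.

    This is the sort of CPU-side work done each epoch before
    batches are transferred to the GPU for training.
    """
    indices = list(range(len(data)))
    if shuffle:
        random.shuffle(indices)
    for start in range(0, len(indices), batch_size):
        batch_idx = indices[start:start + batch_size]
        yield ([data[i] for i in batch_idx],
               [labels[i] for i in batch_idx])

# Toy dataset: 10 samples, batch size 4 -> batches of 4, 4, 2
data = [[float(i)] for i in range(10)]
labels = list(range(10))
batch_sizes = [len(x) for x, _ in mini_batches(data, labels, 4)]
print(batch_sizes)  # [4, 4, 2]
```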

Motherboard

The motherboard isn’t a crucial part for performance, but it’s important to keep future upgrades in mind when choosing one. My requirements were as follows: it needed to support my CPU (obviously), it needed to have at least two PCIe 3.0 slots for the GPU (preferably more), and it needed to support up to 64GB of RAM for future upgrades. I chose the MSI Z170A Krait Gaming 3x, which features 7 PCIe 3.0 slots (for up to 4 GPUs), support for up to 64GB of DDR4 RAM at speeds up to 3600MHz, and 4 SATA 6Gb/s ports, which is nice to have for potentially adding another SSD down the line.

RAM

I went with Corsair Vengeance 16GB (2 x 8GB) DDR4-3000 memory, which I managed to score for $100 (also through jet.com). The recommendation I’ve seen a few times is to get twice the amount of RAM as your total GPU memory, so 16GB of RAM should be plenty for now. But in general the more RAM the better, and it can prove useful for very large datasets and models requiring intensive preprocessing.
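The rule of thumb works out simply; here’s a quick back-of-the-envelope check (my own arithmetic, not an official guideline):

```python
def recommended_ram_gb(total_gpu_memory_gb, factor=2):
    """Rule of thumb: system RAM ~= 2x total GPU memory."""
    return factor * total_gpu_memory_gb

# One GTX 1060 6GB -> 12GB minimum, so 16GB leaves some headroom
print(recommended_ram_gb(6))   # 12
# A hypothetical second 1060 would push the target to 24GB
print(recommended_ram_gb(12))  # 24
```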

Power Supply

I chose an EVGA 750W 80+ Gold certified semi-modular PSU, which cost me $89. 750W is far more than I need right now; the several PSU calculators I tried online estimate my maximum power draw at around 350W. But this leaves me with plenty of room to upgrade components without having to buy a new PSU. I went with a semi-modular unit because I really didn’t see the benefit of paying extra for a fully modular one. Regrettably, I didn’t research the potential cost savings of a more efficient Platinum or Titanium PSU. A Platinum unit may cost a little more upfront, but it could save you money in the long run through a lower power bill. I recommend doing some research on the topic before making a purchase.
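The wattage math the calculators do is roughly a sum of per-component draws plus a safety margin. Here’s a sketch with illustrative TDP-style figures (my rough estimates, not measurements from this build):

```python
# Rough, illustrative wattage estimates per component (not measured values)
components = {
    "GTX 1060 6GB": 120,
    "Core i3-6100": 51,
    "motherboard": 50,
    "RAM (2 sticks)": 10,
    "SSD": 5,
    "HDD": 10,
    "case fans": 10,
}

estimated_draw = sum(components.values())
# Online calculators typically pad the raw sum with a safety margin
recommended_minimum = round(estimated_draw * 1.3)
headroom = 750 - estimated_draw

print(f"estimated max draw: ~{estimated_draw}W")
print(f"with ~30% margin:   ~{recommended_minimum}W")
print(f"headroom on 750W:   ~{headroom}W")
```

With these numbers the padded estimate lands in the same ballpark as the ~350W the calculators reported, which shows how much room a 750W unit leaves for upgrades.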

Storage

I purchased a 240GB SanDisk SSD for $70 and a 1TB Western Digital 7200RPM HDD for $50. I installed my operating system on the SSD, and will also use it for software I use frequently. I’ll use the HDD for storing things like pictures, videos, and large datasets. The SSD is probably not strictly necessary for deep learning, except for the rare case where you have a very large 32-bit floating-point dataset that doesn’t fit into memory, which will create a bottleneck on an HDD. But let’s be honest, nobody wants an HDD anymore. Once you go SSD, you never go back.
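To put the 32-bit floating-point case in concrete terms, here’s a quick size calculation (the dataset shape is hypothetical) showing how fast a dense float32 dataset outgrows 16GB of RAM:

```python
def dataset_size_gb(num_samples, values_per_sample, bytes_per_value=4):
    """Size of a dense float32 dataset (4 bytes per value) in GB."""
    return num_samples * values_per_sample * bytes_per_value / 1024**3

# Hypothetical example: 1M RGB images at 224x224, stored as float32
size = dataset_size_gb(1_000_000, 224 * 224 * 3)
print(f"~{size:.0f} GB")  # hundreds of GB -- far beyond 16GB of RAM
```

A dataset like that has to stream from disk during training, which is exactly where HDD read speeds become the bottleneck.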

Case

When choosing a case I looked for the cheapest one that worked with my hardware and had received decent reviews. The Fractal Design Core 2300 ATX Mid Tower fit the bill at a price of $40. It comes with two 120mm fans which provide great airflow, as described in the reviews. I really didn’t care about looks, so if you want a flashy case you won’t be happy with this one. It’s not bad for cable management, but it’s not great either. Otherwise, it’s a great budget case that suits my needs.

Benchmarks

For those not counting, my total cost came to $785. So what kind of performance does this $800 machine get you? For a simple comparison I used TensorFlow’s built-in “LeNet-5-like” convolutional model with the MNIST dataset. I trained it on my MacBook Pro, a g2.2xlarge instance, a p2.xlarge instance, and my new machine, using the following command:

time python3 -m tensorflow.models.image.mnist.convolutional

As you can see above, my new machine (labeled “DL Rig”) is the clear winner. It performed this task more than 24 times faster than my MacBook Pro, and almost twice as fast as the AWS p2.xlarge instance. Needless to say, I’m very happy with what I was able to get for the price.
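The comparison itself is just a ratio of the wall-clock times reported by the `time` command. With made-up numbers for illustration (not my actual measurements):

```python
def speedup(baseline_seconds, new_seconds):
    """How many times faster the new machine is than the baseline."""
    return baseline_seconds / new_seconds

# Hypothetical wall-clock times from `time` runs of the MNIST script
macbook = 48 * 60   # e.g. 48 minutes on the MacBook Pro
rig = 2 * 60        # e.g. 2 minutes on the DL rig
print(f"{speedup(macbook, rig):.0f}x faster")  # 24x faster
```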


As far as actually putting the computer together, it’s really not difficult, so don’t let that scare you. Before this project I knew about as much as Jon Snow when it came to building a computer. There are plenty of YouTube videos that walk you through the entire process. If you do decide to build your own machine, I highly recommend using pcpartpicker.com for sourcing parts. Besides providing a catalog of parts, the site automatically checks for compatibility issues when you select different components, and contains plenty of example builds from other users.

In my next post I’ll go through the process of booting up the machine with Ubuntu, installing all the necessary NVIDIA drivers, and creating a conda environment with all the software needed to start training deep neural networks.

Update: Here is the link to my complementary post Ubuntu + Deep Learning Software Installation Guide.