Deep Learning PC Build

Background

After recently starting Udacity’s Self-Driving Car Engineer Nanodegree program, I began learning about the hardware necessary to train deep neural networks (DNNs). There are a few projects in the program that require training DNNs, and there are several options available for training them in a reasonable amount of time, even if your own computer lacks appropriate hardware.

The feasibility of training deep neural networks has been greatly aided by the emergence of cheap, powerful, and highly parallel graphics processing units (GPUs). After just a bit of research, I realized that my laptop’s integrated GPU wouldn’t offer much compute on its own, and though my iMac has a dedicated graphics card, the deep learning world’s widespread reliance on Nvidia’s CUDA toolkit makes its AMD card less than ideal for these tasks.

Running GPUs on-demand in the cloud is possible, and Udacity does a great job helping students get started with this approach. Before my Nanodegree term began, I completed the first fast.ai MOOC lesson, which offers excellent instructions for setting up GPU compute instances on Amazon’s AWS for deep learning exercises. Though it’s tricky to set up and manage, this is a great option: renting powerful GPUs by the hour is an inexpensive way to try things out. Furthermore, Udacity and Amazon offer a generous credit for Nanodegree students to use this service for free. More recently, an excellent company called FloydHub (think Heroku for deep learning) came to my attention; they offer similar services without many of the finicky details. If you’re going the cloud route, I highly recommend giving FloydHub a try.

After a few weeks of playing with AWS GPU instances, I had racked up a ~$60 bill (mostly because I accidentally left my instance running overnight a few times). Meanwhile, as I got started with the Nanodegree (and joined the associated Slack community), I was reading blog posts by students and others on the web detailing their own deep learning computer builds, which generally cost in the ~$800–$1000 range. Depending on how much tweaking and re-running of my project networks I planned to do, building my own machine looked like it might be cheaper than cloud GPUs over the whole Nanodegree, and certainly wouldn’t be unreasonably expensive. It also sounded like a fun learning experience/project!
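To get a rough sense of the trade-off, here’s a back-of-the-envelope break-even calculation. The hourly rate is an illustrative assumption (cloud GPU pricing varies by instance type and region), not a quoted price:

    # Rough break-even: buying this build vs. renting cloud GPUs.
    # The hourly rate is an assumption for illustration; check
    # current cloud pricing before relying on it.
    BUILD_COST = 800.0   # approximate cost of this build, USD
    CLOUD_RATE = 0.90    # assumed on-demand GPU instance cost, USD/hour

    breakeven_hours = BUILD_COST / CLOUD_RATE
    print(f"Break-even after ~{breakeven_hours:.0f} GPU-hours")
    print(f"At 4 hours/day, that's ~{breakeven_hours / (4 * 7):.0f} weeks")

If you expect to be training and re-training networks for months, the hours add up quickly.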

The Build

The blog posts linked above are excellent and offer great guidelines for building your own deep learning computer. I’m writing to add my own thoughts, especially on the aspects of the process that confused me the most. PCPartPicker can help you search for parts and find the lowest prices. I generally went with Amazon, since the price differences were often small and I find the ordering experience there better.

(Quick aside: I also wanted to mention that ServeTheHome had some interesting posts about deep learning builds using server components: lower-power CPUs with potentially higher core counts, motherboards with higher memory limits, 10Gbit Ethernet, and network-accessible BIOS, plus more robust memory. Overall, though, consumer components seemed more cost-effective for a GPU-focused build. Apparently older-generation, high-core-count server components sometimes flood the used market, which could make for some interesting highly parallel, low-cost builds. No one else I read used server components, so I found their contribution especially interesting.)

  • The GPU is the main component of this build, and should make up a significant fraction of its cost. ServeTheHome has a nice article showing the following graph of GPU compute per unit price.
GPU capability per unit price for various Nvidia GPUs. Figure from ServeTheHome.

This chart shows the unit price of processing power (CUDA cores) and of memory (which limits how big your models can be) for some recent Nvidia GPUs. The GTX 1060 (3GB or 6GB) cards are the sweet spot for GPU compute per unit price, if you can afford them (about $240 for the 6GB version at the time of writing). Lots of manufacturers make this card (MSI, Gigabyte, EVGA, etc.), and I chose the cheapest well-reviewed option on Amazon at the time of purchase.
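To make the chart’s point concrete, here’s a quick sketch of compute and memory per dollar. The CUDA core counts and memory sizes are Nvidia’s published specs; the prices are rough 2017 street prices (in line with those quoted in this post) and will drift over time:

    # Compute and memory per dollar for a few GeForce 10-series cards.
    # Core counts and memory sizes are published specs; prices are
    # rough 2017 street prices and will vary.
    cards = {
        # name:          (CUDA cores, memory GB, approx. price USD)
        "GTX 1050":      (640,  2,  110),
        "GTX 1060 6GB":  (1280, 6,  240),
        "GTX 1070":      (1920, 8,  370),
        "GTX 1080 Ti":   (3584, 11, 700),
    }

    for name, (cores, mem_gb, price) in cards.items():
        print(f"{name:13s} {cores / price:5.2f} cores/$ "
              f"{mem_gb / price * 1000:5.1f} MB/$")

The GTX 1060 6GB lands at or near the top on both measures, consistent with the chart.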

  • The CPU doesn’t need to be super fast (though it can help with pre-processing steps), since this build is mostly about GPU compute. A great budget option is one of Intel’s i3 chips (e.g. the i3-7100, or the one-generation-older i3-6100) for about $120. These CPUs come with their own coolers, which are perfectly adequate. If you want a much quieter machine, you can get a better cooler (Noctua seems to have a great reputation; this one looks nice). The i3 chips have two cores; a moderate upgrade is a quad-core i5 for ~$200.
  • The rule of thumb I’ve read for memory is that system memory should be roughly double your GPU memory. I got 16GB of DDR4 (~$100–140 depending on speed, which in turn depends on your motherboard), and it has worked great so far.
  • I had a 2.5" spinning hard drive lying around, so I used that. I actually ordered an SSD (this 250GB option for $94 seems popular), anticipating that the spinning drive would be very slow, but it worked perfectly well and I returned the SSD. I think using the machine headlessly (i.e. without a monitor) makes fast storage less necessary.
  • I got this $73 650W power supply (PSU). I don’t need that much power (PCPartPicker estimates my build draws ~220W; a rough way to estimate this yourself is sketched below), but lower-wattage PSUs weren’t really cheaper, and this leaves some room for upgrades if I ever decide to make them. It’s “fully modular” (the power cables detach from the PSU, so you only connect the ones you need) and 80 Plus Gold rated (a measure of its efficiency).
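For the PSU-sizing estimate, summing rough per-component power draws gets you close. The figures below are approximate TDP specs for the parts in this build (illustrative, not measured):

    # Back-of-the-envelope PSU sizing: sum approximate component
    # power draws, then leave generous headroom (PSUs tend to run
    # most efficiently around 50% load).
    watts = {
        "GTX 1060 6GB":      120,  # Nvidia's TDP spec
        "i3-7100":            51,  # Intel's TDP spec
        "motherboard + RAM":  40,  # rough estimate
        "drives + fans":      15,  # rough estimate
    }

    total = sum(watts.values())
    print(f"Estimated draw: ~{total}W")       # ~226W, near PCPartPicker's ~220W
    print(f"Comfortable PSU: ~{2 * total}W")  # ~450W; 650W leaves upgrade room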

I left the case and motherboard for last, as they’re somewhat tied together and require some discussion. First you have to decide what form factor you want. The pros and cons of each were something I spent a lot of time pinning down, so I’ll try to summarize.

ATX is what you might remember using if (like me) you built PCs a decade or more ago: it has many expansion slots for sound cards, network cards, and so on. Since much of that functionality is now built into modern motherboards, Micro ATX (mATX) and Mini ITX (mITX) are much more prevalent. mATX is used in more traditional full-size towers, whereas mITX shows up in various smaller builds. mATX offers multiple PCIe expansion slots (useful if you eventually want to try linking multiple GPUs) compared to mITX’s single slot, and supports up to 64GB of RAM versus mITX’s 32GB. mATX boards tend to be a bit cheaper (less careful board layout and fewer miniaturized components), though mITX cases are cheaper (less material). mATX cases are generally easier to cool, though I think this only matters if you have a high-power CPU or are overclocking your components (more common in gaming builds).

I preferred a smaller case and computer, and I didn’t think I’d need more than 32GB of RAM, so I chose mITX.

  • As a long-time Mac hardware enthusiast, I was hoping to find a simple, attractive case with high build quality and low fan noise. I ended up with this $58 case from Phanteks. It’s well built, has many clever cable-management and other design touches, and comes with a giant 200mm fan (bigger fans are generally quieter). It’s bigger than many mITX cases but smaller and cheaper than most mATX cases, so squeezing in a huge graphics card and a full-size power supply wasn’t an issue (with some mITX cases, the most powerful GPUs literally don’t fit).
  • Having chosen mITX, I had narrowed the motherboard choices considerably, but the many remaining letters and numbers in motherboard names still confused me; until I figured out what they meant, I had a hard time choosing which to buy. For our chosen Intel CPUs, LGA 1151 will appear in the product name: this is the CPU socket the board is compatible with (6th/7th-generation chips, code-named Skylake and Kaby Lake). You’ll also often see Z270 or B250 or something similar, which refers to the motherboard chipset, specified by Intel and implemented by the manufacturer (MSI, Gigabyte, and so on). If you end up using one-generation-older Skylake CPUs (just slightly slower and cheaper than the latest generation), you’ll see Z170 or B150 chipsets instead. As far as I can tell, the Zx70 chipsets allow overclocking and have a few marginal improvements over the Bx50 chipsets, the “business” versions. The Bx50 motherboards are slightly cheaper and better reviewed on Amazon (I read somewhere they were “more reliable”; I’m not sure that’s true). I wasn’t going to overclock and didn’t need the functionality the B-series lacks, so I chose this $100 MSI motherboard.

Summary

Here’s my recommendation for an ~$800 Deep Learning PC build.

Summary of build (prices mostly from PCPartPicker, a couple from Amazon, at the time of publication):

  • GPU: Nvidia GTX 1060 6GB (~$240)
  • CPU: Intel i3-7100 (~$120)
  • Memory: 16GB DDR4 (~$100–140)
  • Storage: 250GB SSD (~$94)
  • Power supply: 650W, fully modular, 80 Plus Gold (~$73)
  • Case: Phanteks mITX (~$58)
  • Motherboard: MSI B250 mITX (~$100)

Total: ~$800

I had some extra budget, so I made a few substitutions:

  • Chose an i5-7500 instead of the i3-7100 for two extra CPU cores (+$80).
  • Chose a GTX 1070 instead of the GTX 1060 6GB for more GPU power (+$130).
  • Got a nicer CPU cooler (+$65) for a quieter machine.

Some reasonable budget-conscious swaps:

  • Use a GTX 1050 instead of the 1060 6GB (-$130), or an even older or used GPU (with the newly released GTX 1080 Ti out, folks may be upgrading and selling their old cards). You can still get huge benefits over laptops or non-GPU-optimized desktops with any of these dedicated GPUs.
  • Use a generation-older CPU/motherboard (Skylake instead of Kaby Lake, e.g. an i3-6100 instead of an i3-7100). Kaby Lake is built on the same process and is only a modest upgrade over Skylake (-$10–20).
  • Use a cheaper mITX case (-$30–40).

I didn’t buy a mouse, keyboard, or monitor, since I planned to use the machine remotely from my Mac’s command line over ssh. I was delighted to find during setup that the wireless Magic Keyboard and Magic Trackpad 2 worked on Ubuntu over USB via their Lightning-to-USB charging cables.

Here are some miscellaneous details I’d have appreciated knowing while buying parts and when they arrived:

  • The CPU comes with its own cooler and thermal paste (unless you choose the overclocking “K” variants, but then you probably know what you’re doing).
  • The motherboard comes with a couple of SATA cables. If you need more than two, buy some extras.
  • You’ll need to attach the main motherboard power cable from the PSU, plus a separate CPU power cable. Your GPU will probably need one of the PSU’s PCIe power cables (often labeled “VGA”).
  • Depending on how many case fans you have and how many fan headers (connection spots) your motherboard has, you may want some fan-header splitters.
  • You’ll probably want a USB stick, if you don’t already have one, to install your operating system.

Conclusion

As an ex-PC builder and long-time Mac user, I had a lot of fun revisiting the world of PC building (oh, the ridiculous gaming-focused marketing!). In this post I tried to address the points of confusion I encountered during my own build process, while leaving out extraneous information. Hopefully it’s been helpful!

The deep learning software configuration is enough for its own post, and there are many great articles on the topic. In short, I’m running Ubuntu 16.04 (the latest “long-term support” release) with key-only ssh access. At a basic level, you’ll want to install the Nvidia drivers for your GPU, CUDA, and the many relevant Python numeric and scientific libraries (including Jupyter!), plus maybe OpenCV, making sure your environment is properly set up to use the GPU. You can install additional libraries (TensorFlow, PyTorch, Caffe, Keras) as you need them.
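Once the drivers and CUDA are in place, it’s worth a quick sanity check that your frameworks can actually see the GPU. Here’s a minimal sketch, assuming you’ve installed PyTorch (TensorFlow has equivalent checks):

    # Minimal check that the Nvidia driver + CUDA are working,
    # assuming PyTorch is installed; TensorFlow offers similar calls.
    import torch

    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
        x = torch.rand(1000, 1000).cuda()   # small matmul as a smoke test
        print("GPU matmul OK:", (x @ x).shape)
    else:
        print("No CUDA device found; check driver and CUDA installs")

Running nvidia-smi on the command line is another quick way to confirm the driver sees the card.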

Good luck, and thanks to the awesome Udacity student community for helpful blog posts and discussion!