GPU servers for machine learning startups: Cloud vs On-premise?

Michael Reibel Boesen
13 min read · May 8, 2017

At Corti it was never a question whether we should get on-premise GPU servers, due to our application's inherent data privacy concerns. But for other machine learning startups the question remains: should you go cloud or on-premise for your training hardware? This post will also help you get started with your first GPU server.

Disclaimer: This post is to be seen as advice only. If you follow my guide and burn a lot of money, components, or both, I cannot be held at fault.

Do you even need GPU servers?

Yes. GPUs are awesome calculation machines. They have green neon lights and sound like jet engines!

Ok, maybe let's dive a bit deeper into that question before we get our geek on. This one is simple to answer: if you're training deep neural networks deeper than a couple of layers, then yes, you need them. Even a 5-10x speedup on a one-hour CPU training job is going to save you tons of time in the end, and even more so if we're talking weeks of CPU training.
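To put that arithmetic in concrete terms, here is a minimal sketch. The 5-10x speedup figures come from the paragraph above; the job durations are hypothetical examples:

```python
# Illustrative arithmetic: wall-clock time saved by a GPU speedup.
# Speedup factors (5x, 10x) are from the post; job durations are hypothetical.

def gpu_hours(cpu_hours: float, speedup: float) -> float:
    """Return the hours the same job takes at the given speedup."""
    return cpu_hours / speedup

# A one-hour CPU job at a 5x speedup finishes in 12 minutes.
print(gpu_hours(1, 5) * 60)      # minutes

# A two-week CPU job (336 hours) at a 10x speedup finishes in about 1.4 days.
print(gpu_hours(336, 10) / 24)   # days
```

The second case is where the savings really compound: shaving two weeks of training down to under two days changes how often you can iterate on a model.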

Now that we've got that out of the way, let's look at cloud vs. on-premise servers. We're going to compare the two based on the primary factors that we at Corti feel matter:

  1. Performance
  2. Cost
