Choosing the Right Disk Type for Performance
When we talk about “The Cloud”, we tend to over-index on things like “Compute” and “Network” because, quite frankly, those are generally the things a cloud provider can give you that a local machine sitting under your desk can’t. But with this focus, we tend to forget something: disks.
All that relational data, computed data, and those temporary scratch files have to go somewhere. And on Google Cloud Platform, choosing the right type of disk can make a huge difference in your application’s performance. As such, let’s put a few of them through their paces so you know what to pick for your needs.
Disk Types
In Google Cloud Platform, there are two main types of disks: Local SSDs and Persistent Disks.
Local SSDs are what you’d expect: drives physically attached to the server that hosts your virtual machine instance. The data that you store on a Local SSD persists only until you stop or delete the instance.
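For reference, here’s a minimal sketch of spinning up a VM with a Local SSD attached using the gcloud CLI; the instance name, zone, and machine type below are placeholders, not recommendations:

```bash
# Create a VM with one Local SSD attached over NVMe.
# Instance name, zone, and machine type are illustrative placeholders.
gcloud compute instances create fio-test-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --local-ssd=interface=NVME
```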
Persistent disks, on the other hand, are durable network storage devices that your VM treats as though they were physically connected disks. The upside here is that persistent disks exist independently of your VM instance, so you can detach or move them to keep your data even after you delete your instances.
Now, on the PD side, it’s worth noting that there are two flavors, Standard and SSD, which are identical from a usage perspective; the main difference between them is the performance they offer.
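To make that concrete, here’s a hedged example of creating one disk of each flavor with gcloud; the disk names, size, and zone are placeholders:

```bash
# Standard PD: cheapest per GB, lower IOPS and throughput.
gcloud compute disks create bulk-disk \
    --size=500GB --type=pd-standard --zone=us-central1-a

# SSD PD: same usage model, higher IOPS and throughput.
gcloud compute disks create fast-disk \
    --size=500GB --type=pd-ssd --zone=us-central1-a
```

Either disk can then be attached to a running instance with `gcloud compute instances attach-disk`.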
Besides what I’ve mentioned here, there are lots of interesting, subtle differences between PDs and Local SSDs, so I recommend looking at the official documentation to get a sense of those implementation nuances.
From my perspective, however, I’m mostly concerned with performance. So let’s take a look at how each of these setups performs, and what types of workloads each makes sense for.
The performance of disk types
When it comes to measuring disk performance on Linux machines, I tend to turn to my trusty friend FIO, which spawns a number of threads to perform a particular type of I/O action against the disk. That’s really helpful when you want to determine how performance changes across use cases. When we run FIO across a bunch of different test configurations, we end up with a general chart like the one below, measuring I/O operations per second, per gigabyte of data (IOPS/GB).
Immediately, we see a very clear trend: Local SSDs outperform PDs of any kind by a significant margin with respect to IOPS. This makes sense; since Local SSDs are physically attached to the VM, they give us sub-millisecond latency for I/O, which of course yields higher performance.
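For reference, the invocations behind numbers like these look something like the sketch below: a 4 KB random-read job run against a file on the disk under test. The mount point, file size, and queue depths here are illustrative, not the exact configurations behind the chart:

```bash
# Random-read IOPS test; /mnt/disks/test is a placeholder mount point
# for whichever disk (Local SSD, standard PD, SSD PD) is being measured.
fio --name=randread-test \
    --directory=/mnt/disks/test \
    --ioengine=libaio \
    --direct=1 \
    --rw=randread \
    --bs=4k \
    --size=1G \
    --numjobs=4 \
    --iodepth=32 \
    --runtime=60 \
    --time_based \
    --group_reporting
```

Swapping the `--rw` and `--bs` options lets you model different workload shapes, which is how you build up a chart across configurations.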
What’s all that good for?
Now, unless you’ve spent time streaming data off an optical disk, or really fine-tuning your IOPS for some specific workload, chances are you don’t have a frame of reference for what kind of disk performance your application actually needs, given the workloads it runs. Broadly speaking:
- Boot volumes and bulk storage are great use cases for standard PD: these workloads need only low IOPS and low throughput, so you just want the cheapest, most reliable space available.
- Streaming IO is a use case that works well for standard PD too; while the other two options have a lower cost per IOPS, the cost per unit of throughput is generally better on standard PD, and reading large sequential blocks is exactly what spinning disks are good at. (There’s a sketch contrasting streaming and transactional workloads after this list.)
- Databases (relational SQL stores, as well as NoSQL options like Cassandra, MongoDB, and Redis) tend to be transaction- and IOPS-heavy. Smaller instances might be able to run on standard PD, but production databases should be run on SSD PD, depending on your IOPS needs.
- File servers can lean either more toward streaming or more toward transactional IO, depending on what the clients are doing; SSD PD is generally a great choice here.
- High-performance scratch disks, or workloads where transactional data needs to be computed locally, tend to perform best on Local SSD. This is also an ideal fit for things like Hadoop deployments.
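As promised above, here’s a sketch of how the streaming and transactional workload shapes differ as FIO jobs; the paths, sizes, and queue depths are placeholders:

```bash
# Streaming workload: large sequential reads, throughput-bound.
fio --name=streaming --directory=/mnt/disks/test --ioengine=libaio \
    --direct=1 --rw=read --bs=1M --size=2G --iodepth=16 \
    --runtime=60 --time_based

# Transactional workload: small random reads, IOPS-bound.
fio --name=transactional --directory=/mnt/disks/test --ioengine=libaio \
    --direct=1 --rw=randread --bs=4k --size=2G --iodepth=64 \
    --runtime=60 --time_based
```

On a standard PD, you’d expect the first job to post respectable throughput while the second struggles on IOPS; on SSD PD and Local SSD, the gap narrows considerably.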
Fine tuning performance
It’s worth reiterating that each of the three disk options has various configurations and modifications that can change how it performs under different workloads. Recently we had a few customers run into exactly these problems, and I’ll be highlighting the most common ones in a few upcoming articles.
Stay tuned for more!