Let’s stop trying to visualize cloud computing as physical hardware

Greg Wilson
6 min read · May 30, 2015


Until recently, I thought of cloud computing as the virtualization of physical hardware. I thought of virtual machines as servers, cloud storage as a collection of Internet-connected disks, and so on. To show how deep this mental model went for me, take a look at the screenshot below. It’s from a tablet app (iPad and Android) called “Greg’s Toolkit” that I conceived in 2011 and built with two friends. The app would introspect the user’s Amazon Web Services account and render a visual computer room: racks filled with servers that represented their EC2 instances, disks that represented their S3 buckets and attached volumes, and so on. Our pitch was that it made the virtual world familiar again.
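The introspection itself was straightforward. Here is a rough sketch of the kind of account inventory the app built, using the boto library (the de facto AWS SDK for Python at the time); it’s illustrative only, not the app’s actual code:

    # Rough sketch of the account inventory that "Greg's Toolkit" turned
    # into racks and disks. boto reads AWS credentials from the environment
    # or its config file. Illustrative only, not the app's actual code.
    import boto

    ec2 = boto.connect_ec2()
    for reservation in ec2.get_all_instances():
        for instance in reservation.instances:
            print('server: {} ({}, {})'.format(
                instance.id, instance.instance_type, instance.state))

    s3 = boto.connect_s3()
    for bucket in s3.get_all_buckets():
        print('disk: {}'.format(bucket.name))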

The app did OK, but it never gained the traction we had hoped for, so after a few months, we shut it down and moved on.

The epiphany

When I conceived Greg’s Toolkit, cloud computing was mostly a hobby for me: the back end of several apps and websites I had created. About a year ago, I joined the Google Cloud Platform team, and my fun hobby suddenly became my new day job! During my first few months at Google, I still thought of cloud computing as infrastructure described by physical entities such as “servers”, “machines”, “CPUs”, and “disks”. When I started learning about Google App Engine, however, I began to see flaws in my thinking. App Engine is a true platform-as-a-service (PaaS) that lets you deploy your code without giving any thought to the hardware it runs on. Important needs, including auto-scaling, security, and authentication, are taken care of for you. There is simply no place for it in the screenshot above, which mapped everything to a physical environment. My mental model was quickly falling apart.
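To make the contrast concrete, here is a minimal sketch of what a complete App Engine application (Python standard environment) looked like at the time. Notice what’s absent: no machine types, no disks, no operating system, no capacity planning.

    # app.yaml -- the entire deployment configuration.
    runtime: python27
    api_version: 1
    threadsafe: true
    handlers:
    - url: /.*
      script: main.app

    # main.py -- the entire application. Scaling, routing, and patching
    # are the platform's problem, not ours.
    import webapp2

    class MainPage(webapp2.RequestHandler):
        def get(self):
            self.response.write('Hello from App Engine!')

    app = webapp2.WSGIApplication([('/', MainPage)])

You deploy it with a single command (appcfg.py update at the time), and App Engine takes it from there.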

Soon after, I started learning about containers (including Docker and Kubernetes). Containers are similar to VMs in some ways, but unlike a VM, a container doesn’t virtualize an entire server. Containers are very lightweight compared to VMs because the OS is not part of the container itself. You can even run multiple containers on the same host, each isolated from the others from the application’s point of view. At that point, my mental model wasn’t just strained; it was fully invalidated. This epiphany has me looking at the future of cloud computing in a much broader way.
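A minimal Dockerfile makes the point nicely. Assuming a trivial Python app in app.py, the entire recipe for a container image is a few lines, and none of them mention installing an operating system, virtual CPUs, or disks:

    # Dockerfile -- a minimal sketch for a toy Python app (assumed to live
    # in app.py). The image layers just the app onto a shared base image;
    # the host's kernel does the rest, which is why a container weighs
    # megabytes rather than gigabytes and starts in seconds.
    FROM python:2.7
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]

Build it once with docker build, then run as many isolated copies on one host as it can hold with docker run.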

With my new expanded, non-physically-constrained view of cloud computing, I’ve been thinking about what the future might look like. Today, major cloud vendors still use many physical terms in the names and descriptions of their services.

Let’s take a look at storage. When you need virtual disks with high IOPS, you select “SSD-based” storage. But the fact that SSDs are the devices providing that performance is an implementation detail, and exposing the physical hardware to the user actually contradicts the real value proposition of cloud computing: that you don’t have to be concerned with implementation details. I suspect “SSD” is used because, in today’s world, SSDs are universally known to be faster than spinning disks, so it’s a bit of a marketing play. Looking toward the future, it’s easy to imagine a world where you choose the performance, durability, and access model you need for your storage without any physical (legacy?) terms. Behind the scenes, the solution might be implemented with SSDs, high-speed memory, or some other technology, depending on those needs, but we will no longer care which hardware it was.

We’ve already seen some progress here from the major cloud vendors. For example, Google Cloud Bigtable provides a familiar interface (HBase) with very high reliability and extreme Google-level scalability, and it does so without exposing any details of how it’s implemented. It’s crazy fast, it’s super reliable, and it uses an established interface, and that’s all we should care about.
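To make that future a little more tangible, here is a purely hypothetical sketch of requirements-driven storage provisioning. Every name in it is invented for illustration; no such API exists:

    # Purely hypothetical sketch -- every name below is invented. The point
    # is that nothing in the request names hardware: no SSD, no disk, no
    # spindle. The provider maps the requirements to whatever fits.
    storage_request = {
        'access': 'block',         # block / object / table: the access model
        'read_iops': 25000,        # the performance we need
        'durability': '11 nines',  # how safe the data must be
        'latency_ms': 1,           # worst-case access time
    }
    # A hypothetical provisioning call could then be as simple as:
    #   volume = cloud.storage.provision(**storage_request)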

A glimpse at some of the hardware that you don’t have to manage — one of the Google data centers — Pryor, Oklahoma

Now let’s talk about the compute side of things. When looking at the offerings of the major cloud vendors, you’ll quickly run into references to physical hardware and other implementation details: CPU generations (e.g. Ivy Bridge, Haswell), core counts, and clock speeds. Are these the details we need in the future of cloud computing, or do we need a new vocabulary that relates more closely to how we will use the computing power? Most of us struggle as it is to relate actual performance to clock speed, multi-core processing, and hyper-threading. For example, which MacBook Pro is faster: the model with a “2.9GHz dual-core Intel Core i5, Turbo Boost up to 3.3GHz”, or the one with a “2.2GHz quad-core Intel Core i7, Turbo Boost up to 3.4GHz”? Is the quad-core model faster than the dual-core for every type of task? Is the i5’s GHz figure directly comparable to the i7’s? You have to do a lot of studying to understand the differences, and it gets very technical very fast. For most of us, these specs mean very little. We simply care how fast the stuff we do will get done. Most of us just take the wine approach and assume that the more expensive one must be better.

The need for better abstraction

During my college years, I briefly pursued a degree in Computer Engineering until I realized that I’m more of a software guy than a hardware guy (whew!). The curriculum included several required electrical engineering courses, where I learned that electricity is extremely complicated… more complicated than most of us care to understand! However, most everyday applications of electricity have been abstracted into simple plumbing concepts: current “flows” and is measured in amps, a metaphor most of us can grasp. That abstraction provides an adequate level of understanding for most common applications without requiring any knowledge of how electricity actually works at the atomic level.

Cloud computing needs similar abstractions that let us focus on how computing power is applied rather than how it’s implemented. This got me thinking about how we would define our computing needs when configuring future cloud services. When you boil it down to the ones and zeros, we have a series of instructions that need executing, with various inputs and outputs. How will we distinguish between the need for extremely fast execution of a single stream of instructions and work that can be parallelized across many CPUs? Will we need to specify that certain instructions require specialized hardware (e.g. a GPU), or will the service determine that automatically? I’m in over my head now, but hopefully you get my point: we need to start thinking about cloud computing in different ways, using a different vocabulary. One possible shape of that vocabulary is sketched below.
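Again, a purely hypothetical sketch, with every name invented for illustration:

    # Purely hypothetical sketch -- invented names throughout. The request
    # describes the shape of the work; the provider decides whether that
    # means one fast core, many cores, a GPU, or something else entirely.
    compute_request = {
        'workload': 'parallel',           # or 'serial' for one fast stream
        'streams': 512,                   # independent pieces running at once
        'specialization': 'matrix-math',  # a hint that might map to a GPU today
        'deadline_s': 60,                 # when the results are needed
    }
    # A hypothetical call: job = cloud.compute.run(my_program, **compute_request)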

All of this has me pondering a few things:

  • How will our vocabulary change as cloud evolves?
  • Does this future vocabulary include terms like disks, drives, SSDs, and CPUs?
  • How will we describe our storage and compute needs?
  • As network speeds continue to improve, will the physical location of our data and computing matter as long as we get the performance and durability we need? In other words, is location another implementation detail that we will no longer be concerned about?
  • Will the need for hybrid cloud solutions diminish as cloud performance improves and prices continue to drop? Does this become an unneeded implementation detail?

If you look at how much cloud computing has changed in the past few years and extrapolate to the near future, it’s super exciting.

Originally published at gregsramblings.com on May 30, 2015.
