An introduction to the issues of heat evacuation in data centers

Whenever people talk about data centers, somehow, regardless of where the conversation starts, it ends up being about data center cooling. It is such a large topic that probably half of the thinking that goes into a data center goes into organizing its cooling. With this post, we will sample the topic of heat evacuation in data centers and provide a broad-stroke overview of the issue, so that those of us for whom the subject isn’t obvious will at least get an idea of why it is important and what is being done about it.

To start off, let’s figure out which part of a data center generates the most heat. A typical facility consists of:

  • server space (containing servers, which are essentially modified, stripped-down computers that process and store the data that lives on the web)
  • Uninterruptible Power Supply (UPS) — a.k.a. batteries
  • Heating, Ventilation and Air Conditioning (HVAC) systems
  • staff control station and offices
  • back-up generators
  • backbone (the “wiring” that ties the whole thing together)

If we were to put on a pair of infrared goggles and start zooming in on the zones that generate the most heat in a data center, we would soon discover that the main heat generators are the servers themselves. If we were to zoom in a bit closer still, inside an individual server chassis, we would see that the part responsible for the most heat is most probably the central processing unit (CPU) — the part of the server (or of a regular computer) that crunches the numbers and directs the flow of bits within the machine, acting as a miniature, hyper-complicated calculator (depending on the configuration of the server, other parts can generate more heat, but the CPU is the usual suspect). So, volume-wise, the part that requires most of the cooling occupies only a fraction of the space that a typical data center commands.

So, why do CPUs get so hot that we need to artificially cool them?

The short answer is that nothing is 100% efficient, microprocessors included; in a way, you can use the ratio of exhaust heat to actual useful work as a measure of a system’s efficiency. Moving bits around is work that requires energy (electricity) and inevitably produces heat as a byproduct. Just look at your computer and try to imagine its insides: tiny wires, each exerting a certain resistance against the electric current that carries the bits around your machine, and billions of microscopic transistors flipping between 0 and 1 billions of times per second. If you visualize this, the gentle warmth coming from your laptop’s chassis will start making sense. Now imagine, instead, thousands upon thousands of computers, probably more powerful than yours, crammed into racks filling a warehouse-sized facility and working at full throttle! Can you feel the heat?
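To put rough numbers on it, remember that a processor does essentially no mechanical work, so practically every watt a server draws from the wall ends up as heat that has to be evacuated. The figures in the sketch below are purely illustrative assumptions, not measurements from any particular facility:

```python
# Back-of-the-envelope heat load. All figures are illustrative assumptions:
# practically every watt a server draws is eventually released as heat.

servers_per_rack = 40        # assumed fairly dense rack
watts_per_server = 400       # assumed average draw under load, in watts
racks = 500                  # assumed mid-sized facility

rack_heat_kw = servers_per_rack * watts_per_server / 1000
facility_heat_mw = rack_heat_kw * racks / 1000

print(f"Heat per rack:     {rack_heat_kw:.0f} kW")       # ~16 kW
print(f"Heat per facility: {facility_heat_mw:.1f} MW")   # ~8 MW, around the clock
```

Even with these modest assumptions, a single facility sheds megawatts of heat around the clock, which is why cooling dominates so much of the design.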

The problems with the heat

There are a number of ways to evacuate server exhaust heat. The most common one is air cooling: chilled air is blown through the server racks, picks up the heat, and the warm exhaust air is then carried away and cooled down again.

Although common and pretty straightforward, this approach is also responsible for the dirty name that data centers have gained for themselves in the last few years. The air is cooled by heat exchangers containing chilled water, and running the chillers that condition that water is a very energy-intensive undertaking. This results in some data centers using as much energy for cooling servers as they do for actually running them, which cannot be good for the environment.
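To get a feel for how much air this actually takes, here is a quick sketch based on the standard sensible-heat relation; the rack power and the allowed air temperature rise are illustrative assumptions:

```python
# How much air does it take to carry that heat away? Sensible-heat relation:
#   heat = air_density * volumetric_flow * specific_heat * temperature_rise
# Rack power and allowed temperature rise are illustrative assumptions.

rack_heat_w = 16_000        # assumed heat output of one rack, in watts
air_density = 1.2           # kg/m^3, near room temperature
air_cp = 1005.0             # J/(kg*K), specific heat of air
delta_t = 10.0              # assumed allowed air temperature rise, in kelvin

flow_m3_per_s = rack_heat_w / (air_density * air_cp * delta_t)
flow_cfm = flow_m3_per_s * 2118.88   # cubic feet per minute, a common HVAC unit

print(f"Airflow needed: {flow_m3_per_s:.2f} m^3/s (~{flow_cfm:.0f} CFM) per rack")
# -> roughly 1.3 m^3/s of chilled air, continuously, for a single rack
```

Multiply that by hundreds of racks and it becomes clear why the fans, chillers and water loops add up to such a large share of a facility’s energy bill.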

PUE

PUE, or Power Usage Effectiveness, is the industry’s standard yardstick here: the ratio of the total energy a facility consumes to the energy that actually reaches the IT equipment, with 1.0 being the unattainable ideal. In pursuit of an optimal PUE (which helps save both the environment and money for the operators), we’ve been seeing a whole host of new approaches to cooling data centers emerge in the last few years.
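As a quick illustration, here is how the ratio works out; the energy figures below are made-up round numbers, chosen only to mirror the “as much energy for cooling as for computing” scenario described above:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every joule reaches the servers. The figures below are
# made-up round numbers for illustration, not data from any real facility.

it_energy_mwh = 10_000        # assumed annual energy delivered to the servers
cooling_energy_mwh = 9_000    # assumed annual energy spent on chillers and fans
other_overhead_mwh = 1_000    # assumed lighting, UPS losses, offices, etc.

total_energy_mwh = it_energy_mwh + cooling_energy_mwh + other_overhead_mwh
pue = total_energy_mwh / it_energy_mwh

print(f"PUE = {pue:.2f}")     # 2.00 here; the closer to 1.0, the better
```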

Best contemporary practices in data center cooling

Beyond the cooling systems themselves, some operators attack the problem from the hardware side. Facebook, for instance, is in the vanguard of optimizing data center hardware, having founded the Open Compute Project, an organization dedicated to open-source server design with the goal of optimizing server performance. Founded just a few years ago, it has by now been joined by a number of major players in the data center ecosystem, such as AT&T and Equinix.

There are also efforts to cool data centers using liquids, but so far this approach has mostly been limited to supercomputers and data centers tailor-designed for very specific applications, such as bitcoin mining. Approaches vary from immersion cooling to routing water to specific components inside the server chassis. Although roughly 4,000 times more effective than air, liquid cooling is complicated, plus the very words “liquid” and “electronics” do not mesh very well, resulting in a phenomenon similar to “range anxiety” (the phrase used to describe clients hesitating to buy electric vehicles): a concern rooted in fear rather than fact. In any case, the benefits of liquid cooling are hard to dispute, and we hope the industry will warm up to it soon enough.
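Where does a figure like “4,000 times” come from? It is essentially a comparison of volumetric heat capacity, i.e. how much heat a given volume of coolant can absorb per degree of temperature rise. The sketch below uses rounded textbook values for water and room-temperature air; the exact ratio depends on temperature and pressure:

```python
# Why is liquid so much better at carrying heat? Compare volumetric heat
# capacity: how much heat one cubic metre of coolant absorbs per kelvin.
# Rounded textbook values; the exact ratio depends on temperature and pressure.

water_density = 1000.0      # kg/m^3
water_cp = 4184.0           # J/(kg*K)
air_density = 1.2           # kg/m^3, near room temperature
air_cp = 1005.0             # J/(kg*K)

water_per_m3_k = water_density * water_cp   # ~4.2 MJ per m^3 per kelvin
air_per_m3_k = air_density * air_cp         # ~1.2 kJ per m^3 per kelvin

ratio = water_per_m3_k / air_per_m3_k
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
# -> roughly 3,500x, which is where claims of "thousands of times more
#    effective than air" come from
```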

Rather than only reducing exhaust heat, some companies aim to capitalize on it. Examples include Yandex’s data center in Finland that will provide heating to a nearby village, the water-cooled IBM Aquasar supercomputer heating buildings on ETH Zurich’s campus, and tiny home-heating servers by the French company Qarnot or the Dutch Nerdalize, among others.

Although a good remedy, heat reuse is not the best answer to the problem of server exhaust heat, because the best answer is not producing that heat in the first place. We believe the true solution can only come from optimizing the work of the tiny pieces that generate the heat to begin with.

For now, though, we can get creative with that heat and have fun with it!

Did you find this article helpful? If so, we would appreciate it if you could press the tiny heart symbol below or share it.

Also, in case you didn’t know, last week Project Rhizome had the honor of being featured on DatacenterKnowledge — a leading online source of daily news and analysis about the data center industry! Check out the article about us here!

And don’t forget to stay in touch with us via our website, Facebook or Twitter!

Project Rhizome is a design start-up that aims to combine the development of cutting-edge data infrastructure with the creation of attractive urban environments.
