Hot stuff!

An introduction to the issues of heat evacuation in data centers

Whenever talking about data centers, somehow, regardless of the starting point of the conversation, one ends up talking about data center cooling. It is such a large topic that probably half of the thinking that goes into a data center goes into organizing its cooling. With this post, we will sample the topic of heat evacuation in data centers and provide a broad-stroke overview of the issue, so that those of us for whom the subject isn’t obvious will at least get an idea of why it is important and what is being done about it.

To start off, let’s figure out which part of a data center generates the most heat.

Like everything else in the world, data centers consist of parts, which consist of yet smaller parts, and still smaller parts, and so on. To name the major components, a typical data center building consists of:

  • server space (containing servers, which are essentially modified, stripped-down computers that process and store the data that lives on the web)
  • Uninterruptible Power Supply (UPS), a.k.a. batteries
  • Heating, Ventilation and Air Conditioning (HVAC) systems
  • staff control station and offices
  • back-up generators
  • backbone (the “wiring” that ties the whole thing together)

If we were to put on a pair of infrared goggles and start zooming in on the zones that generate the most heat in a data center, we would soon discover that the main heat generators are the servers themselves. If we were to zoom in closer still, inside an individual server chassis, we would see that the part responsible for most of the heat is usually the central processing unit (CPU), the part of the server (or of a regular computer) that crunches the numbers and directs the flow of bits within the machine, acting as a miniature, hyper-complicated calculator (depending on the configuration of the server, other parts can generate more heat, but the CPU is the usual suspect). So, volume-wise, the part that requires most of the cooling occupies only a tiny fraction of the space that a typical data center commands.

So, why do CPUs get so hot that we need to artificially cool them?

In thermodynamics, heat is considered to have higher entropy than other forms of energy. Since the state of maximum possible entropy is where all systems “desire” to end up, other forms of energy (electricity, nuclear energy, etc.) tend to morph into heat in the process of doing work. For example, when you use electricity to spin the wheels of a toy car with an electric motor, the electricity reaching the motor eventually gets split into useful mechanical energy (spinning the wheels) and exhaust thermal energy (heat).

Hence, in a way, you can use the ratio of exhaust heat to actual useful work as a measure of the efficiency of a system. And as we know, nothing is 100% efficient, microprocessors included. Moving bits around is work that requires energy (electricity) and inevitably results in some heat as a byproduct. Just look at your computer and try to imagine its insides: tiny wires, each offering some resistance to the electric current that carries the bits around your machine, and billions of microscopic transistors flipping between 0 and 1 billions of times per second. If you visualize this, the gentle warmth coming from your laptop’s chassis will start making sense. Now imagine, instead, thousands upon thousands of computers, probably more powerful than yours, crammed into racks filling a warehouse-sized facility and working at full throttle! Can you feel the heat?
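
To put rough numbers on that mental picture, here is a quick back-of-the-envelope sketch in Python; the rack counts and wattages below are assumptions made up for illustration, not figures from any real facility:

```python
# A rough, illustrative estimate of how much heat a server room gives off.
# Every number below is an assumption made up for the example.
servers_per_rack = 40       # assumed rack density
racks = 500                 # assumed size of the server room
watts_per_server = 350      # assumed average draw per server under load

# Practically all of the electrical power a server draws ends up as heat.
heat_kw = servers_per_rack * racks * watts_per_server / 1000
print(f"~{heat_kw:,.0f} kW of heat")  # ~7,000 kW, i.e. about 7 MW

# For scale: a household space heater is roughly 1.5 kW,
# so this is like several thousand of them running at full blast.
print(f"equivalent to ~{heat_kw / 1.5:,.0f} space heaters")
```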

The problems with the heat

In a data center, most of the danger that heat poses is not that it will “melt” the servers, blow up the facility, or do anything dramatic like that, but that it will cause the servers to malfunction and shut down or restart. While a computer shutting down is an annoying but far from fatal problem for a regular user, in a data center setting that kind of behavior can have catastrophic consequences. Data centers host mission-critical applications for their clients and need to be available 24/7/365, so if a data center goes down, it’s not just its own problem; it’s all of its clients’ problem, too. Malfunctions are simply not an option, hence the effect of heat needs to be minimized.

There are a number of ways to evacuate server exhaust heat.

A typical data center today uses air as the cooling agent (just as a regular PC does) and aims to maintain an optimal working temperature inside the facility of between 20 and 25 degrees Celsius. To achieve this, the equipment inside the server room is arranged in aisles, half of them called “cold” aisles and the other half “hot” aisles. The idea is that cool air gets fed into the cold aisles from underneath the raised floor of the facility, passes through the servers, cooling them, and then gets pulled out into the hot aisles on the other side of the racks. From there it travels to the air handling units (AHUs), gets re-conditioned to the optimal temperature and humidity, and then the cycle repeats itself.
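
To get a feel for why the air handling side involves such heavy machinery, here is a hedged back-of-the-envelope estimate of how much air has to move to carry away a given heat load; the heat load and the temperature rise across the servers are assumed values, while the air properties are standard textbook figures:

```python
# How much air has to flow through the racks to carry the heat away?
# Sensible heat balance: Q = m_dot * cp * dT, so m_dot = Q / (cp * dT).
heat_load_w = 1_000_000   # 1 MW of server heat, assumed for the example
delta_t = 10.0            # assumed temperature rise of the air across the servers, in K
cp_air = 1005.0           # specific heat of air, J/(kg*K), standard value
rho_air = 1.2             # density of air, kg/m^3, standard value

mass_flow = heat_load_w / (cp_air * delta_t)  # kg of air per second
volume_flow = mass_flow / rho_air             # cubic metres of air per second
print(f"~{mass_flow:.0f} kg/s of air, ~{volume_flow:.0f} m^3/s")
# prints ~100 kg/s and ~83 m^3/s, on the order of 300,000 cubic metres of air per hour
```

In other words, even a modest facility has to circulate and re-condition enormous volumes of air around the clock.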

Although common and pretty straightforward, this approach is also responsible for the dirty name that data centers have earned for themselves in the last few years. The air is cooled by heat exchangers containing chilled water, and running the chillers that condition that water is a very energy-intensive undertaking. As a result, some data centers use as much energy for cooling servers as they do for actually running them, which cannot be good for the environment.

PUE

Here, a little remark is in order about the so-called power usage effectiveness (PUE). PUE compares the total energy consumption of a data center to the energy consumed by the computing equipment alone. So, a PUE of 2 would mean that a data center uses as much energy for performing computations as it does for all other functions, the biggest of which is typically cooling. A figure of 1.2 would be considered a good PUE, while 1.05 would be superb!
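
In code, the definition is a one-liner. Here is a minimal sketch, with energy figures invented purely to illustrate the arithmetic:

```python
# PUE = total facility energy / energy consumed by the IT equipment alone.
# The figures below are invented purely to illustrate the arithmetic.
it_energy_mwh = 10_000     # servers, storage and network gear
total_energy_mwh = 14_000  # the above plus cooling, UPS losses, lighting, offices

pue = total_energy_mwh / it_energy_mwh
print(f"PUE = {pue:.2f}")  # 1.40: for every 1 kWh of computing, 0.4 kWh of overhead
```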

In pursuit of an optimal PUE (which helps save both the environment and the operators’ money), we have seen a whole host of new approaches to cooling data centers emerge in the last few years.

Best contemporary practices in data center cooling

Among the noteworthy examples of efforts aimed at reducing data centers’ carbon footprint are Facebook’s latest data centers in Prineville, Oregon, and Lulea, Sweden. These facilities are located in environments cold enough to cool their servers with ambient outdoor air, augmented by evaporative cooling during warmer seasons. Although it still dumps exhaust heat into the atmosphere, this approach substantially cuts down carbon emissions and is thus much healthier than the typical approach described above.

Facebook is also in the vanguard of optimizing data center hardware, having founded the Open Compute Project, an organization dedicated to open-source server design with the goal of optimizing server performance. Founded just a few years ago, it has by now been joined by a number of major players in the data center ecosystem, such as AT&T and Equinix.

There are also efforts to cool data centers using liquids, but so far this approach has mostly been limited to supercomputers and data centers tailor-made for very specific applications, such as bitcoin mining. Approaches vary from immersion cooling to routing water to specific elements inside the server chassis. Although roughly 4,000 times more effective than air, liquid cooling is complicated, and the very words “liquid” and “electronics” do not mesh well, resulting in a phenomenon similar to “range anxiety” (the phrase used for customers hesitating to buy electric vehicles): a concern rooted more in fear than in fact. In any case, the benefits of liquid cooling are hard to dispute, and we hope the industry will warm up to it soon enough.
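
One way to see where a figure of that order of magnitude comes from is to compare how much heat a given volume of water can carry with how much the same volume of air can. The sketch below uses standard textbook properties and deliberately ignores practical details such as pumping power:

```python
# Volumetric heat capacity: how much heat 1 m^3 of coolant absorbs per degree of warming,
# using standard room-temperature values (density * specific heat).
water = 1000.0 * 4186.0  # ~4.19e6 J/(m^3*K)
air = 1.2 * 1005.0       # ~1.2e3 J/(m^3*K)

print(f"water carries ~{water / air:,.0f}x more heat per unit volume than air")
# prints ~3,471x, the same order of magnitude as the "4,000 times" figure above
```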

On the other hand, rather than merely reducing exhaust heat, some companies aim to capitalize on it. Examples include Yandex’s data center in Finland, which will provide heating to a nearby village; the water-cooled IBM Aquasar supercomputer, which heats buildings on ETH Zurich’s campus; and tiny home-heating servers by the French company Qarnot and the Dutch company Nerdalize, among others.

Although a good remedy, heat reuse is not the ultimate answer to the problem of server exhaust heat; the best solution is to get rid of exhaust heat altogether. We believe that true solution can only come from optimizing the work of the tiny pieces that generate that heat in the first place.

For now, though, we can get creative with that heat and have fun with it!


Did you find this article helpful? If so, we would appreciate it if you pressed the tiny heart symbol below or shared it with others.

Also, in case you didn’t know, last week Project Rhizome had the honor of being featured on DatacenterKnowledge — a leading online source of daily news and analysis about the data center industry! Check out the article about us here!

And don’t forget to stay in touch with us via our website, Facebook or Twitter!
