VK Data Center
Everyone knows that the heart of VK is the Singer House on Nevsky Prospect. Now you will learn where its brain, the ICVA data center, is located and how it works.
How is the data center built?
A data center is a facility for storing and processing data: a collection of several infrastructure systems that provide reliability and fault tolerance for the servers and network equipment.
It is impossible to simply install a bunch of servers and switches; you must create and maintain optimal operating conditions for them. To build your own data center, you need:
- Power supply system: this much is clear. Servers consume a lot of electricity and there are many of them, so an ordinary 220 V socket will most likely not be enough.
- Cooling system: even a gaming graphics card in a personal computer needs a powerful cooler, let alone hundreds and thousands of high-performance devices.
- Structured cabling system (SCS): something must connect all the elements together. You will need numerous cables and a passionate love of the process.
These are the basic “life support” systems, the essential minimum needed to just launch the equipment. But for a real high-end data center, more is necessary:
- Fire extinguishing system: it is important that an accidental spark does not turn our new data center into ruins.
- Monitoring system: if something goes wrong, engineers should be notified promptly.
- Access control system (ACS): the data center should not be accessible to everyone.
- Security alarm: in case someone decides to use a crowbar instead of the proper ID.
- Video surveillance system.
Together, all of the above makes an excellent data center, and that is exactly what the VK data center is.
Welcome to the ICVA
What is the ICVA, and what does the name mean? The ICVA is a research center for high-voltage instrumentation that used to be located in the data center building and worked for the benefit of the energy industry. In honor of that legacy, the data center retains a dystopian hangar style, with ceilings at the level of a fifth floor and mysterious rooms with meter-thick walls.
In the four server rooms there are 640 racks, which house more than 20,000 servers and 200 switches, routers and DWDM systems with a total capacity of more than 4 Tb/s. Also located here is an ASR9000 router with serial number 1, which marks it as the first commercial installation of such a device anywhere in the world.
At its peak, the data center generates more than 1 Tb/s of external traffic. More than 10 of the largest international providers and traffic exchange points, as well as 40 large operators within the Russian Federation, are connected to our DWDM systems.
All elements of the power supply system are redundant to at least N+1.
Literally in front of the data center building is the "Vostochnaya" substation, which supplies power to the data center through two 6 kV inputs. Thanks to the distribution boards and automatic transfer switching, power is supplied via two independent inputs. Here is how the blueprint looks (for simplicity's sake, only one of the four server rooms is sketched):
Each node is duplicated and normally operates at half capacity. In the event of an accident, power will still reach the server room, bypassing the fault. For example, if we lose one 6 kV input:
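The half-load principle described above can be sketched as a simple capacity check. All figures here are hypothetical, chosen only to illustrate why each input normally runs at half capacity:

```python
# Sketch of the N+1 half-load principle: two independent inputs,
# each normally carrying half of the load, and either one able to
# carry the full load alone if the other fails.
# TOTAL_LOAD_KW and INPUT_CAPACITY_KW are made-up numbers.

TOTAL_LOAD_KW = 800          # total load of one server room, kW
INPUT_CAPACITY_KW = 1000     # capacity of a single 6 kV input, kW

def load_per_input(active_inputs: int) -> float:
    """Load carried by each surviving input, in kW."""
    return TOTAL_LOAD_KW / active_inputs

# Normal operation: both inputs online, each at half load.
normal = load_per_input(2)       # 400.0 kW per input

# One input lost: the remaining input must carry everything.
degraded = load_per_input(1)     # 800.0 kW per input

# N+1 holds as long as a single input can cover the full load.
assert degraded <= INPUT_CAPACITY_KW
```

The same check generalizes to any duplicated node in the diagram: the surviving half must always be sized for the whole load.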
If things get really bad, an uninterruptible power supply network takes over. Its task is to power the server rooms temporarily while the diesel generators are primed and started.
Diesel generator sets keep the data center running during prolonged accidents or scheduled maintenance on the power supply system. In addition to the fuel tanks, a high-volume automatic filling station was installed. Fuel from the tank is fed automatically to all diesel generators, and the reserve is calculated for approximately 24 hours. If necessary, a tanker truck with diesel fuel can arrive within two hours.
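The 24-hour reserve mentioned above is simple arithmetic: tank volume divided by the total consumption of the running generators. The tank volume, generator count and per-generator consumption below are invented for illustration; only the 24-hour target comes from the text:

```python
# Hypothetical fuel-reserve estimate for the diesel generator sets.
# None of these figures are the data center's real numbers.

TANK_VOLUME_L = 28_800       # fuel tank volume, litres (assumed)
GENERATORS = 4               # number of running generators (assumed)
CONSUMPTION_L_PER_H = 300    # consumption per generator, l/h (assumed)

runtime_hours = TANK_VOLUME_L / (GENERATORS * CONSUMPTION_L_PER_H)
print(f"Autonomy: {runtime_hours:.0f} h")  # → Autonomy: 24 h
```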
Each server and each switch is connected to two power inputs; manufacturers usually provide this option in modern equipment. For servers with only one power input, the input is duplicated with the help of this device:
For equipment to work efficiently, a certain temperature range must be maintained in the server rooms. That is why companies around the world are increasingly building their data centers somewhere near the polar circle: in such conditions, the outside air can be used to cool the servers. This is called "free cooling", and the approach is rightly considered the most energy-efficient (why waste energy cooling warm air if you can simply use cold air?).
We also use free cooling, but with some reservations. Despite the legendary coolness of St. Petersburg, in summer the air temperature can rise above the ideal 20–25 degrees, necessitating additional cooling. In winter, on the contrary, the air is too cold to be used directly. Moreover, servers can not only be overcooled: a sharp change in temperature shifts the dew point, causing condensation. And finally, the air comes in from the street, which means it must be filtered.
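The dew-point effect mentioned above can be estimated with the Magnus approximation. This is a textbook formula, not the data center's actual control logic, and the sample readings are hypothetical:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Dew point via the Magnus approximation (roughly valid 0..60 C)."""
    a, b = 17.62, 243.12  # Magnus coefficients over water
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Room air at 25 C and 60% relative humidity: any surface colder
# than about 16.7 C will collect condensation, which is why intake
# air cannot simply be warmed without controlling humidity.
print(round(dew_point_c(25.0, 60.0), 1))  # → 16.7
```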
Free cooling is used in one server room, while the other three use a cooling system of the classical design, built around precision air conditioners.
Cold air from the mixing chamber or air conditioner is fed into the so-called "cold aisle" via a raised floor or duct. This corridor is an isolated space between two rows of racks and looks something like this:
On the opposite side, the expelled hot air enters the "hot aisle", from where it is cooled with freon by indoor air-conditioning units. This way, clean, dust-free air circulates inside the server room.
Structured cabling system
Kilometers of carefully laid wires. Nothing more to say.
Fire extinguishing system
The VK data center uses a gas fire extinguishing system. The gas (khladon, a freon-type agent) is stored in pressurized cylinders. In the event of a fire, a signal from a sensor in the server room activates a valve, and the gas is delivered through pipes straight to the source of the fire.
Monitoring system
All data center status indicators are tracked in real time: temperature (from equipment and room sensors), power supply, and load on the network equipment. All of this data is displayed for the attendants and is also checked automatically. If something goes wrong, the monitoring system itself sends engineers a message about the problem (via VK and SMS).
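A threshold check of this kind can be sketched as follows. The sensor names, thresholds and the notify() stub are all made up; the real system sends messages via VK and SMS:

```python
# Minimal sketch of a monitoring threshold check.
# Sensors, ranges and the notification stub are hypothetical.

THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),    # allowed room temperature range
    "input_voltage_v": (5700, 6300), # allowed range around 6 kV
}

def notify(message: str) -> None:
    # Stub: the real system would send a VK message and an SMS.
    print("ALERT:", message)

def check(readings: dict) -> list:
    """Return (and report) every reading outside its allowed range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not low <= value <= high:
            alerts.append(f"{sensor}={value} outside [{low}, {high}]")
    for alert in alerts:
        notify(alert)
    return alerts

check({"inlet_temp_c": 29.5, "input_voltage_v": 6000})  # temp alert fires
```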
Access control system and security
Only employees can enter the premises, and all doors are equipped with an electronic lock and an access-card reader. The ICVA is guarded 24/7, and there is video surveillance in every room.
Summing up
The ICVA is very well located: just a few kilometers from VK's home city and next to a reliable source of electricity.
There is an ongoing process of upgrading equipment and improving energy efficiency. PUE (Power Usage Effectiveness), the energy efficiency factor, is a key indicator for assessing a data center. It is calculated as the ratio of all the energy consumed by the data center to the energy consumed by the servers and network equipment alone. As the definition makes clear, an ideal data center in a vacuum would have a PUE of 1.0. The ICVA is not an ideal data center in a vacuum, but we are systematically working to reduce this indicator.
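The PUE calculation described above is just a ratio. With hypothetical meter readings (the kWh figures are invented for illustration):

```python
# PUE = total facility energy / IT equipment energy.
# Both readings below are made-up example values.

facility_kwh = 1300.0  # everything: IT, cooling, lighting, losses
it_kwh = 1000.0        # servers and network equipment only

pue = facility_kwh / it_kwh
print(f"PUE = {pue:.2f}")  # → PUE = 1.30
```

The closer the facility total gets to the IT consumption alone, the closer PUE gets to the ideal 1.0.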
The team of VK's data center staff and network engineers does everything it can so that every day users can enjoy their favorite videos and view new photos of friends without having to think about the complicated machinery behind the scenes.
Any questions for the author can be posted in the official community of our technical blog.