Open Compute Project

A general introduction to the roles of IO transceivers and connectors in the industry, from server to switch thermal considerations

Wen Tsen Liao
Wen’s writing blog
5 min read · Aug 21, 2022


After the general introduction to IO applications on the internet (What (Thermal) is in High-Speed IO? Part 3), let’s take a closer look at the IO cooling details.

The Open Compute Project is a foundation organized by companies across a wide range of industries.

OCP projects and subgroups

There are two main IO applications in the Open Compute Project Foundation (OCP):

1. Server:
A network interface card from Intel/Nvidia (Mellanox) is usually installed at the rear of the server/storage. Although the IO density is lower than in a switch, the heatsink design is tough: it has to stay low in height while dissipating enough heat into air already warmed by the CPU and memory.

2. Switch:
Switches need high bandwidth to move data across the internet and over long distances, so the front bezel design is critical: it must cool densely packed IO while still bringing in enough total airflow to cool the ASICs from Broadcom/Marvell (Innovium/Inphi).

Server IO application at rear Network Interface Card | Switch IO applications in front

Server/Mezz — OpenCompute

Because the foundation is organized by companies across the industry, the yearly submissions are the best marketing venue for companies to demonstrate their latest product designs. Among the several subgroups, IO components appear mostly in server Network Interface Card contributions, which show how the thermal limits are being extended in lower but denser card form factors.

NIC in lower and denser design target | NIC rendering image

Although the current spec defines 3 form factors with 2 different card heights, growing heat dissipation needs are pushing against the thermal design and card height limits, such as the higher dissipation required for AEC applications at rack-level reach distances.

1. Existing OCP spec: lower than QSFP
SFF: 11.5mm
TSFF: 14.2mm

2. In discussion: QSFP height and above
H15.1: QSFP available
H17.8: OSFP RHS available, can accommodate 2 cards in 1U
H20.1: OSFP available

NIC height definition of existing form factor | New higher NIC proposals

The reason for the different heights is to accommodate IO in different form factors, which not only broadens compatibility but also eases the cooling design on the larger OSFP compared with the denser QSFP/QSFP-DD.
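To make the height-to-form-factor relationship concrete, here is a minimal sketch that encodes the card heights listed above and checks which pluggable cages fit. The per-cage clearance values are assumptions read off the list above, not numbers from the OCP spec.

```python
# Illustrative sketch: which NIC card heights can host which pluggable cages.
# Card heights come from the list above; the per-cage clearance values are
# assumed placeholders for illustration, not OCP spec figures.

NIC_HEIGHTS_MM = {
    "SFF": 11.5,    # existing OCP spec, lower than QSFP
    "TSFF": 14.2,   # existing OCP spec, lower than QSFP
    "H15.1": 15.1,  # in discussion, QSFP available
    "H17.8": 17.8,  # in discussion, OSFP RHS available
    "H20.1": 20.1,  # in discussion, OSFP available
}

# Assumed minimum card height needed to clear each cage type (placeholders).
CAGE_MIN_HEIGHT_MM = {
    "QSFP": 15.1,
    "OSFP-RHS": 17.8,
    "OSFP": 20.1,
}

def compatible_cages(form_factor: str) -> list[str]:
    """Return the pluggable cage types that fit a given NIC form factor."""
    height = NIC_HEIGHTS_MM[form_factor]
    return [cage for cage, h in CAGE_MIN_HEIGHT_MM.items() if height >= h]

for ff, h in NIC_HEIGHTS_MM.items():
    fits = compatible_cages(ff)
    print(f"{ff} ({h} mm): {', '.join(fits) if fits else 'below QSFP-class cages'}")
```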

The overall comparison and considerations in NIC card height definition and IO cooling are shown below, from the original 3 form factors to the new proposals that fit within a 1U envelope.

Detailed illustrations for NIC form factors

To evaluate thermal designs, correlated thermal models allow the simulation analysis to show the required flow rate for two different flow intake conditions (a rough flow-balance sketch follows this list):

1. Cold aisle:
air drawn from the free ambient of the data center

2. Hot aisle:
air heated by the exhaust of the upstream ASICs
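As a rough illustration of what “required flow rate” means, the sketch below applies the standard air-cooling energy balance, flow = power / (density × specific heat × allowed temperature rise). The card power and the allowed temperature rises are made-up example numbers, not values from the correlated OCP models.

```python
# Minimal air-cooling energy balance: volumetric flow needed to carry away a
# given heat load for an allowed air temperature rise. The example numbers are
# assumptions for illustration, not OCP model results.

RHO_AIR = 1.16   # kg/m^3, air density near sea level at ~30 C
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def required_flow_cfm(power_w: float, delta_t_c: float) -> float:
    """Volumetric flow rate (CFM) to absorb power_w with a delta_t_c air rise."""
    flow_m3_per_s = power_w / (RHO_AIR * CP_AIR * delta_t_c)
    return flow_m3_per_s * 2118.88  # m^3/s -> cubic feet per minute

card_power = 25.0  # W, assumed NIC card power for illustration

# Cold aisle: cooler inlet leaves room for a larger air temperature rise.
print(f"Cold aisle (15 C rise allowed): {required_flow_cfm(card_power, 15.0):.1f} CFM")
# Hot aisle: pre-heated inlet leaves less margin, so more flow is needed.
print(f"Hot aisle  (5 C rise allowed):  {required_flow_cfm(card_power, 5.0):.1f} CFM")
```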

To evaluate the NIC card, there are two primary components for thermal monitoring:

1. IO connectors: typically 3.5~12W

2. ASIC: the larger the supportable power, the better the NIC card’s cooling capacity

The spec assumes a 3.5W IO connector, which poses no thermal issue, so the ASIC TDP is the critical outcome of the evaluation. Typical evaluations are (a first-order sketch follows the figure below):

1. Maximum TDP for different fan flow rates at different inlet temperatures

2. Maximum TDP for cold vs. hot aisle intake at different inlet temperatures

Cold/Hot aisle inlet / Maximum TDP simulation analysis
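A simple way to picture these evaluations is a first-order resistance model, max TDP = (T_case_max − T_inlet) / R(flow). The case temperature limit and the flow-dependent resistance curve below are illustrative assumptions, not the correlated thermal model from the contribution.

```python
# First-order sketch of "maximum TDP vs. flow rate and inlet temperature".
# The case temperature limit and the flow-dependent thermal resistance are
# assumed illustrative values, not the correlated OCP thermal model.

T_CASE_MAX_C = 105.0  # assumed ASIC case temperature limit

def thermal_resistance_c_per_w(flow_cfm: float) -> float:
    """Assumed heatsink resistance curve: resistance drops as flow increases."""
    return 0.5 + 3.0 / flow_cfm  # C/W, placeholder fit

def max_tdp_w(flow_cfm: float, inlet_c: float) -> float:
    """Largest ASIC power that keeps the case under its temperature limit."""
    return (T_CASE_MAX_C - inlet_c) / thermal_resistance_c_per_w(flow_cfm)

for inlet in (35.0, 55.0):          # cold-aisle-like vs. hot-aisle-like inlet
    for flow in (5.0, 10.0, 20.0):  # CFM through the card
        print(f"inlet {inlet:.0f} C, {flow:.0f} CFM -> max TDP {max_tdp_w(flow, inlet):.1f} W")
```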

In general, the hot aisle is much more difficult for thermal design, not only because of the heated inlet air but also because the rear-to-front flow distribution spreads the air out rather than concentrating it on the IO. A typical IO heatsink design is shown below, with extra side and rear fins to “guide” the air.

In this article, we introduced the roles high-speed IO plays in the industry:

Server/storage:
The NIC requires a lower card height to accommodate more card slots in the system, but the thermal challenge is to qualify the IO design against the feasible working conditions, with hot air arriving from the CPU/memory hot aisle upstream.

Switch:
The front panel provides a large thermal dissipation capability to cool high-power transceivers for long reach distances. The challenge is pulling a large amount of heat out through a pluggable transceiver’s surface.

In terms of market dominance, switch applications take the major share. IO connector companies therefore aim to develop different products that support ever-higher heat extraction in ganged/stacked cages.

Among these thermal developments, the single-side stacked cage is the most secretive design, since its cooling is hidden inside the stacked cage, unlike a ganged cage that shows everything explicitly. The bottom port of the stacked cage is the secret recipe each vendor develops in house, while the ganged cage, whether belly-to-belly or on a NIC card, faces a different thermal challenge in its lower height limit.

Stacked cage with inner heatsink (top) | Ganged cage with lower height heatsink (bottom)

As the pictures above show, the heatsink inside a stacked cage not only has its flow limited by the connector and the EMI cage opening, but its extra air impedance also drives airflow to bypass it through the rest of the front bezel, where the flow rate into system cooling is larger. As a result, the bottom-port thermal issue is receiving more and more attention in today’s applications.
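One way to see why the bottom port struggles is to treat the stacked-cage heatsink and the rest of the front bezel as parallel flow paths: the higher-impedance path through the cage receives proportionally less of the total flow. The impedance coefficients below are assumptions for illustration only.

```python
# Parallel flow-path sketch: air through the stacked-cage inner heatsink vs.
# air bypassing it through the rest of the front bezel. For a shared pressure
# drop dP = K * Q^2, each path's flow scales with 1/sqrt(K). The impedance
# coefficients are assumptions for illustration, not measured values.

import math

K_CAGE_PATH = 9.0    # assumed impedance of the cage/EMI-opening/heatsink path
K_BYPASS_PATH = 1.0  # assumed impedance of the surrounding bezel openings

def flow_split(total_flow: float) -> tuple[float, float]:
    """Split a total flow between two parallel paths with quadratic losses."""
    w_cage = 1.0 / math.sqrt(K_CAGE_PATH)
    w_bypass = 1.0 / math.sqrt(K_BYPASS_PATH)
    cage = total_flow * w_cage / (w_cage + w_bypass)
    return cage, total_flow - cage

cage_flow, bypass_flow = flow_split(100.0)  # arbitrary total of 100 flow units
print(f"Through stacked-cage heatsink: {cage_flow:.1f}")
print(f"Bypassing through the bezel:   {bypass_flow:.1f}")
```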

Next, after this general OCP introduction to high-speed IO applications in the industry, we will look into the IO MSA details (QSFP-DD/QSFP MSA: MSA spec breakdowns from components) to understand the key information within the components.
