The Slowest Mother of All Raids!

Do you know how many drives can be in a linux-raid device? It’s 1920!

George Shuklin
OpsOps
2 min read · Jan 28, 2021


I found that linux-raid (the one many people call mdadm, after the name of its management utility) is, in fact, rather slow for fast devices. And the slowness is not constant (which could be tolerated), but progressive: the more devices you have in your raid, the slower it gets.

I would be happy to say it’s a beast. But no, it’s a flaccid disaster.

This thing in the picture is a crazy RAID0 made out of 1920 RAM disks (that's the maximum number of devices linux-raid can support, as I found).

A block RAM disk in Linux is made by the brd module, and it's moderately fast. Depending on your memory and CPU you can get about 190–250k IOPS from it.

But if you put 1920 of them into a single raid0, it yields a whopping 1500 IOPS. Yep, without the 'k'. (Or 1.5k IOPS, if you prefer the 'kIOPS' notation.)
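Numbers like these come from 4k random reads; a sketch of the kind of fio run involved (the iodepth and numjobs here are example values, not the exact ones from my test):

```bash
# Sketch: measure 4k random-read IOPS on the array.
# iodepth/numjobs are example values, not the exact ones used for the numbers above.
fio --name=randread --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
    --runtime=30 --time_based
```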

I found this while investigating why NVMe drives are so slow in raid. Each DC-grade NVMe is capable of yielding results on par with brd (even better, actually), but ten of them together just… suck.

Turns out, it has nothing to do with NVMe; it's entirely a linux-raid issue.

I've reported it to the linux-raid mailing list and am curious to see the answers.

Meanwhile, if you'd like to build this monstrosity yourself, enjoy:
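In short, something like this (a sketch; treat the exact rd_size value and the /dev/md0 name as examples):

```bash
#!/bin/bash
# Sketch: build a RAID0 out of NUM brd RAM disks.
NUM=1920        # number of RAM disks (1920 is the maximum linux-raid accepted)
rd_size=4096    # size of each RAM disk in kB (1920 x 4 MB ~= 7.5 GB of RAM)

# brd creates /dev/ram0 .. /dev/ram$((NUM-1))
modprobe brd rd_nr="$NUM" rd_size="$rd_size"

# Join them all into a single raid0
mdadm --create /dev/md0 --level=0 --raid-devices="$NUM" \
    $(seq -f '/dev/ram%g' 0 $((NUM - 1)))

# Cleanup afterwards: mdadm --stop /dev/md0 && rmmod brd
```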

Adjust NUM and rd_size (in kB) to taste (and to available memory).

It’s terrible, just terrible.

UPD: Turns out, LVM is less susceptible to this thrashing, and I can get a decent 300k IOPS from an LV spanning 1024 RAM disks.
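Roughly like this, assuming a plain linear LV (a sketch; the volume names are at taste):

```bash
# Sketch: a linear LV over 1024 brd RAM disks (names and sizes at taste).
modprobe brd rd_nr=1024 rd_size=4096
pvcreate /dev/ram{0..1023}
vgcreate ramvg /dev/ram{0..1023}
lvcreate -n ramlv -l 100%FREE ramvg
```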
