Leexgx
3 min read · Aug 11, 2020

RAID 5 and RAID 6 don't differ much in performance: CPUs are so fast that the individual disks are the limiting factor (RAID 6 has very slightly lower write performance, but nothing you would really notice).
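
To make the CPU-cost point concrete, here is a minimal Python sketch (not Synology's or mdadm's actual code) of the per-stripe parity work: RAID 5 computes one XOR parity block, and RAID 6 adds a second parity over GF(2^8), which is the only extra work it does per write. Real implementations do this with vectorised kernel code, which is why a modern CPU barely notices it.

```python
def raid5_parity(data_blocks: list[bytes]) -> bytes:
    """P parity: byte-wise XOR of every data block in the stripe."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)


def raid6_q_parity(data_blocks: list[bytes]) -> bytes:
    """Q parity (the extra one RAID 6 adds): a weighted sum over GF(2^8),
    using the common generator polynomial 0x11d and Horner's scheme."""
    def gf_mul2(x: int) -> int:               # multiply by 2 in GF(2^8)
        x <<= 1
        return (x ^ 0x11d) & 0xff if x & 0x100 else x

    q = bytearray(len(data_blocks[0]))
    for block in reversed(data_blocks):       # q = 2*q + d, highest index first
        for i, b in enumerate(block):
            q[i] = gf_mul2(q[i]) ^ b
    return bytes(q)


# One stripe with four 4 KiB data chunks, just to exercise both functions.
stripe = [bytes([d]) * 4096 for d in (1, 2, 3, 4)]
p, q = raid5_parity(stripe), raid6_q_parity(stripe)
```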

BTRFS data integrity only works correctly if the underlying array still has redundancy: if you've already lost a disk on RAID 5 and then hit a data error, it can't do anything about it because there is nothing left to repair from. The underlying redundancy of RAID 5/6 or SHR1/SHR2 assures that integrity anyway, so it isn't really needed on BTRFS, and the checksumming is quite CPU-heavy on a unit with an Atom-class CPU; with the data integrity option enabled, file transfers take a far bigger hit than they ever would from using RAID 6/SHR2.
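
As an illustration of why checksums need redundancy underneath (a simplified sketch, not Synology's actual code path; CRC32 here just stands in for btrfs' crc32c), the repair decision looks roughly like this:

```python
import zlib

def read_with_verify(block: bytes, stored_crc: int, rebuild_from_parity):
    """rebuild_from_parity is a callable that returns the block reconstructed
    from the remaining disks + parity, or None when the array is already
    degraded (e.g. RAID 5 with a dead disk) and has nothing left to repair from."""
    if zlib.crc32(block) == stored_crc:
        return block                  # checksum matches, data is fine
    repaired = rebuild_from_parity()
    if repaired is not None and zlib.crc32(repaired) == stored_crc:
        return repaired               # silent corruption healed using redundancy
    raise IOError("checksum mismatch and no redundancy left to repair from")


good = b"hello world"
crc = zlib.crc32(good)
print(read_with_verify(b"hellO world", crc, lambda: good))   # healed from parity
# read_with_verify(b"hellO world", crc, lambda: None)  # degraded -> raises IOError
```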

To go to SHR2 you only need to add another disk and then upgrade the current SHR1 array to SHR2 (a rebuild on SHR1 or SHR2 with 10TB HDDs probably takes over 2 days; the time comes from the size of each disk, not from having 2 parity disks).
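
The "over 2 days" figure follows from simple arithmetic; the effective rebuild rate below is an assumption (rebuilds share the disks with normal use), and the point is that the time scales with disk size, not with the number of parity disks:

```python
# Back-of-the-envelope rebuild time for one 10 TB member disk.
disk_tb = 10
effective_mb_per_s = 50                      # assumed average rebuild rate
seconds = disk_tb * 1e12 / (effective_mb_per_s * 1e6)
print(f"{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
# -> 56 hours (~2.3 days), which lines up with "over 2 days" per disk
```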

The main issue with RAID 5/SHR1 is that once a disk fails or you replace one, all your data is vulnerable to a second error (a data error or a second disk failure), which would destroy the array. With SHR2, secondary errors won't normally stop the rebuild process; it just fixes them silently (unless a second disk fails outright, in which case it may ask you to do something, but the array would still be up).
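
A toy way to state that difference (my own simplification, not anything Synology exposes): the redundancy left to absorb further errors during a rebuild is the parity count minus the disks already lost.

```python
def errors_survivable_during_rebuild(parity_disks: int, failed_disks: int) -> int:
    """How many additional bad reads/failures the rebuild can still tolerate."""
    return max(parity_disks - failed_disks, 0)

print(errors_survivable_during_rebuild(parity_disks=1, failed_disks=1))  # RAID 5/SHR1 -> 0
print(errors_survivable_during_rebuild(parity_disks=2, failed_disks=1))  # RAID 6/SHR2 -> 1
```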

I'm unsure how Synology handles data errors when rebuilding with no redundancy left in RAID 5/SHR1 (it might just crash the array, or it might keep it up but leave you unable to rebuild onto new disks, so that you can at least recover the data).

By comparison, my Dell RAID controllers do a thing called a hole punch when redundancy is lost: the array stays up, no longer redundant and with some data missing, but still accessible so you can get the data off (if you lose another disk then yes, the array will fail). To fix it afterwards you have to destroy the array, rotate the disks by one bay, and then recreate the RAID array.

RAID 10 is the fastest for high IOPS, but every configuration gives up 50% of the space since it's RAID 1+0 (generally you only use it with 4 disks, or when you need a high-speed array for high IOPS). RAID 10 also has no checksum-style data integrity; it just compares what's on both sides of the mirror. With RAID 6 you only ever lose 2 disks' worth of space, which is more ideal with 5-6 disks and gives you the option later to add more disks one at a time and expand the array.
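
A quick way to see the space trade-off (assuming equal-size 10TB disks):

```python
# Usable space vs. disk count for RAID 10 (mirror pairs) and RAID 6
# (two parity disks' worth of capacity lost), with equal-size disks.
def usable_raid10(n_disks: int, disk_tb: float) -> float:
    return (n_disks // 2) * disk_tb          # 50% regardless of disk count

def usable_raid6(n_disks: int, disk_tb: float) -> float:
    return (n_disks - 2) * disk_tb           # always lose two disks' worth

for n in (4, 6, 8):
    print(n, usable_raid10(n, 10), usable_raid6(n, 10))
# 4 disks: 20 vs 20 TB; 6 disks: 30 vs 40 TB; 8 disks: 40 vs 60 TB
```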

With 10TB HDDs there is a far higher chance of a double fault happening, because rebuild times are very long, so you should not use anything lower than RAID 6/SHR2 (the more data is stored the longer it takes, since an empty array rebuilds faster than one filled with actual data rather than just zeros). Above 2TB disks, or with lots of disks, RAID 6 is what should be used; a lot of people just use SHR1 or RAID 5 because they think RAID 6 is going to be slow and that RAID 5 will still protect them.
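
To put a number on the double-fault risk, here is the usual back-of-the-envelope estimate of hitting at least one unrecoverable read error (URE) while reading every surviving disk during a RAID 5 rebuild. The 1-in-10^14 and 1-in-10^15 bit figures are the typical consumer/NAS drive spec values and real drives vary, so treat it as an illustration of scale rather than a prediction:

```python
def p_read_error_during_rebuild(n_surviving: int, disk_tb: float, ure_per_bit: float) -> float:
    """Chance of at least one URE while reading all surviving disks end to end."""
    bits_to_read = n_surviving * disk_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_to_read

# 6 x 10 TB array, one disk failed, five must be read in full to rebuild:
print(f"{p_read_error_during_rebuild(5, 10, 1e-14):.0%}")   # ~98% with 1e14-class disks
print(f"{p_read_error_during_rebuild(5, 10, 1e-15):.0%}")   # ~33% with 1e15-class disks
```

With RAID 6/SHR2 that single read error is absorbed by the remaining parity instead of being fatal, which is the whole argument.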

Losing 3 disks in RAID 6 is very unlikely, and even if there is a data error while rebuilding from one failed disk, the RAID can sort it out in the background (my RAID 6 array did just that a few days ago: it didn't fail the disk, it corrected 6 data errors on a Seagate disk and carried on; it's a consumer HDD being used for testing only).

With RAID 5 you have zero redundancy while replacing a disk, so if a data error happens it can't use parity to correct the issue in the background; the array can simply fail and would have to be rebuilt from scratch to fix the problem.
