It’s pretty similar actually. With spinning disks, sequential access is limited by how fast data passes under the head (the media transfer rate), often around 200MB/sec these days. Random access is limited by the seek rate (around 100 seeks per second, i.e. ~10ms each, give or take) in the worst case. It depends both on how far you seek (longer seeks take longer) and how much data you read each time: randomly reading 1 sector per seek is much worse than randomly reading 1000 sectors per seek.
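As a back-of-envelope sketch using the ballpark numbers above (the 512-byte sector size is an assumption, not from the original comment):

```python
# Rough disk throughput math, using the approximate numbers above.
SEQUENTIAL_BPS = 200 * 10**6   # media transfer rate, ~200MB/sec
SEEKS_PER_SEC = 100            # worst-case random seeks, ~10ms each
SECTOR_BYTES = 512             # assumed classic sector size

# Randomly reading 1 sector per seek: seek time dominates completely.
one_sector_bps = SEEKS_PER_SEC * SECTOR_BYTES                 # ~50KB/sec
# Randomly reading 1000 sectors per seek: transfer starts to matter.
thousand_sector_bps = SEEKS_PER_SEC * 1000 * SECTOR_BYTES     # ~51MB/sec

print(one_sector_bps)       # 51200 bytes/sec, ~4000x slower than sequential
print(thousand_sector_bps)  # 51200000 bytes/sec, within ~4x of sequential
```

So the same 100 seeks/sec budget gives you anywhere from tens of KB/sec to tens of MB/sec depending on how much you read per seek.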
For sequential accesses, prefetch generally leaves you limited by bandwidth (25 to 35GB/sec or so on common desktops). Random accesses, however, take 70–100ns to get a response. Common desktops have 2 memory channels (unless you have a quad channel top of the line Intel chip or a Threadripper), so you get roughly two cache lines per 70ns. Doing the math, that drops you from 25GB/sec down to around 28M accesses per second. Each access pulls in a full cache line (usually 64 bytes), so the raw bandwidth is under 2GB/sec, and if you only actually need 8 bytes out of each line, the useful bandwidth is more like 200MB/sec.
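The arithmetic above, spelled out (70ns latency, 2 channels, and 64-byte lines are the assumptions stated in the text; the 8 useful bytes per line is the "only need a fraction" case):

```python
# Back-of-envelope throughput for dependent random accesses to main memory.
LATENCY_NS = 70    # round-trip time for one random access
CHANNELS = 2       # memory channels on a common desktop
CACHE_LINE = 64    # bytes fetched per access on typical x86
USEFUL = 8         # assumption: only one 8-byte value per line is needed

accesses_per_sec = CHANNELS * (1e9 / LATENCY_NS)  # ~28.6M accesses/sec
raw_bw = accesses_per_sec * CACHE_LINE            # ~1.8GB/sec moved
useful_bw = accesses_per_sec * USEFUL             # ~230MB/sec actually used

print(f"{accesses_per_sec/1e6:.1f}M accesses/sec")
print(f"raw: {raw_bw/1e9:.2f}GB/sec, useful: {useful_bw/1e6:.0f}MB/sec")
```

Compare that useful figure against 25GB/sec for streaming reads and the two-orders-of-magnitude gap falls out directly.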
Prefetch units can be smart enough to detect things like strided accesses (say, reading a column of a row-major array), but if you are doing pointer lookups over an amount of data that doesn’t fit in cache, you have to go all the way to main memory every time. There are other effects related to the TLB, pages, and which DRAM rows are currently open, but the largest effects are the ones mentioned above. So accessing sequentially vs randomly can easily be a factor of 100 slower.
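A minimal sketch of that pointer-chasing pattern, where each load depends on the previous one so nothing can be prefetched or overlapped (array size is illustrative; note that Python interpreter overhead hides most of the gap you’d see from an equivalent C loop):

```python
import random
import time

def make_chain(n, shuffled):
    # next_idx[i] holds the index of the next element to visit,
    # forming a single cycle over all n slots.
    order = list(range(n))
    if shuffled:
        random.shuffle(order)
    next_idx = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        next_idx[a] = b
    return next_idx

def chase(next_idx, n):
    # Each lookup depends on the result of the previous one, so the
    # hardware prefetcher cannot run ahead of the loads.
    i = 0
    for _ in range(n):
        i = next_idx[i]
    return i

n = 1 << 20  # ~8MB of list slots: larger than typical L2, stresses cache
seq = make_chain(n, shuffled=False)
rnd = make_chain(n, shuffled=True)

t0 = time.perf_counter()
chase(seq, n)
t1 = time.perf_counter()
chase(rnd, n)
t2 = time.perf_counter()
print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s")
```

Both traversals do exactly n dependent lookups; only the memory access pattern differs, which is the whole point of the comparison.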