The myth of O(1) random access

EventHelix · Published in Software Design · Aug 28, 2017 · 2 min read

In this series of articles, Emil Ernerfeldt argues that for large data structures there is no such thing as O(1) random access: the processor's cache hierarchy makes random access behave closer to O(√N).
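The actual benchmarking code is linked from Emil's article; the sketch below is only a minimal pointer-chasing experiment one could use to reproduce the trend, with array sizes and iteration counts chosen arbitrarily. It chases a single-cycle random permutation so that every load depends on the previous one, which keeps the prefetcher from hiding the latency.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937_64 rng(42);

    // Working-set sizes from ~32 KiB (fits in L1) up to ~512 MiB (far beyond L3).
    for (std::size_t n = std::size_t{1} << 12; n <= (std::size_t{1} << 26); n <<= 2) {
        // Sattolo's algorithm: build a permutation that is a single cycle,
        // so the chase visits every slot before repeating.
        std::vector<std::uint64_t> next(n);
        std::iota(next.begin(), next.end(), 0);
        for (std::size_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<std::size_t> dist(0, i - 1);
            std::swap(next[i], next[dist(rng)]);
        }

        const std::size_t steps = 10'000'000;
        std::uint64_t idx = 0;

        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < steps; ++i) {
            idx = next[idx];  // each load lands on an unpredictable cache line
        }
        auto stop = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(stop - start).count() / steps;
        std::cout << "N = " << (n * sizeof(std::uint64_t)) / 1024 << " KiB: "
                  << ns << " ns/access (idx=" << idx << ")\n";  // print idx so the loop isn't optimized away
    }
}
```

On typical hardware the per-access time stays flat while the working set fits in cache, then climbs as the array outgrows each cache level, which is the effect Emil measures.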

In the next article, Emil provides a theoretical argument for why we see O(√N). He uses the example of a library to illustrate his point:

In general, the amount of books N that fits in a library is proportional to the square of the radius r of the library, and we write N ∝ r². And since the time T it takes to retrieve a book is proportional to r we can write N ∝ T² or T ∝ √N or T = O(√N).
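Written out as a chain of proportionalities (a restatement of the quoted argument, not taken verbatim from the article):

```latex
% N books fill a roughly circular floor plan of radius r;
% fetching a book means walking a distance proportional to r.
N \propto r^2
\;\Rightarrow\;
r \propto \sqrt{N}
\;\Rightarrow\;
T \propto r \propto \sqrt{N}
\;\Rightarrow\;
T = O(\sqrt{N})
```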

In the next article, Emil concludes that memory access patterns should be predictable: a predictable pattern lets the processor prefetch memory into the cache before the program touches it. By this reasoning, random access should be avoided, as it leads to poor cache performance.
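A tiny illustration of that point (my own example, not code from the article): summing the same array sequentially and in a shuffled order does identical work, but on an array much larger than the last-level cache the shuffled walk is typically several times slower because the prefetcher cannot guess where the next load will land.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Sum data[] in the order given by order[]; return elapsed milliseconds.
static double time_sum(const std::vector<std::uint64_t>& data,
                       const std::vector<std::uint32_t>& order) {
    auto start = std::chrono::steady_clock::now();
    std::uint64_t sum = 0;
    for (std::uint32_t i : order) sum += data[i];
    auto stop = std::chrono::steady_clock::now();
    std::cout << "  sum = " << sum << '\n';  // keep the work from being optimized away
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    const std::size_t n = std::size_t{1} << 25;  // 256 MiB of 64-bit values, well beyond any cache
    std::vector<std::uint64_t> data(n, 1);

    std::vector<std::uint32_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::cout << "sequential: " << time_sum(data, order) << " ms\n";

    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    std::cout << "random:     " << time_sum(data, order) << " ms\n";
}
```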

In this article, Emil clarifies some concepts. He has also added a link to the code used for benchmarking.
