The myth of O(1) random access

EventHelix
Aug 28, 2017 · 2 min read

In this series of articles, Emil Ernerfeldt argues that for large data structures there is no such thing as O(1) random access: once the working set outgrows the processor caches, the cost of a random memory access grows roughly as O(√N), where N is the amount of data being accessed.
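A minimal sketch of the kind of measurement that exposes this behaviour (not Emil's benchmark code, which he links from his final article): chase pointers through a random cycle and time the average access as the working set grows. The sizes, iteration count, and random seed below are arbitrary choices for the sketch.

```cpp
// Pointer-chasing micro-benchmark: average cost of one dependent random
// access as the working set grows. A sketch of the technique, not the
// article's actual benchmark code.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937_64 rng(42);

    // Working-set sizes from 1 K to 16 M elements (8 bytes each).
    for (std::size_t n = std::size_t(1) << 10; n <= (std::size_t(1) << 24); n <<= 2) {
        // Shuffle the indices, then link them into one big cycle so the
        // chase visits the whole working set in a cache-hostile order.
        std::vector<std::uint64_t> order(n);
        std::iota(order.begin(), order.end(), std::uint64_t{0});
        std::shuffle(order.begin(), order.end(), rng);

        std::vector<std::uint64_t> next(n);
        for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
        next[order[n - 1]] = order[0];

        const std::size_t steps = 10'000'000;
        std::uint64_t idx = 0;
        auto start = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < steps; ++i) {
            idx = next[idx];  // each load depends on the previous one
        }
        auto stop = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(stop - start).count();
        // Print idx so the compiler cannot optimise the loop away.
        std::printf("N = %10zu  ~%6.2f ns/access  (checksum %llu)\n",
                    n, ns / steps, static_cast<unsigned long long>(idx));
    }
}
```

Plotting the nanoseconds-per-access column against N is what produces the roughly √N-shaped curve the series is about.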

In the next article, Emil provides a theoretical argument for the O(√N) behaviour, illustrated with a library analogy: a library that holds N books must occupy an area proportional to N, so the walk to fetch an arbitrary book grows with the radius, roughly √N. Data spread across ever larger and ever more distant levels of the memory hierarchy behaves the same way.

In the next article, Emil concludes that memory accesses should be predictable: a predictable access pattern lets the processor prefetch data into the cache before the program needs it. By this reasoning, random access should be avoided, as it leads to poor cache performance.
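As a small illustration of that advice (again a sketch, not code from the article), consider summing the same array in sequential order and then in a shuffled order: the work per element is identical, but only the sequential pass gives the hardware prefetcher a predictable pattern to hide memory latency behind. The array size and timing details below are assumptions for the sketch.

```cpp
// Sequential vs. shuffled traversal of the same data. The arithmetic per
// element is identical; only the access pattern differs.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = std::size_t(1) << 24;  // ~128 MB of 64-bit values
    std::vector<std::uint64_t> data(n, 1);

    // Index orders: 0, 1, 2, ... versus a random permutation of the same indices.
    std::vector<std::uint32_t> seq(n);
    std::iota(seq.begin(), seq.end(), 0u);
    std::vector<std::uint32_t> rnd = seq;
    std::shuffle(rnd.begin(), rnd.end(), std::mt19937_64(1));

    auto run = [&](const std::vector<std::uint32_t>& order, const char* label) {
        auto start = std::chrono::steady_clock::now();
        std::uint64_t sum = 0;
        for (std::uint32_t i : order) sum += data[i];
        auto stop = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(stop - start).count();
        std::printf("%-10s %8.1f ms (sum = %llu)\n", label, ms,
                    static_cast<unsigned long long>(sum));
    };

    run(seq, "sequential");
    run(rnd, "shuffled");
}
```

On typical hardware the shuffled pass is several times slower, even though both loops touch exactly the same bytes.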

In the final article, Emil clarifies some of the concepts from the earlier posts and adds a link to the code used for benchmarking.
