The RAM Model of Computation and Big O Notation

Steven Deutsch
Mar 28, 2018


This post is a summary of my notes from The Algorithm Design Manual, sections 2.1 (The RAM Model of Computation) and 2.2 (Big O Notation). I had a difficult time grasping these concepts, so please correct me if I'm wrong. I am in no way an expert on this topic.

The RAM Model of Computation

The RAM (Random Access Machine) model of computation measures the run time of an algorithm by summing up the number of steps needed to execute the algorithm on a set of data. The RAM model operates by the following principles:

• Basic logical or arithmetic operations (+, *, =, if, call) are considered to be simple operations that take one time step.

• Loops and subroutines are complex operations composed of multiple time steps.

• All memory access takes exactly one time step.

This model encapsulates the core functionality of computers but does not mimic them completely. For example, an addition and a multiplication each count as a single time step, even though in reality a machine typically needs more work to compute a product than a sum.

The RAM model makes these assumptions because doing so strikes a balance between simplicity and faithfully imitating the underlying machine, resulting in a tool that is useful in practice.
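
To make the step counting concrete, here is a minimal sketch in Python (my own illustration, not code from the book) that tallies time steps for a simple summation loop under the rules above:

```python
def ram_steps_for_sum(data):
    """Sum a list while tallying RAM-model time steps.

    A rough illustration: each assignment, comparison, and addition is
    charged one step, and the loop's cost is just the sum of the steps
    of its iterations. The `steps` counter is bookkeeping and is not
    itself charged.
    """
    steps = 0
    total = 0
    steps += 1              # the assignment total = 0 costs one step
    for value in data:
        steps += 1          # one step for the loop test/advance
        total = total + value
        steps += 2          # one addition + one assignment
    return total, steps


# Roughly 3n + 1 steps for n items under this accounting.
print(ram_steps_for_sum([4, 8, 15, 16, 23, 42]))   # -> (108, 19)
```

Whether you charge two or three steps per iteration doesn't change the analysis much; what matters is that the total grows in proportion to the size of the input.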

Exact analysis of algorithms is a difficult task. By its nature, algorithm analysis is both machine and language independent: for example, if your computer becomes twice as fast after an upgrade, the complexity of your algorithm remains the same.

Complexity Bounding Functions

There are three bounding functions to consider when talking about the complexity of an algorithm: the upper bound, the lower bound, and the tight bound.

• Upper bound: f(n) = O(g(n))

• Lower bound: f(n) = Ω(g(n))

• Tight bound: f(n) = Θ(g(n))

These really confused me at first, so without getting too caught up in the notation, let's look at the definitions:

The upper bound, f(n) = O(g(n)), means that there exist a positive constant c and a value n₀ such that c · g(n) always lies on or above f(n) once n ≥ n₀.

The lower bound, f(n) = Ω(g(n)), means that there exist a positive constant c and a value n₀ such that c · g(n) always lies on or below f(n) once n ≥ n₀.

The tight bound, f(n) = Θ(g(n)), means that there exist two positive constants and a value n₀ such that one constant times g(n) bounds f(n) from above and the other bounds it from below once n ≥ n₀.

This means for an algorithm to be tightly bounded, f(n) = O(g(n)) and f(n) = Ω(g(n)) must both be true.

A part of these definitions that may be confusing is the value n₀. It represents a threshold: the bound only has to hold once n grows past this point. It's okay if the definition fails for relatively small inputs, because we are only concerned with the analysis of large data sets.
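
Written out formally, the standard textbook definitions behind these statements are (restated in my own notation):

```latex
% Standard definitions of the three bounds, restated for reference.
\begin{align*}
f(n) = O(g(n))      &\iff \exists\, c, n_0 > 0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c, n_0 > 0 \text{ such that } f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff \exists\, c_1, c_2, n_0 > 0 \text{ such that } c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \text{ for all } n \ge n_0
\end{align*}
```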

Let’s look at applying the upper bound to an example:

3n³ - 100 = O(n³)

Is the statement above true? Does a constant exist such that, multiplied by g(n) = n³, the result is always greater than or equal to the left side? In other words, what value of c makes c · n³ ≥ 3n³ - 100?

In this case, any constant c ≥ 3 works, since 3n³ ≥ 3n³ - 100 for every positive n. So the statement holds true, with c = 3 as the smallest such constant.
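
As a quick sanity check (a spot check I wrote, not a proof and not code from the book), you can compare f(n) = 3n³ - 100 against c · g(n) = 3n³ over a range of inputs:

```python
def f(n):
    return 3 * n**3 - 100

def g(n):
    return n**3

c = 3    # candidate constant for the upper bound
n0 = 1   # the bound happens to hold from n = 1 here, since subtracting 100 only helps

# Verify f(n) <= c * g(n) for a range of n >= n0 (a spot check, not a proof).
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("3n^3 - 100 <= 3n^3 holds for all sampled n >= 1")
```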

Worst Case Scenario

In algorithm analysis, we generally tend to focus on the worst-case scenario. This is why we always hear about the notorious "Big O" notation, which describes an upper bound on an algorithm's running time.

Why focus on the worst case?

Exact analysis is hard, so it is easier to think in extremes. The worst-case complexity is often the easiest to calculate and the most informative. Calculating average-case complexity is difficult because it requires knowing all the situations you will encounter. Calculating the best case can also be tricky: it requires you to define "best," and the best case is unlikely to occur in practice anyway.

For more information, I would pick up a copy of The Algorithm Design Manual and check out the lecture on this chapter here:
