Basic Data Structures and Time Complexity of their Methods

Part 1: Stacks, Queues, Graphs, and Sets

L. Meg Viar
6 min read · Mar 1, 2017

Data lies at the heart of most programming projects. The ways in which we organize, access, and alter this data define the shape and efficiency of our programs. This series will provide an introductory overview of basic data structures, with examples from real life and the web, and an exploration of the time complexity of their fundamental methods. If the concept of time complexity is new to you, check out What Is Big-O Notation? for an overview and An Easy To Use Guide for Big-O Time Complexity for an outline of key terms. Let’s begin!

https://upload.wikimedia.org/wikipedia/commons/2/2a/Dirty_dishes.jpg

Stacks

LIFO — Last in, first out

IRL Example: a stack of dishes

Digital example: the call stack

Common methods: Push (add to the top of the stack) and Pop (remove the item from the top of the stack)

Stacks are simple structures that allow you to add items to the top (push) and pop them off following the rule “last in, first out.” As its name would suggest, the Javascript call stack is a stack. You may also be familiar with Javascript’s push and pop methods, which allow you to use arrays like a stack. The push and pop methods on any stack have the same time complexity as these array methods. Because we are always pushing or popping from the end of the collection, these methods occur in O(1), or constant time.
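
As a quick sketch, here’s what that might look like with a plain array standing in for our stack of dishes (the dish names are just placeholders):

```javascript
// A minimal sketch of an array used as a stack.
// Both operations touch only the end of the array, so each is O(1).
const dishStack = [];

dishStack.push('dinner plate');   // add to the top
dishStack.push('salad plate');
dishStack.push('cereal bowl');

const topDish = dishStack.pop();  // remove from the top
console.log(topDish);             // 'cereal bowl'
console.log(dishStack);           // ['dinner plate', 'salad plate']
```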

Think of this in terms of our dirty dish example. To remove a dish from the top of the stack, you need only to lift the top plate. Similarly, to add a plate to the top, only one action is required.

Now, what if you wanted to find your favorite cereal bowl in the stack? You would need to remove the dishes from the stack one by one until you found your bowl. The time complexity of this search would be O(n), linear time, because the number of dish removing actions required depends on n, the total number of dishes in the stack.
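
A rough sketch of that linear search might pop dishes one by one and count how many removals it takes (the findDish helper below is just an illustrative name):

```javascript
// A sketch of an O(n) linear search: pop dishes off the top until the
// target is found. In the worst case, every dish must be removed.
function findDish(stack, target) {
  let removed = 0;
  while (stack.length > 0) {
    const dish = stack.pop();
    removed += 1;
    if (dish === target) {
      return { dish, removed };
    }
  }
  return null; // the target was not in the stack
}

// 'cereal bowl' sits at the bottom, so all three dishes must be removed.
console.log(findDish(['cereal bowl', 'salad plate', 'dinner plate'], 'cereal bowl'));
// { dish: 'cereal bowl', removed: 3 }
```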

upload.wikimedia.org/wikipedia/commons/7/7c/People_waiting_a_train_of_Line_13_to_come_02.JPG

Queues

FIFO — First in, first out

IRL Example: A line of people

Digital Example: iTunes playlist

Common methods: Enqueue (add to the end of the queue) and Dequeue (remove from the start of the queue)

Queues function like a line of waiting people. New items can be enqueued, added to the “back of the line,” and existing items can be dequeued, removed from the line, in the order in which they were added. Just like the first person in line for the subway will be the first person to board the train, the first item in the queue will be the first to be removed by the dequeue method. Arrays can act like queues, as well. Items can be enqueued to the end of the collection using the push method, and dequeued from the start of the collection using shift.

Just like push and pop in a stack, dequeue and enqueue occur in O(1) constant time. The items being added are always added to the end of the collection (at an index equal to the collection’s length), and those being removed are always at the beginning, index 0. No linear searches required. However, finding an item in the queue would be an O(n) linear process, like finding the cereal bowl in our stack of dishes. Imagine you were at a ticket booth for a concert with a line of people in front of you. To find a specific person in the line, you would need to ask each person if their name was the one you’re looking for, let them pass into the concert, then ask the next person, let them in, and so on until you found the person you were looking for. The number of times you would have to execute the ask, pass, repeat sequence depends on n, the number of people in line for the concert.
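
As a sketch, here is one way a queue with constant-time enqueue and dequeue might be implemented: a plain object plus head and tail counters. (Array.prototype.shift typically has to re-index every remaining element, which is why this sketch avoids it; the Queue class name is just for illustration.)

```javascript
// A minimal queue sketch: items are stored under numeric keys, and two
// counters track the front and back, so enqueue and dequeue are both O(1).
class Queue {
  constructor() {
    this.items = {};
    this.head = 0;  // index of the next item to dequeue
    this.tail = 0;  // index where the next item will be enqueued
  }

  enqueue(item) {   // add to the back of the line
    this.items[this.tail] = item;
    this.tail += 1;
  }

  dequeue() {       // remove from the front of the line
    if (this.head === this.tail) return undefined; // queue is empty
    const item = this.items[this.head];
    delete this.items[this.head];
    this.head += 1;
    return item;
  }
}

const line = new Queue();
line.enqueue('Ada');
line.enqueue('Grace');
console.log(line.dequeue()); // 'Ada' — first in, first out
```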

https://upload.wikimedia.org/wikipedia/en/f/fa/Unstructured_peer-to-peer_network_diagram.png

Graphs

Non-hierarchical, any node can connect with any other node(s)

IRL(ish) Example: Network connections across devices

Digital Example: Connections between friends on social networking sites

Common Methods: Add node/edge, remove node/edge, Has Edge, Contains

Graphs represent connections between nodes, with no hierarchical structure or limits on the number of connections. A connection between two nodes is referred to as an “edge.” Some graphs are directed, meaning a node may point to another without the pointed-to node having a record of the relationship. For this discussion, we’ll stick with undirected graphs. A common real-life example of a graph is the connections between devices. Let’s imagine this in terms of the Bluetooth-connected devices in your home. You may have a Bluetooth speaker that has edges with (is connected to) your laptop, your phone, and your tablet. You may also have a connection between your phone and your laptop so that you can upload recent photos. Another connection may exist between a Bluetooth keyboard and both the desktop and the tablet you use it with. There’s no particular order or hierarchy to these connections, and any device may be connected to multiple other devices, which may in turn be connected to other devices. Social networking sites like Facebook use graphs to track connections between friends. Consider how this fits the same pattern as the Bluetooth connection example above.

Each node is a unique entity, thus it can be stored and accessed via a Javascript object in O(1) constant time (see part 3 of this series for insight into how constant time lookups on objects are achieved through hashing). Imagine your graph is an object and each node’s value is a key of that object. The value associated with each of these keys may be a collection of the connected nodes with which it shares an edge. If we want to execute a contains method to see if the graph contains an item, we can perform a constant time lookup on our graph object to see if a key equal to the value of the node we’re seeking exists. With a similar approach, we can add and remove nodes in constant time.
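
As a sketch of that idea, the graph could be stored as a plain object whose keys are node values and whose values are arrays of connected nodes (addNode, removeNode, and contains are just illustrative helper names):

```javascript
// A graph sketch stored as a plain object: each node's value is a key,
// and the value under that key is an array of connected node values.
const graph = {};

function addNode(value) {
  if (!(value in graph)) graph[value] = []; // O(1) insertion
}

function contains(value) {
  return value in graph;                    // O(1) lookup
}

function removeNode(value) {
  delete graph[value];                      // O(1) removal of the key itself
}

addNode('speaker');
addNode('laptop');
console.log(contains('laptop'));  // true
console.log(contains('toaster')); // false
```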

To check whether two nodes share an edge, we will need to use a linear lookup, assuming the collection of connected nodes associated with each node value is stored as an array. If this is the case, we can simply perform a constant time lookup to find the node values of the desired nodes and then use a linear array method, such as includes, on their connected node arrays to see if they contain the other node. We can remove edges in linear time in the same way, accessing the connected nodes array of each node in constant time and then using a linear method to search for and remove the reference to the other node’s value. Adding an edge can be done in constant time, with constant time lookups of the target nodes and a constant time push of the other node’s value onto each node’s connected nodes array. Note that there are several ways to represent graphs, each with methods of varying time complexity.
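
Continuing the same illustrative object-plus-adjacency-array representation, the edge methods might look something like this (addEdge, hasEdge, and removeEdge are hypothetical helpers, and the two starting nodes are stubbed in directly):

```javascript
// Edge methods for the object-based graph sketch above: each node is found
// in constant time, while scanning its adjacency array is linear.
const graph = { speaker: [], laptop: [] };

function addEdge(a, b) {
  graph[a].push(b);             // O(1) push onto each adjacency array
  graph[b].push(a);
}

function hasEdge(a, b) {
  return graph[a].includes(b);  // O(n) linear scan of a's connections
}

function removeEdge(a, b) {
  graph[a] = graph[a].filter(node => node !== b); // O(n) scan and rebuild
  graph[b] = graph[b].filter(node => node !== a);
}

addEdge('speaker', 'laptop');
console.log(hasEdge('laptop', 'speaker')); // true
removeEdge('speaker', 'laptop');
console.log(hasEdge('laptop', 'speaker')); // false
```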

https://upload.wikimedia.org/wikipedia/commons/6/6c/Officers%E2%80%99_Spouses%E2%80%99_Club_hosts_golf_tournament_to_raise_money_for_scholarships_140425-M-XW721-003.jpg

Sets

Order-less collections of items with no repeated values

IRL Example: A bowl of raffle tickets

Digital Example: Any program that can pick a random result from a set collection of items

Common Methods: Add, Remove, Contains

Sets have no order — think of them as a grab-bag of data points. Adding items to a set is a simple constant time operation, as the location of insertion is irrelevant. If the set is stored in an object, removing items or checking to see if the set contains a specific value can also be achieved in constant time with a simple key lookup.
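
As a sketch, a set backed by a plain object might look like the following; Javascript also ships a native Set with the same add/delete/has behavior:

```javascript
// A set sketch backed by a plain object: the keys are the members, so
// add, remove, and contains are all constant-time key operations.
const raffle = {};

function add(value)      { raffle[value] = true; }    // O(1)
function remove(value)   { delete raffle[value]; }    // O(1)
function contains(value) { return value in raffle; }  // O(1)

add('ticket-42');
console.log(contains('ticket-42')); // true
remove('ticket-42');
console.log(contains('ticket-42')); // false

// The built-in Set offers the same operations:
const tickets = new Set(['ticket-42']);
console.log(tickets.has('ticket-42')); // true
```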

Keep an eye out for Part 2 of this series for an exploration of trees, including binary search trees, and linked lists. In Part 3, we’ll dive into the magical world of hash tables.
