Everything you need to know about Binary Search Trees

Definition, operations and how to implement them from scratch.

Zakarie A
CodeX
10 min read · Aug 22, 2021


Binary Search Trees (BSTs for short) are a fundamental data structure. They store and organise values that can be ordered, have plenty of applications, and can be used to implement data structures such as dynamic sets, dictionaries and priority queues.

This article shows how we can define and implement binary search trees and their operations.

Photo by Matthew Smith on Unsplash

Definition and terminology

Let (X, ≤) be a totally ordered set.

We start by defining inductively the concept of a node of a binary search tree, using two axioms:

(1) There exists a null node, denoted Null;

(2) a 4-tuple N = (P, x, L, R) such that:

  • x belongs to X,
  • P, L and R are nodes,
  • the binary-search-tree property is satisfied (we’ll come to it a bit later)

is a node. x is called the key of the node, P is called the parent node, L is the left child and R is the right child.

We may also want to store an auxiliary value in each node, but this does not change how we implement operations.

Leaves, trees, depth, height and ancestor

If the left child and the right child of a node are both null, the node is called a leaf. If its parent is null, it is called a root node.

A binary search tree B can be null or characterised by a root node R. We say that R belongs to B and that a node belongs to B exactly when its parent does. If N is a node, we will simply refer to the tree whose root node is N as N. In the rest of this article, we’ll assume the keys are unique in trees.

The depth of a node in a tree is its distance to the root. Formally, the depth of the root is 0 and if a node (p, v, L, R) has depth x, L and R have depth (x + 1).

We define the height of a tree as the depth of its deepest leaf, i.e. the maximum of the depths of all its leaves.

Finally, we say that a node N is an ancestor of a node M if N = M or N is the parent of an ancestor of M. We say that N is a proper ancestor of M if N is an ancestor of M and N ≠ M.

This allows us to define a type for trees as a discriminated union.
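
The sketch below shows one possible encoding in F#, assuming integer keys; the names Tree, NodeData, Key, Parent, Left and Right are illustrative choices, and the mutable fields let the operations described later rewire nodes in place.

    // A tree is either the null node or a node wrapping a record whose
    // fields can be mutated in place (useful for insertion and deletion).
    type Tree =
        | Null
        | Node of NodeData

    and NodeData =
        { mutable Key: int        // assuming integer keys for simplicity
          mutable Parent: Tree
          mutable Left: Tree
          mutable Right: Tree }

The code sketches in the rest of this article build on this type.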

The binary search property

The binary-search-tree property organises nodes according to the order of their keys. Given a non-null node (P, x, L, R), it states that Key(ℓ) ≤ x for every node ℓ in the tree L, and that x ≤ Key(r) for every node r in the tree R.

For example, the image below shows a binary search tree. Its root has key 4. Its leaves have keys 1, 5 and 11.

But the following binary tree is not a binary search tree:

This is because the right-subtree of the root contains a node with key -1, which is less than the key of the root.

Operations

Binary search trees support the following operations:

  • walking,
  • inserting,
  • searching,
  • min-querying and max-querying,
  • successor querying,
  • deleting.

Walking

Walking or traversing a tree means visiting all nodes in some order and passing them to some function.

There are three walking algorithms: in-order, pre-order and post-order. In-order traversal visits a node (P, x, L, R) by recursively traversing its left sub-tree L in-order, then printing (or doing whatever you please) the key x and finally recursively traversing R in-order. It is implemented as follows:
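
The sketch below assumes the Tree type defined earlier; visit is whatever function we want to apply to each key.

    // In-order: traverse the left subtree, visit the key, traverse the right subtree.
    let rec walk_in_order (visit: int -> unit) (tree: Tree) =
        match tree with
        | Null -> ()
        | Node n ->
            walk_in_order visit n.Left
            visit n.Key
            walk_in_order visit n.Right

For instance, walk_in_order (printfn "%d") tree prints every key on its own line.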

  • In-order traversal prints all nodes of a binary search tree in sorted order. It runs in Θ(N) time with respect to the number of nodes in the tree.
  • walk_in_order can be made tail-recursive using a continuation function.
  • Similarly, pre-order and post-order traversals print the current key before traversing the subtrees, and after traversing the subtrees, respectively.

Inserting

Inserting a key v into a BST consists of adding a node with key v in a way that produces a valid binary search tree.

The procedure we use to perform this operation creates a new leaf N with key v. To find the parent of N, we walk from the root downwards, choosing at each node the branch that maintains the binary-search-tree property, until the branch we would follow is null. The last node visited (we'll call it M) becomes the parent of N. If v ≤ Key(M) (or Key(M) < v, respectively), we set the left child (or the right child, respectively) of M to N.

Here is a recursive implementation:
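
The sketch below assumes the Tree type defined earlier; the tree is passed around as a Tree ref so that inserting into an empty tree can replace the root.

    // findParent descends from the root until the branch it would follow is Null;
    // the last node visited becomes the parent of the new leaf.
    let insert (tree: Tree ref) (key: int) =
        let rec findParent parent current =
            match current with
            | Null -> parent
            | Node n ->
                findParent current (if key <= n.Key then n.Left else n.Right)

        let leaf = { Key = key; Parent = Null; Left = Null; Right = Null }
        match findParent Null tree.Value with
        | Null ->
            tree.Value <- Node leaf                       // the tree was empty
        | Node p as parent ->
            leaf.Parent <- parent
            if key <= p.Key then p.Left <- Node leaf
            else p.Right <- Node leaf

A tree can then be built with, for example, let tree = ref Null followed by [8; 3; 10; 1; 6] |> List.iter (insert tree) (the keys are arbitrary).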

The subroutine findParent calls itself recursively until it lands on a null node. The node visited just before that null node is then not just any node: it is the future parent of the new node.

Once we’ve retrieved the parent, we just have some processing to perform. We create the leaf N and we decide whether it must be the left child or the right child of its parent.

Making the previous function iterative is not much harder:
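
A sketch of the loop-based version, under the same assumptions:

    let insertIterative (tree: Tree ref) (key: int) =
        let leaf = { Key = key; Parent = Null; Left = Null; Right = Null }
        let mutable parent = Null
        let mutable current = tree.Value
        let mutable descending = true
        // Descend until the branch we would follow is Null, remembering the parent.
        while descending do
            match current with
            | Null -> descending <- false
            | Node n ->
                parent <- current
                current <- if key <= n.Key then n.Left else n.Right
        match parent with
        | Null -> tree.Value <- Node leaf                 // the tree was empty
        | Node p ->
            leaf.Parent <- parent
            if key <= p.Key then p.Left <- Node leaf
            else p.Right <- Node leaf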

Inserting runs in O(H) time, where H is the height of the tree.

Searching

Binary search trees allow us to search for a node given its key in time linear in the height of the tree, as shown below.
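
A sketch over the same Tree type as before:

    // Follow the only branch that can contain the target key.
    let rec contains (key: int) (tree: Tree) =
        match tree with
        | Null -> false
        | Node n when key = n.Key -> true
        | Node n when key < n.Key -> contains key n.Left
        | Node n -> contains key n.Right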

The implementation is quite straightforward. We check if the current node contains the target key (in which case we return true) and call recursively on the left subtree if the target key is smaller than the key of the current node, or on the right subtree if the target key is bigger. If we reach a null node before terminating then the target key does not belong to the BST.

The same idea can be implemented iteratively:
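
A sketch of the loop-based version:

    let containsIterative (key: int) (tree: Tree) =
        let mutable current = tree
        let mutable found = false
        let mutable searching = true
        while searching do
            match current with
            | Null -> searching <- false                  // fell off the tree: not found
            | Node n when key = n.Key ->
                found <- true
                searching <- false
            | Node n ->
                current <- if key < n.Key then n.Left else n.Right
        found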

Min and max querying

Finding the minimum (or maximum) key in a tree is similar to searching. All we need to do is walk down the leftmost (or rightmost) branch until we reach a node with no left (or right) child.

We implement find-min as follows:
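
The sketch below returns the node itself rather than its key (Null on an empty tree), because the successor and deletion routines later in the article reuse it.

    // Keep walking left: the last non-null node reached holds the smallest key.
    let rec findMin (tree: Tree) =
        match tree with
        | Null -> Null
        | Node n ->
            match n.Left with
            | Null -> tree
            | left -> findMin left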

find-max is similar.

Finding the successor of a node

Given a node N = (P, v, L, R) in a tree T, we define the successor of N in T as the node with the smallest key among those whose keys are strictly greater than v. Such a node doesn't necessarily exist.

If R is non-null then the successor of N is the minimum of R.

If R is null, N may still have a successor: it is its closest right ancestor, i.e. the closest ancestor whose left subtree contains N.

For example, the successor of 5 in the tree drawn below is 6: 4 is only a left ancestor of 5 (5 lies in its right subtree), so we move up to 6, which is a right ancestor of 5 (5 lies in its left subtree) and is therefore its successor.

More formally:

If N has a non-null right child then the successor of N is the minimum of the right child of N.

If the right child of N is null then N has a successor if and only if it has a right ancestor (in which case the successor of N is its closest right ancestor).

Proof outline:

If R is non-null, let m be the minimum of R; then v < Key(m). A node whose key lies strictly between v and Key(m) cannot be N itself or belong to L (those keys are at most v), cannot belong to R (m is its minimum), and cannot lie outside the subtree rooted at N: by the binary-search-tree property, a node outside that subtree is either smaller than every key in it or greater than every key in it, so it cannot fall strictly between v and Key(m), which both belong to the subtree. Hence m is the successor of N.

Suppose now that R is null and that N has a closest right ancestor m; by definition N lies in the left subtree of m, so v < Key(m). No key strictly between v and Key(m) can exist: it cannot be in N's subtree (R is null and the keys of L are at most v), it cannot be m itself, and it cannot lie outside m's subtree (the same separation argument as above, applied to the subtree rooted at m, which contains both v and Key(m)). The only remaining candidates are the proper ancestors of N below m and their left subtrees; since m is the *closest* right ancestor of N, all of these ancestors are left ancestors of N, so their keys, and the keys in their left subtrees, are smaller than v. This suffices to prove that m is the successor of N.

Finally, suppose that R is null and that N has no right ancestor. Every descendant of N lies in its left subtree, so its key is at most v, and every proper ancestor of N is a left ancestor, so its key, as well as every key in its subtree lying outside N's subtree, is smaller than v. No key in the tree exceeds v, so N has no successor.

This allows us to write the following implementation:
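
The sketch below reuses findMin. The helper isNode is an addition of this sketch, not part of the definitions above: it tests whether a Tree value wraps a given node record, which is how we check whether a node is the right child of its parent.

    // Does the tree value t wrap exactly the node record n?
    let isNode (n: NodeData) (t: Tree) =
        match t with
        | Node m -> obj.ReferenceEquals(m, n)
        | Null -> false

    // Successor: the minimum of the right subtree if there is one,
    // otherwise the closest right ancestor (Null if no such ancestor exists).
    let successor (tree: Tree) =
        match tree with
        | Null -> Null
        | Node n ->
            match n.Right with
            | Node _ -> findMin n.Right
            | Null ->
                // Climb while the current node is the right child of its parent.
                let rec climb (current: NodeData) =
                    match current.Parent with
                    | Node p when isNode current p.Right -> climb p
                    | parent -> parent
                climb n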

It returns Null if the node has no successor.

The running time is O(H), and Θ(H) in the worst case.

Deleting a node

The last operation we cover is trickier than the previous ones. Given a binary search tree R and a node N, the goal is to produce a new binary search tree R’ with exactly all nodes of R except N.

Repotting

The first thing we do is declare a procedure repot which replaces a node with another.
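
The sketch below again passes the tree as a Tree ref, so that the root can be replaced, and reuses the isNode helper introduced in the successor section.

    // Replace the node `previous` with the node `next` in `tree`.
    let repot (tree: Tree ref) (previous: Tree) (next: Tree) =
        // Ask the current parent of next to forget about it.
        match next with
        | Node n ->
            (match n.Parent with
             | Node np when isNode n np.Left -> np.Left <- Null
             | Node np when isNode n np.Right -> np.Right <- Null
             | _ -> ())
        | Null -> ()
        // Plug next into the position occupied by previous.
        match previous with
        | Null -> ()
        | Node prev ->
            (match prev.Parent with
             | Null -> tree.Value <- next                 // previous was the root
             | Node parent when isNode prev parent.Left -> parent.Left <- next
             | Node parent -> parent.Right <- next)
            // The parent of previous becomes the parent of next.
            match next with
            | Node n -> n.Parent <- prev.Parent
            | Null -> ()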

The first step asks the current parent of next to forget about its child (we'll give next a new parent in due time).

If previous has no parent (i.e. it is the root of tree) then we simply assign next to tree.

Otherwise, the parent of previous becomes the parent of next, and next takes previous's place as the left or right child of that parent.

Deleting

If at least one of the two children of N is null then we call repot N C, where C is the left child of N if its right child is null, its right child otherwise.

Consider the following example, where we want to delete node y.

The first case of delete applies: y has at most one non-null child. When we call repot y NULL, the child pointer of y's parent that referenced y (here x.Right) becomes null. We can then erase y from memory. The updated tree looks like this:

Suppose now that N has two non-empty children. In this case, we replace node N with its successor. For example, suppose the diagram below represents the input tree, and that we want to delete the node with key 17.

The goal is to replace 17 with its successor, 20.

The first step consists in using the function repot to connect the right subtree of 20 to 23. This gives the following temporary tree:

We then connect 17’s right subtree to 20: we set 20.Right <- 17.Right and 20.Right.Parent <- 20.

20 now has the appropriate right subtree. We need to set its left subtree and its parent. To do this, we call repot 17 20 and then connect 17’s left subtree to 20: 20.Left <- 17.Left and 20.Left.Parent <- 20.

Note that if the successor S of the node N we want to delete is its right child, then we can skip the first step: we simply repot S into N and attach N's left subtree to S.

Here is the final implementation:
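
The sketch below follows the steps above and reuses findMin, isNode and repot from the previous sections.

    // Remove the node `target` from `tree`.
    let delete (tree: Tree ref) (target: Tree) =
        match target with
        | Null -> ()
        | Node t ->
            match t.Left, t.Right with
            // At most one non-null child: it simply takes target's place.
            | Null, child | child, Null -> repot tree target child
            // Two children: the successor (minimum of the right subtree) takes over.
            | _, right ->
                match findMin right with
                | Null -> ()                              // unreachable: right is non-null
                | Node s as succ ->
                    if not (isNode s t.Right) then
                        // The successor sits deeper down: give its right subtree to
                        // its parent, then hand it target's right subtree.
                        repot tree succ s.Right
                        s.Right <- t.Right
                        (match s.Right with
                         | Node r -> r.Parent <- succ
                         | Null -> ())
                    // The successor takes target's place and inherits its left subtree.
                    repot tree target succ
                    s.Left <- t.Left
                    match s.Left with
                    | Node l -> l.Parent <- succ
                    | Null -> ()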

Conclusion: a word on the height

The runtime complexity of most operations we covered is proportional to the height of the tree. The height of a tree with n nodes is asymptotically between log n and n. If we insert nodes in sorted order, we end up with a degenerate, list-like tree: the height of the tree would be Θ(n), so there would be no benefit in using a tree rather than a list. Suppose now that you have a set of n keys and construct a BST by inserting all the keys in random order. It happens that the expected height of the tree is then Θ(log n), which is asymptotically optimal. There also exist self-balancing binary search trees, which keep their height logarithmic as we insert and delete nodes. Red-black trees are an example of self-balancing binary search trees. They will be the subject of a future article.

Support me!

If you found this article helpful, please consider following me to help me reach the 100 followers threshold required to remain in the Medium Partner Program. This is free, and it really helps.

You can also use my referral link to subscribe to Medium: Become a member. You’ll get access to all the member-only articles on Medium and your membership fee will directly support me.

🐙
