Data Structure 101: Part 4

João Marcus Fernandes
Published in ArcTouch · Dec 6, 2021

This is the 4th post in a series about Data Structures.

In Part 3 we learned about Lists, and saw that there are several ways to create a List of elements and to manage how data is added or removed.

In this post, we are going to talk about trees, a special data structure. (You’re actually using a tree structure right now).

Trees are a non-linear data structure. Unlike a List, which keeps its elements in a linear sequence, a Tree grows by linking nodes as elements are added. The main element of a tree is a Node. Each node stores data as well as references to its next nodes (its children). The first node of a tree is called the Root, and the nodes at the very end, with no children, are called Leaves.

Graphical example of a Tree

Usually in Data Structures, we draw trees upside-down, so the Root is above everything else. In the image above, A is the Root; B, C, D, and F are Parents; and E, G, H, I, and J are Leaves. B is a child of A, D is a child of B, and so on.
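To make that concrete, here is a minimal sketch of a tree node in Python (the names Node and add_child are just illustrative):

```python
class Node:
    """A single tree node: it stores a value plus references to its children."""
    def __init__(self, value):
        self.value = value
        self.children = []

    def add_child(self, node):
        self.children.append(node)


# Building part of the tree from the image above: A is the Root, B is a child of A,
# and D is a child of B.
root = Node("A")
b = Node("B")
root.add_child(b)
b.add_child(Node("D"))
```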

This is a great Data Structure because you can create a relationship between nodes. For example, imagine a tree of integers where each node can have at most two children. To insert a new element, we follow this rule: if the value is greater than the current node's value, it goes to the right side of the tree; if it is smaller, it goes to the left side.

For example, our first element is 6.

We want to add 5. Since 5 is less than 6, we add it on the left side.

Now, we want to add 8. Since 8 is greater than 6, we add it on the right side.

If we need to add 7, we check the root node 6 first.

The right side of 6 is already taken by 8, so we move down to 8. Since 7 is less than 8, it is added on the left side of 8. Like this:

When inserting data into a tree, we need to navigate the nodes using a type of algorithm called Recursion. I explained recursive algorithms in a previous post, but I'll summarize them briefly again here.

Recursion is a method of solving a big problem by splitting the solution into a small function that repeatedly calls itself, walking through a list or tree until it reaches the last element. For example, consider a Binary Search Tree:
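Here is a minimal sketch in Python; the class and method names (BSTNode, add, search, height) are illustrative choices, not a fixed API:

```python
class BSTNode:
    """A node of a Binary Search Tree: smaller values go left, larger values go right."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def add(self, value):
        # Recursion: each node delegates the insertion to the correct child.
        if value < self.value:
            if self.left is None:
                self.left = BSTNode(value)
            else:
                self.left.add(value)
        else:
            if self.right is None:
                self.right = BSTNode(value)
            else:
                self.right.add(value)

    def search(self, value):
        # Only one side of the tree needs to be visited at each step.
        if value == self.value:
            return True
        if value < self.value:
            return self.left.search(value) if self.left else False
        return self.right.search(value) if self.right else False

    def height(self):
        # Height = number of levels: 1 for this node plus the tallest subtree below it.
        left_height = self.left.height() if self.left else 0
        right_height = self.right.height() if self.right else 0
        return 1 + max(left_height, right_height)


# The walkthrough above: 6 is the root, 5 goes left, 8 goes right, 7 goes left of 8.
# Two extra values (9 and 10) make the tree four levels tall.
root = BSTNode(6)
for value in (5, 8, 7, 9, 10):
    root.add(value)

print(root.height())  # 4
```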

This is a Binary Search Tree that uses the Add method (which I explained above), plus a Search method and a Height method. Height is the number of levels a tree has; this tree has a height of 4. All of these methods use recursion, calling themselves until the current node is the last one.

Binary search trees are optimized for searching, since at each node we only need to look at the side that can contain the value.

Visual representation of a Search in Binary Search Tree
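Using the sketch above (with the tree built from 6, 5, 8, 7, 9, and 10), a search only follows the path that can contain the value:

```python
# Searching for 7 visits only 6 -> 8 -> 7:
#   7 > 6, so go right; 7 < 8, so go left; found it.
# The left subtree (5) is never visited.
print(root.search(7))  # True
print(root.search(4))  # False: 4 < 6, go left; 4 < 5 and 5 has no left child
```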

I promised at the beginning of this post that I would explain how you are using trees right now. Well, you are using a web browser or an app to read this post. In either case, the elements are rendered on the screen by building a tree of elements. In an HTML page, for example, the html element is the root, head and body are its children, and every nested tag is a node below its parent.

Whenever you use something like document.querySelector("div > p"), you are doing a search on that tree; JavaScript just gives you an abstraction over it.

Cool, right? You know what would be even cooler?

If we built some Python code to be like HTML!
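Here is a rough sketch of that idea (the Element class and find method are just illustrative): each tag is a node, nesting gives us the children, and a query is just a walk down the tree.

```python
class Element:
    """A very small HTML-like node: a tag name plus child elements."""
    def __init__(self, tag, *children):
        self.tag = tag
        self.children = list(children)

    def find(self, tag):
        # Depth-first search down the tree, roughly what a selector engine does.
        matches = []
        for child in self.children:
            if child.tag == tag:
                matches.append(child)
            matches.extend(child.find(tag))
        return matches


# <html><body><div><p></p></div><p></p></body></html>
page = Element("html",
    Element("body",
        Element("div", Element("p")),
        Element("p"),
    ),
)

print(len(page.find("p")))  # 2
```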

So trees are simple and beautiful, right? But trees can have problems if their Height is too great. In our first example, if the first element is 1, almost every other positive value is greater than 1, so the tree grows huge on the right side. That is what we call an unbalanced tree.

Unbalanced trees appear whenever we build a tree without a management algorithm, and the taller a tree grows, the harder it gets to search.
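Using the BSTNode sketch from earlier, you can see this happen when the values arrive already sorted:

```python
# Inserting already-sorted values produces a tree that leans entirely to the right:
# every node has only a right child, so it behaves like a plain list.
unbalanced = BSTNode(1)
for value in (2, 3, 4, 5, 6):
    unbalanced.add(value)

print(unbalanced.height())  # 6: one level per element, the worst case for searching
```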

Remember FILO and FIFO from the last post? Those are management algorithms for Lists. Here's a management algorithm for balanced trees.

Self-balancing binary search tree (SBBST)

SBBST is hard to say, but simple to understand. This kind of tree runs a balancing algorithm after each insertion to try to keep the tree as short as possible.

See this example:

The nodes 31 > 33 > 34 only have one child each. We can fix that by applying the AVL Tree algorithm, which rotates nodes either to the left or to the right. In this case we can rotate 31 with 33, so 31 ends up on the left side of 33 and 34 on the right side of 33. This is called a left rotation, and it results in this:

Now you have a more balanced tree. AVL Trees are just one of the types of SBBST that exist. Here's some more information about how they work.
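The rotation itself is a small operation. Here is a sketch of a left rotation on the BSTNode class from earlier (a real AVL Tree also tracks balance factors to decide when to rotate, which is omitted here):

```python
def rotate_left(node):
    """Rotate `node` to the left: its right child becomes the new local root."""
    new_root = node.right        # 33 takes the place of 31
    node.right = new_root.left   # whatever was on 33's left becomes 31's right
    new_root.left = node         # 31 moves to the left side of 33
    return new_root              # the parent should now point to 33


# The chain 31 -> 33 -> 34 from the example above:
chain = BSTNode(31)
chain.add(33)
chain.add(34)

balanced = rotate_left(chain)
print(balanced.value, balanced.left.value, balanced.right.value)  # 33 31 34
```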

Conclusion

In the next post, I'll introduce sorting algorithms as the final topic of this Data Structures series. Want to learn more? Please leave a comment.

Here at ArcTouch we have a great team of software developers. If you’d like to join us, check out our career opportunities.
