Common Misconceptions in Data Structures and Algorithms

A look at some common misconceptions beginners often run into with data structures and algorithms

Skilled Coder
Javarevisited
4 min read · Feb 1, 2024


Misconceptions about data structures and algorithms are common, especially among those just beginning to learn these concepts.

You might be familiar with some of these, but let’s jog your memory again.

Here are some common misconceptions, along with explanations and examples:

An algorithm that is fast on one type of data will be fast on all types: Performance can vary significantly depending on data patterns.

  • Example: Quicksort is usually fast, but on already sorted data (without precautions like randomizing pivots), it can degrade to O(n²) performance.
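
To see what that precaution looks like, here's a minimal Java sketch of a quicksort that swaps a random element into the pivot slot before partitioning, which makes the sorted-input worst case astronomically unlikely:

    import java.util.concurrent.ThreadLocalRandom;

    public class RandomizedQuickSort {

        // Sorts a[lo..hi] in place. A random pivot protects against
        // the O(n²) behavior a fixed pivot shows on sorted input.
        static void quickSort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1);
            quickSort(a, p + 1, hi);
        }

        static int partition(int[] a, int lo, int hi) {
            // The precaution: pick the pivot at random instead of
            // always using the last element.
            int r = ThreadLocalRandom.current().nextInt(lo, hi + 1);
            swap(a, r, hi);
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++) {
                if (a[j] < pivot) swap(a, i++, j);
            }
            swap(a, i, hi);
            return i;
        }

        static void swap(int[] a, int i, int j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }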

Recursion is always slower: Recursion can be slower due to the overhead of function calls, but that's not universally true.

  • Example: Merge Sort uses recursion and is often more efficient than many iterative sorting algorithms on larger datasets.
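
For reference, here's a compact recursive Merge Sort in Java. The recursion is only O(log n) deep, so the function-call overhead is negligible next to the O(n log n) guarantee it buys:

    import java.util.Arrays;

    public class MergeSort {

        // Returns a sorted copy; O(n log n) in the best, average, and worst case.
        static int[] mergeSort(int[] a) {
            if (a.length <= 1) return a;
            int mid = a.length / 2;
            int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
            int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
            return merge(left, right);
        }

        // Merges two sorted arrays into one sorted array.
        static int[] merge(int[] l, int[] r) {
            int[] out = new int[l.length + r.length];
            int i = 0, j = 0, k = 0;
            while (i < l.length && j < r.length)
                out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
            while (i < l.length) out[k++] = l[i++];
            while (j < r.length) out[k++] = r[j++];
            return out;
        }
    }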

Big O notation always tells you which algorithm is faster: Big O describes asymptotic behavior, not actual speed. Sometimes an O(n²) algorithm can be faster than an O(n log n) one for small inputs.

  • Example: For very small lists, Bubble Sort might be faster than Merge Sort due to its simplicity, even though its worst-case complexity is worse.
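
This is also why production sorting routines are often hybrids. Here's a rough sketch of the idea (the cutoff of 16 is an arbitrary illustrative value, not a tuned constant): a quicksort that hands small ranges to insertion sort, whose tiny constant factors win at that scale.

    public class HybridSort {

        static final int CUTOFF = 16; // illustrative threshold, not a tuned value

        // Quicksort for large ranges, insertion sort for small ones.
        static void sort(int[] a, int lo, int hi) {
            if (hi - lo < CUTOFF) {
                insertionSort(a, lo, hi);
                return;
            }
            int p = partition(a, lo, hi);
            sort(a, lo, p - 1);
            sort(a, p + 1, hi);
        }

        // Simple O(n²) sort that is very fast on tiny ranges.
        static void insertionSort(int[] a, int lo, int hi) {
            for (int i = lo + 1; i <= hi; i++) {
                int key = a[i], j = i - 1;
                while (j >= lo && a[j] > key) a[j + 1] = a[j--];
                a[j + 1] = key;
            }
        }

        static int partition(int[] a, int lo, int hi) {
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++) {
                if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
            }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;
            return i;
        }
    }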

Hash tables have O(1) complexity for all operations: While the average case for search, insert, and delete is O(1), in the worst case (when many keys collide) these operations can degrade to O(n).

  • Example: If a hash table uses chaining to resolve collisions and all keys have the same hash, operations become linear.
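
Here's a small Java demonstration of that degenerate case. BadKey is a made-up class for illustration: its constant hashCode() forces every entry into the same bucket.

    import java.util.HashMap;
    import java.util.Map;

    public class WorstCaseHashing {

        // Hypothetical key class: every instance hashes to the same bucket.
        static final class BadKey {
            final int id;
            BadKey(int id) { this.id = id; }
            @Override public int hashCode() { return 42; } // constant hash: all keys collide
            @Override public boolean equals(Object o) {
                return o instanceof BadKey && ((BadKey) o).id == id;
            }
        }

        public static void main(String[] args) {
            Map<BadKey, Integer> map = new HashMap<>();
            for (int i = 0; i < 10_000; i++) map.put(new BadKey(i), i);
            // Every get() now searches one overcrowded bucket instead of
            // jumping straight to the entry, which is what O(1) assumes.
            // (Java 8+ HashMap softens this by treeifying long chains,
            // which works best when keys are Comparable.)
            System.out.println(map.get(new BadKey(9_999)));
        }
    }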

All binary trees are binary search trees: While all binary search trees are binary trees, the reverse isn't true.

  • Example: A binary tree whose nodes don't follow the left-child < parent < right-child property isn't a binary search tree.
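
A related beginner mistake is validating only each node against its immediate children. Here's a sketch of a checker that carries the full allowed range down the tree, which catches violations deeper in a subtree:

    public class BstCheck {

        static final class Node {
            int val; Node left, right;
            Node(int val) { this.val = val; }
        }

        // True only if every node in the subtree lies strictly inside (min, max).
        static boolean isBst(Node n, long min, long max) {
            if (n == null) return true;
            if (n.val <= min || n.val >= max) return false;
            return isBst(n.left, min, n.val) && isBst(n.right, n.val, max);
        }

        public static void main(String[] args) {
            Node root = new Node(10);
            root.left = new Node(5);
            root.left.right = new Node(12); // 12 > 10 but lives in the left subtree
            // A child-only check would pass this tree; the range check rejects it.
            System.out.println(isBst(root, Long.MIN_VALUE, Long.MAX_VALUE)); // false
        }
    }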

Stacks and queues are entirely unrelated data structures: While they serve different purposes (LIFO vs. FIFO), one can be implemented using the other as a base.

  • Example: Two stacks can be used to implement a queue.
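
Here's a minimal sketch of that classic construction, using two ArrayDeque instances as the stacks:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class QueueFromStacks<T> {

        private final Deque<T> in  = new ArrayDeque<>(); // receives new elements
        private final Deque<T> out = new ArrayDeque<>(); // serves them in FIFO order

        public void enqueue(T item) {
            in.push(item);
        }

        public T dequeue() {
            // Draining 'in' into 'out' reverses the order exactly once,
            // which turns two LIFO stacks into one FIFO queue.
            if (out.isEmpty()) {
                while (!in.isEmpty()) out.push(in.pop());
            }
            return out.pop(); // throws NoSuchElementException if empty
        }
    }

Each element moves at most twice, so dequeue is amortized O(1).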

Parallelizing an algorithm will linearly decrease its runtime: Throwing more processors at a problem won't always lead to a proportional speedup due to communication overhead and potential bottlenecks.

  • Example: Amdahl’s law describes the limits of speedup due to parallelization, especially if parts of the algorithm can’t be parallelized.
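
Written out, Amdahl's law says the speedup on n processors is 1 / ((1 − p) + p/n), where p is the fraction of the work that can be parallelized. A tiny Java sketch makes the ceiling visible:

    public class Amdahl {

        // Amdahl's law: speedup = 1 / ((1 - p) + p / n)
        static double speedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        public static void main(String[] args) {
            // With 95% of the work parallelizable, speedup can never
            // exceed 20x, no matter how many processors are added.
            for (int n : new int[] {2, 8, 64, 1024}) {
                System.out.printf("n = %4d -> %.2fx%n", n, speedup(0.95, n));
            }
        }
    }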

Divide and conquer always means dividing in two: While many divide and conquer algorithms split the problem in half (like Merge Sort), the essence is about breaking the problem into smaller instances, not necessarily two.

  • Example: The Multiway Merge Sort algorithm divides data into more than two subsets.
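
As an illustration, here's a sketch of the core step, a k-way merge that combines k sorted lists in one pass using a min-heap over their current heads:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.PriorityQueue;

    public class KWayMerge {

        // Merges k sorted lists; each heap entry is {value, listIndex, elementIndex}.
        static List<Integer> merge(List<List<Integer>> lists) {
            PriorityQueue<int[]> heap =
                    new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
            for (int i = 0; i < lists.size(); i++) {
                if (!lists.get(i).isEmpty()) heap.add(new int[] {lists.get(i).get(0), i, 0});
            }
            List<Integer> out = new ArrayList<>();
            while (!heap.isEmpty()) {
                int[] top = heap.poll(); // smallest remaining head across all lists
                out.add(top[0]);
                int next = top[2] + 1;
                List<Integer> src = lists.get(top[1]);
                if (next < src.size()) heap.add(new int[] {src.get(next), top[1], next});
            }
            return out;
        }
    }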

All algorithms for a problem are equally good: Some beginners believe that if two algorithms solve the same problem, they're equally efficient.

  • Example: Bubble Sort and Quicksort both sort lists, but in most cases, Quicksort is significantly faster.

Depth-First Search (DFS) and Breadth-First Search (BFS) can always be used interchangeably: While both are graph traversal methods, they have distinct use cases.

  • Example: In a maze-solving problem, BFS can find the shortest path to the exit, whereas DFS might traverse a longer path.
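
Here's a rough BFS sketch for that maze scenario. It assumes a grid encoding where 0 is open and 1 is a wall, with the start at the top-left corner and the exit at the bottom-right:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MazeBfs {

        // Returns the length of a shortest path in steps, or -1 if unreachable.
        // BFS explores level by level, so the first time it reaches the exit
        // is guaranteed to be along a shortest path.
        static int shortestPath(int[][] maze) {
            int rows = maze.length, cols = maze[0].length;
            if (maze[0][0] == 1) return -1;
            boolean[][] seen = new boolean[rows][cols];
            Deque<int[]> queue = new ArrayDeque<>();
            queue.add(new int[] {0, 0, 0}); // {row, col, distance}
            seen[0][0] = true;
            int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
            while (!queue.isEmpty()) {
                int[] cur = queue.poll();
                if (cur[0] == rows - 1 && cur[1] == cols - 1) return cur[2];
                for (int[] m : moves) {
                    int r = cur[0] + m[0], c = cur[1] + m[1];
                    if (r >= 0 && r < rows && c >= 0 && c < cols
                            && maze[r][c] == 0 && !seen[r][c]) {
                        seen[r][c] = true;
                        queue.add(new int[] {r, c, cur[2] + 1});
                    }
                }
            }
            return -1;
        }
    }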

Dynamic programming is just about memoization: While memoization is a component, the core of dynamic programming is breaking a problem down into overlapping subproblems and reusing each subproblem's solution.

  • Example: The Fibonacci sequence can be computed using recursion. With memoization (top-down dynamic programming), you store the results of previous computations. A bottom-up dynamic programming approach, by contrast, builds the solution iteratively without recursion.
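
Both flavors, side by side, in a small Java sketch:

    import java.util.HashMap;
    import java.util.Map;

    public class Fibonacci {

        private static final Map<Integer, Long> memo = new HashMap<>();

        // Top-down: plain recursion plus a cache of solved subproblems.
        static long fibMemo(int n) {
            if (n <= 1) return n;
            Long cached = memo.get(n);
            if (cached != null) return cached;
            long result = fibMemo(n - 1) + fibMemo(n - 2);
            memo.put(n, result);
            return result;
        }

        // Bottom-up: the same subproblem structure, no recursion at all.
        static long fibIterative(int n) {
            long prev = 0, curr = 1;
            for (int i = 0; i < n; i++) {
                long next = prev + curr;
                prev = curr;
                curr = next;
            }
            return prev;
        }
    }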

Algorithms with the same Big O notation have the same performance: Big O describes an upper bound, not exact performance. Constants and lower-order terms, which Big O ignores, can play a role.

  • Example: Two algorithms might both have O(n²) complexity, but one might consistently perform better in practice due to smaller constant factors.

An ‘in-place’ algorithm means no extra space is used: In-place means the algorithm doesn't use extra space proportional to the input size; a small amount of auxiliary space (constant, or sometimes logarithmic) is still allowed.

  • Example: Quicksort is in-place but still uses O(log n) stack space on average for its recursion.

Data always needs to be sorted for binary search: While traditional binary search operates on fully sorted data, variations can exploit other kinds of structure in the data.

  • Example: In a ‘bitonic’ sequence (increasing then decreasing), a modified binary search can still find an element.
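
A sketch of that idea: first binary-search for the peak, then run an ordinary binary search on the ascending half and a mirrored one on the descending half. Everything stays O(log n):

    public class BitonicSearch {

        // Searches an array that strictly increases and then strictly decreases.
        static int search(int[] a, int target) {
            int peak = findPeak(a);
            int i = ascending(a, 0, peak, target);
            return i != -1 ? i : descending(a, peak + 1, a.length - 1, target);
        }

        // Binary search for the maximum: always move toward the rising side.
        static int findPeak(int[] a) {
            int lo = 0, hi = a.length - 1;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] < a[mid + 1]) lo = mid + 1; else hi = mid;
            }
            return lo;
        }

        static int ascending(int[] a, int lo, int hi, int t) {
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] == t) return mid;
                if (a[mid] < t) lo = mid + 1; else hi = mid - 1;
            }
            return -1;
        }

        // Same search with the comparison flipped for the descending half.
        static int descending(int[] a, int lo, int hi, int t) {
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                if (a[mid] == t) return mid;
                if (a[mid] > t) lo = mid + 1; else hi = mid - 1;
            }
            return -1;
        }
    }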

Heuristic algorithms guarantee a solution: Heuristics are often problem-specific strategies that give good solutions but don’t guarantee optimality or even a solution.

  • Example: In the traveling salesman problem, the nearest neighbor heuristic doesn’t always produce the shortest possible route.
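
A sketch of that heuristic over a precomputed distance matrix; it always returns a tour, just not necessarily the shortest one:

    public class NearestNeighborTsp {

        // Greedy tour starting from city 0: always visit the closest unvisited city.
        static int[] tour(double[][] dist) {
            int n = dist.length;
            boolean[] visited = new boolean[n];
            int[] order = new int[n]; // order[0] stays 0, the starting city
            visited[0] = true;
            int current = 0;
            for (int step = 1; step < n; step++) {
                int next = -1;
                for (int city = 0; city < n; city++) {
                    if (!visited[city]
                            && (next == -1 || dist[current][city] < dist[current][next])) {
                        next = city;
                    }
                }
                visited[next] = true;
                order[step] = next;
                current = next;
            }
            return order; // a valid tour, with no optimality guarantee
        }
    }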

Trees always have to be balanced: Not all trees are balanced. Trees can become skewed or degenerate into linked lists.

  • Example: Inserting elements in ascending order into a binary search tree can result in a right-skewed tree.
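
A quick way to see this in Java: insert the keys 1 through 100 in ascending order and measure the height, which comes out equal to the number of nodes, i.e. a linked list in disguise:

    public class SkewedBst {

        static final class Node {
            int val; Node left, right;
            Node(int val) { this.val = val; }
        }

        // Standard unbalanced BST insert.
        static Node insert(Node root, int val) {
            if (root == null) return new Node(val);
            if (val < root.val) root.left = insert(root.left, val);
            else root.right = insert(root.right, val);
            return root;
        }

        static int height(Node n) {
            return n == null ? 0 : 1 + Math.max(height(n.left), height(n.right));
        }

        public static void main(String[] args) {
            Node root = null;
            for (int i = 1; i <= 100; i++) root = insert(root, i); // ascending inserts
            // Every new key is larger than all previous ones, so it always
            // goes right: the tree degenerates into a chain of 100 nodes.
            System.out.println(height(root)); // prints 100
        }
    }

Self-balancing trees such as AVL and red-black trees exist precisely to prevent this degeneration.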

On my Twitter and Instagram accounts, I frequently share my programming journey and development experiences.

Follow me and subscribe to my emails for more unique coding articles.

Keep exploring, keep learning, and keep coding!

Thanks for reading :)
