Published in CodeX
# Sorting

The two most common efficient ways to sort an array are Quick Sort and Merge Sort. The others — Bubble Sort, Insertion Sort, Radix Sort, etc. — are generally less efficient or rely on specific assumptions about the data.

• Merge Sort operates by sorting the left and right halves of the array and then merging the two sorted halves. How does it sort each half? Through the Merge Sort algorithm itself (recursively): it takes the left half, divides it in half, sorts each part, and then merges those. It then does the same on the right half. Merge Sort is O(n log(n)) in both the average and worst cases.
• Quick Sort sorts data by choosing a random “pivot” element and rearranging the elements in the array based on whether they’re less than or greater than the pivot. Next, it tackles the elements on the left side of the pivot (all of which are less than or equal to the pivot) and on the right side (all of which are greater than the pivot). It applies the same strategy to each side: pick a pivot, partition, and recurse. Quick Sort is O(n log(n)) in the average case, but O(n²) in the worst case. The worst case happens if a bad pivot (a very low or very high element) is repeatedly picked.
• Insertion Sort maintains a sorted sublist (initially empty) at the beginning of the array. It then looks at the first element of the unsorted portion. If this element is bigger than the last element of the sorted sublist, it leaves it in place and simply grows the sorted portion (since the element is already in the correct position). If it’s smaller, it moves it into place within the sorted sublist. The unsorted portion shrinks by one each time. The algorithm repeats this step for each element in the unsorted portion until the array is fully sorted. Insertion Sort takes O(n) time in the best case (if the array is already sorted), but O(n²) in the expected and worst cases.
• Bubble Sort is a pretty straightforward algorithm. It iterates through the list repeatedly, swapping each adjacent pair of elements that is out of order. Once a full pass happens without any swaps, the array is sorted. This takes O(n) in the best case (if the array is already sorted) and O(n²) in the expected and worst cases.
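The divide-and-conquer descriptions above can be sketched in Python. This is an illustrative sketch (building new lists for clarity), not an optimized in-place implementation:

```python
import random

def merge_sort(arr):
    """Sort by recursively sorting each half, then merging the sorted halves."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves by repeatedly taking the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def quick_sort(arr):
    """Sort by partitioning around a random pivot, then recursing on each side."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```

Note that this version of Quick Sort mitigates the worst case by picking the pivot at random, as described above.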

## Problem 1 — Given two sorted arrays, write a function to merge them in sorted order into a new array.

The most efficient way to tackle this is to use the fact that both arrays are sorted. We can merge them by repeatedly taking the smaller of the two front elements until we reach the end of both arrays. We maintain a pointer into each array so we can easily advance past the elements we have already merged.
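A sketch of that two-pointer merge, running in O(len(a) + len(b)) time:

```python
def merge_sorted(a, b):
    """Merge two sorted arrays into a new sorted array."""
    result = []
    i = j = 0
    # Take the smaller front element until one array is exhausted.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    # Append whatever remains (at most one of these is non-empty).
    result.extend(a[i:])
    result.extend(b[j:])
    return result
```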

## Problem 2 — Implement insertion sort.

Insertion sort operates by iterating through the array and inserting each element into the elements to its left.
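A sketch of an in-place insertion sort following the description above:

```python
def insertion_sort(arr):
    """Sort arr in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        # Shift larger elements in the sorted prefix one slot to the right.
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current
    return arr
```

If the array is already sorted, the inner loop never runs, which is why the best case is O(n).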

# Binary Search

Binary search is an algorithm for locating a value in a sorted list (typically an array). In binary search, we compare the value to the midpoint of our list. Since our list is sorted, this comparison tells us whether the value lies to the left or right of the midpoint. We then search that side, repeating the operation: compare to the midpoint of the sublist, and then search the left or right half of that sublist.

## Problem 3 — Implement binary search. Given a sorted array of integers and a value, find the location of that value.

Binary search works by repeatedly halving the array into subarrays. In the first iteration, we compare the value x to the midpoint and learn whether x lies in the left or right half. Then, we repeat this step on the new subarray: is x in the left half of that subarray or the right?
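An iterative sketch that tracks the current subarray with two indices, returning -1 if the value is absent:

```python
def binary_search(arr, x):
    """Return the index of x in the sorted array arr, or -1 if x is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            lo = mid + 1   # x can only be in the right half
        else:
            hi = mid - 1   # x can only be in the left half
    return -1
```

Each iteration discards half of the remaining elements, giving O(log n) time.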

## Problem 4 — You are given a sorted integer array that has been rotated. It contains all distinct elements. Find the minimum value. For example, the array might be 6, 8, 9, 11, 15, 20, 3, 4, 5. The minimum value would be 3.

A brute force solution would be to just iterate through the array and look for the minimum value. We can guess that this isn’t what the interviewer is looking for, though, since it doesn’t use the sorting information.
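One way to use the sorted-then-rotated structure is a binary search for the rotation point (which holds the minimum). A sketch, assuming distinct elements as the problem states:

```python
def find_min_rotated(arr):
    """Find the minimum of a sorted-then-rotated array of distinct values in O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] > arr[hi]:
            # The rotation point (and minimum) must lie to the right of mid.
            lo = mid + 1
        else:
            # arr[mid..hi] is sorted, so the minimum is at mid or to its left.
            hi = mid
    return arr[lo]
```

On the example above, `find_min_rotated([6, 8, 9, 11, 15, 20, 3, 4, 5])` returns 3.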

# Graph Search

There are two common algorithms for searching a graph: depth-first search and breadth-first search.


## Problem 5 — Using depth-first search, check if a tree contains a value.

Depth-first search works by checking whether a value v equals the current node’s value. If it does not, you search each child of the node, one by one, exploring each subtree fully before moving on to the next.
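A recursive sketch, using a hypothetical `Node` class with a `value` and a list of `children`:

```python
class Node:
    """A simple tree node (hypothetical; adapt to your tree representation)."""
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def contains_dfs(node, v):
    """Return True if the tree rooted at node contains the value v."""
    if node is None:
        return False
    if node.value == v:
        return True
    # Recurse into each child in turn, going deep before moving on.
    return any(contains_dfs(child, v) for child in node.children)
```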

## Problem 6 — Write the pseudocode for breadth-first search on a binary tree.

To perform a breadth-first search, we want to visit the nodes level by level: we visit each node’s children before we visit any of their children, typically by keeping the discovered nodes in a queue.
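A Python rendering of that pseudocode, using a hypothetical `TreeNode` class and a queue, here searching for a value v:

```python
from collections import deque

class TreeNode:
    """A simple binary tree node (hypothetical; adapt to your representation)."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bfs_contains(root, v):
    """Visit nodes level by level; return True if v is found."""
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()      # take the oldest discovered node
        if node.value == v:
            return True
        # Enqueue children so they are visited after this entire level.
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return False
```

Using a queue (first in, first out) is what makes the traversal level by level; swapping it for a stack would give depth-first search instead.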

## Crack FAANG


Understand the technical details behind all your favorite products. We help you put your best foot forward so you can get through the FAANG door.