iOS Interview Prep Guide: 30-Day Code Challenge in Swift — Week 2/5
The Swift 30-day code challenge is intended for developers who are preparing for upcoming iOS interviews or looking to advance their Swift programming skills. If you haven’t done the first week’s code challenges, you can find them here. When you’re ready, you can also move on to Week 3.
For developers who want to work on their own, feel free to use my GitHub repo, which includes a solution and unit tests for each coding question.
Let’s get started. In this article, we will discuss another seven programming questions, from the second week of the 30-day code challenge.
- Middle of the Linked List (Solution)
- Backspace String Compare (Solution)
- Min Stack (Solution) 🥉
- Diameter of Binary Tree (Solution) 🥈
- Last Stone Weight (Solution)
- Contiguous Array (Solution) 🥇
- Perform String Shifts (Solution)
(🥇 🥈 🥉 indicate writer’s selection of high-frequency interview questions.)
Middle of the Linked List
Description: Given a non-empty singly linked list with a head node, return the middle node of the linked list. If there are two middle nodes, return the second middle node.
Example 1
Input: [1, 2, 3, 4, 5]
Output: Node 3
Example 2
Input: [2, 4, 6, 8, 10, 12]
Output: Node 8
This would be a fairly easy question if the data structure were an Array. With a Swift Array, you can get the length by reading its count property. However, a linked list has no such property; the only way to get its length is to traverse it. An intuitive approach is to traverse the linked list to get its count, divide the count by 2, and then go through the linked list a second time, returning the node once it reaches the count/2 position. This operation takes O(2N).
Just a side note: according to Apple’s documentation, calculating the count of a Collection can be an O(N) operation unless the collection guarantees random-access performance. Therefore, count is O(1) for collections such as Array that conform to RandomAccessCollection.
Back to the question, let’s take a step further, say the interviewer asks you for a one-pass solution. What potential optimizations can you think of?
A one-pass algorithm generally requires O(N) time and less than O(N) storage (typically O(1)), where N is the size of the input. More details on wiki.
If we’re only allowed one pass, we need two pointers traversing the linked list at different speeds. The fast pointer advances two nodes per step while the slow pointer advances one node per step. This way, the slow pointer stops in the middle of the linked list just as the fast one reaches the end. The full solution is shown in the following.
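A minimal sketch of this fast/slow pointer approach (the ListNode class here is an assumption following LeetCode conventions; the repo’s actual definition may differ):

```swift
// Minimal singly linked list node (assumed shape, per LeetCode conventions).
final class ListNode {
    let val: Int
    var next: ListNode?
    init(_ val: Int) { self.val = val }
}

// One-pass solution: fast advances two nodes per step, slow advances one,
// so slow sits on the middle (the second middle for even counts)
// when fast runs off the end.
func middleNode(_ head: ListNode?) -> ListNode? {
    var slow = head
    var fast = head
    while fast != nil && fast?.next != nil {
        slow = slow?.next
        fast = fast?.next?.next
    }
    return slow
}
```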
The time complexity of this one-pass solution is accordingly O(N).
Backspace String Compare
Description: Given two strings, return whether they are equal when both are typed into empty text editors. # means a backspace character. Note that after backspacing an empty text, the text will remain empty.
Example 1
Input: S = "ab##", T = "c#d#"
Output: true
Explanation: Both S and T become "".
Example 2
Input: S = "a#c", T = "b"
Output: false
Explanation: S becomes "c" while T becomes "b".
The way to solve this problem is straightforward. We can write our own formatting logic that iterates over each string in reverse order. During the iteration, we keep updating the count of backspaces (#) so we know how many upcoming characters to omit. If the backspace count is 0, we simply append the character to the output of the formatting function. Otherwise, we decrement the backspace count by 1 and skip the character, since they offset each other. Here’s the solution.
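A sketch of that reverse-iteration approach (the function names are my own; the repo’s solution may differ). Note that the formatted output is built in reverse order, which is fine because both strings are formatted the same way before comparison:

```swift
// Compares two typed strings by formatting each one in reverse,
// counting '#' backspaces to know how many characters to skip.
func backspaceCompare(_ s: String, _ t: String) -> Bool {
    func format(_ str: String) -> String {
        var result: [Character] = []
        var backspaces = 0
        for ch in str.reversed() {
            if ch == "#" {
                backspaces += 1
            } else if backspaces > 0 {
                backspaces -= 1   // this character is erased by a later '#'
            } else {
                result.append(ch)
            }
        }
        // Result is reversed, but consistently so for both strings.
        return String(result)
    }
    return format(s) == format(t)
}
```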
Time complexity: O(N), where N is the length of the strings, which are specified to be within the range of 1 to 200. Let’s break it down: the runtime estimate starts from the formatting function. It takes O(N) to go over the string and another O(N) to convert the character array to a string. Since we apply the function to both strings, the worst case is consequently O(4N). Dropping the constant, the runtime eventually becomes O(N).
Min Stack
🥉 Writer’s selection of the week
Description: Design a stack that supports push, pop, top, and retrieving the minimum element in constant time.
Example
MinStack minStack = new MinStack();
minStack.push(-2);
minStack.push(0);
minStack.push(-3);
minStack.getMin(); // return -3
minStack.pop();
minStack.top(); // return 0
minStack.getMin(); // return -2
The protocol lists the functions we have to implement in MinStack. A key requirement here is retrieving the minimum element in constant time, which means iterating through the data structure to find the smallest value won’t qualify as a solution.
In order to retrieve the minimum value in the stack in constant time, the data stored for each element has to be not only its value but also the then-minimum value at the time we pushed it. This way, we won’t lose track of the minimum value when we perform other operations. Here are a couple of choices of the data structures for this purpose:
- Create two arrays, one of which stores the value while the other one tracks the minimum value at the time a new value is pushed.
- Create an array with a tuple as the element that stores value and the then-minimum value.
- Create a data structure — Node — that stores the value and the then-minimum value as its data. A new node is appended to the array when a new value is pushed and removed when popped.
All of the above-mentioned data structures work; however, I would choose the last one, since it is a good chance to practice our design skills by applying object-oriented programming knowledge.
In real-world software development, programming logic can become extremely complex. In that case, we want to use object-oriented programming to model our concepts and keep them organized.
Since Swift offers Array operation methods, we don’t have to worry too much about push(_:), pop(), or top() functions of MinStack. The only function that we care about is getMin(). Taking advantage of Node, the runtime taken to get the minimum value of the stack is O(1).
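A sketch of the Node-based design (names are illustrative; the force-unwraps assume, as LeetCode guarantees, that pop(), top(), and getMin() are never called on an empty stack):

```swift
// Node keeps the value together with the minimum seen at push time,
// so getMin() is just a lookup on the top node.
struct Node {
    let value: Int
    let minValue: Int
}

final class MinStack {
    private var nodes: [Node] = []

    func push(_ x: Int) {
        // The new node's minimum is the smaller of x and the previous minimum.
        let currentMin = nodes.last.map { min($0.minValue, x) } ?? x
        nodes.append(Node(value: x, minValue: currentMin))
    }

    func pop() {
        nodes.removeLast()
    }

    func top() -> Int {
        return nodes.last!.value
    }

    func getMin() -> Int {
        return nodes.last!.minValue
    }
}
```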
The optional last property of a Swift collection returns the last element of the collection if it exists. For Array, the time complexity of the operation is O(1).
Diameter of Binary Tree
🥈 Writer’s selection of the week
Description: Given a binary tree, you need to compute the length of the diameter of the tree. The diameter of a binary tree is the length of the longest path between any two nodes in a tree. This path may or may not pass through the root.
Example
Given the binary tree
      1
     / \
    2   3
   / \
  4   5
Output: 3
Explanation: [4, 2, 1, 3] or [5, 2, 1, 3] forms the longest path.
Note: The length of the path between two nodes is represented by the number of edges between them.
We can divide the problem into three subproblems.
- How do we calculate the longest one-way length associated with a tree node? Let’s define a leg as a one-way path that starts at, but does not pass through, the local root (the starting node).
- How do we find the local longest path of a node? It consists of up to two non-overlapping legs, since it may or may not pass through the root. In other words, two legs on the same side of the root cannot be included simultaneously.
- How do we find the longest path of the tree? It is obtained by comparing the local longest paths.
The first subproblem is straightforward. Taking the above example, the root has two possible left legs, [2, 4] and [2, 5], while it has only one right leg, [3]. Hence, the local longest path can be derived from the equation:
local longest path = 1 + longest left leg + longest right leg
where 1 counts the root node itself. In this case, the longest left leg is either [2, 4] or [2, 5] with a length of 2, and the longest right leg is [3] with a length of 1. The sum is 4, but since the question asks for a count of edges, we have to subtract 1, so the answer ultimately becomes 3.
Let’s do another practice run, taking 2 as the local root node. Its longest left leg is [4] and its longest right leg is [5]. Both have a length of 1. The local longest path then becomes 2 (1 + 1 + 1 − 1 = 2).
Done with the first two subproblems, we continue to the last one: finding the longest path in the entire tree. To accomplish this, we can simply pass a variable to track it; see the solution below.
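The three subproblems above can be sketched as follows (the TreeNode class is an assumption following LeetCode conventions; the repo’s definition may differ):

```swift
// Minimal binary tree node (assumed shape, per LeetCode conventions).
final class TreeNode {
    let val: Int
    var left: TreeNode?
    var right: TreeNode?
    init(_ val: Int) { self.val = val }
}

// Returns the diameter, i.e. the number of edges on the longest path.
func diameterOfBinaryTree(_ root: TreeNode?) -> Int {
    var longest = 0

    // Returns the longest leg (counted in nodes) hanging from `node`.
    func longestLeg(_ node: TreeNode?) -> Int {
        guard let node = node else { return 0 }
        let left = longestLeg(node.left)
        let right = longestLeg(node.right)
        // Local longest path in nodes = 1 + left + right; in edges = left + right.
        longest = max(longest, left + right)
        return 1 + max(left, right)
    }

    _ = longestLeg(root)
    return longest
}
```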
Time complexity: O(N) as we traverse through the entire tree. Each node is accounted for once in calculating the local longest path.
Last Stone Weight
Description: We have a collection of stones, and each stone has a positive integer weight. Each turn, we choose the two heaviest stones and smash them together. Suppose the stones have weights x <= y. The result of this smash is:
- If x == y, both stones are totally destroyed.
- If x != y, the stone of weight x is totally destroyed, and the stone of weight y has new weight y - x.
At the end, there is at most one stone left. Return the weight of this stone (or 0 if there are no stones left).
Example
Input: [2, 7, 4, 1, 8, 1]
Output: 1
Explanation:
The stones array is [1, 1, 2, 4, 7, 8] after sorting.
We smash 7 and 8 to get 1. Re-sort the array to get [1, 1, 1, 2, 4].
We smash 2 and 4 to get 2. Re-sort the array to get [1, 1, 1, 2].
We smash 1 and 2 to get 1. Re-sort the array to get [1, 1, 1].
We smash 1 and 1 to get 0. The array becomes [1].
The last stone weight is therefore 1.
Note that the array length of the stones is in the range of 1 to 30. The weight of the stone is in the range of 1 to 1000.
Since the note indicates that the range of the stones isn’t too large to handle, we can simply follow the explanation to write a solution.
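A minimal sketch of that straightforward simulation, re-sorting only when a new stone is produced:

```swift
// Keeps the array sorted ascending and repeatedly smashes the two heaviest stones.
func lastStoneWeight(_ stones: [Int]) -> Int {
    var stones = stones.sorted()
    while stones.count > 1 {
        let y = stones.removeLast()   // heaviest
        let x = stones.removeLast()   // second heaviest, x <= y
        if x != y {
            stones.append(y - x)
            stones.sort()             // re-sort only when a new stone is produced
        }
    }
    return stones.first ?? 0
}
```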
Similar to the Happy Number problem we talked about last week, it’s hard to estimate the runtime of the solution precisely. FYI, the solution above is faster than 91% of Swift entries on LeetCode. One optimization we can make is to reduce the use of sorting, given its computationally expensive nature; here we re-sort only if the two stones differ in weight.
Contiguous Array
🥇 Writer’s selection of the week
Description: Given a binary array, find the maximum length of a contiguous subarray with equal number of 0 and 1.
Example 1
Input: [0, 1]
Output: 2
Explanation: [0, 1] is the longest contiguous subarray with an equal number of 0s and 1s.
Example 2
Input: [0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
Output: 8
Explanation: [0, 1, 0, 0, 1, 0, 1, 1] from index 0 to index 7 and [1, 0, 0, 1, 0, 1, 1, 0] from index 1 to index 8 are the longest contiguous subarrays; both have a length of 8.
Long story short, we have to find the longest subarray with the same counts of 0s and 1s. A brute-force approach running nested for-loops is possible, yet at O(N²) it takes too much time to be an acceptable solution.
In order to efficiently find all subarrays with equal numbers of 0s and 1s, a dynamic programming approach is recommended. First off, we need a count variable. While traversing the input array, we do count += 1 for every 1 and count -= 1 for every 0. Whenever the count returns to 0, we have seen the same numbers of 0s and 1s. In Example 2, [0, 1] qualifies as such a contiguous subarray, since the count is back to 0 after traversing it:
count is 0 at the start (before index 0).
count -= 1, so count is -1 at index 0.
count += 1, so count is 0 at index 1.
Besides, there are other points that can indicate a subarray has the same counts of 0s and 1s. Again in Example 2, [0, 1, 0, 1, 1, 0] from index 3 to index 8 is another such contiguous subarray, though the count retrieved after traversing it is -1. What happened?
After close scrutiny, I found out that the count is -1 both before the start of the subarray and after its end. In other words, the subarray has zero net increase to the count, which implies it has the same counts of 0s and 1s. If we apply this rule to the [0, 1] subarray, we’ll find it has zero net increase to the count as well. Therefore, we can conclude the rule to follow: find all subarrays that have zero net increase to the count.
To complete the task, we need a Dictionary that records the first index at which each count value appears, so we can detect a zero net increase. Whenever a subarray with zero net increase appears, we get its length and compare it with maxLength to obtain the longest subarray throughout the array.
The solution with explanation is shown below. As I intended to use shorter paragraphs to illustrate this complicated question, you can find more details in this LeetCode article, which our solution refers to.
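A sketch of that Dictionary-based approach. The dictionary maps each count value to the first index where it appears; count 0 is seeded at index -1 so subarrays starting at index 0 are covered:

```swift
// Finds the maximum length of a contiguous subarray with equal numbers of 0s and 1s.
func findMaxLength(_ nums: [Int]) -> Int {
    // First index at which each running count was seen; count 0 occurs "before" index 0.
    var firstIndexOfCount: [Int: Int] = [0: -1]
    var count = 0
    var maxLength = 0
    for (index, num) in nums.enumerated() {
        count += (num == 1) ? 1 : -1
        if let firstIndex = firstIndexOfCount[count] {
            // Same count seen before: the subarray in between has zero net increase.
            maxLength = max(maxLength, index - firstIndex)
        } else {
            firstIndexOfCount[count] = index
        }
    }
    return maxLength
}
```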
Time complexity: O(N) as we loop through the array. The lookup time for the Dictionary (Similar to HashMap in Java) is O(1).
Perform String Shifts
Description: You are given a string containing lowercase English letters, and a matrix shift where shift[i] = [direction, amount]:
- direction is 0 for a left shift or 1 for a right shift.
- amount is the amount by which the string is to be shifted.
Example
Input: s = "abcdefg", shift = [[1, 1], [1, 1], [0, 2], [1, 3]]
Output: "efgabcd"
Explanation:
[1, 1] means shift to the right by 1. "abcdefg" -> "gabcdef"
[1, 1] means shift to the right by 1. "gabcdef" -> "fgabcde"
[0, 2] means shift to the left by 2. "fgabcde" -> "abcdefg"
[1, 3] means shift to the right by 3. "abcdefg" -> "efgabcd"
The solution for this should be simple. Since left shifts offset right shifts, we can reduce all the input shifts to a single net direction and amount before shifting the string, considering that string shifting is a relatively expensive operation. Moreover, if the net shift amount is a multiple of the string length, we don’t need to shift at all, because the string would eventually return to its original form. With these two mechanisms, we are ready for the solution.
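A sketch of that two-step approach (the function name follows LeetCode’s stringShift; the guard against an empty string is a defensive assumption, since the constraints keep it non-empty):

```swift
// Reduces all shifts to a single net amount, then rotates the string once.
func stringShift(_ s: String, _ shift: [[Int]]) -> String {
    // Positive net means a right shift, negative means a left shift.
    var net = 0
    for move in shift {
        net += (move[0] == 1) ? move[1] : -move[1]
    }
    let chars = Array(s)
    let n = chars.count
    guard n > 0 else { return s }
    // Normalize into [0, n): a right shift by k moves the last k characters to the front.
    let k = ((net % n) + n) % n
    guard k != 0 else { return s }   // net shift is a multiple of the length
    return String(chars.suffix(k) + chars.prefix(n - k))
}
```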
Time complexity: O(N + M), where N is the length of the shift array and M is the length of the string. We spend O(N) processing the input shifts. After the total shift is derived, we spend O(M) formatting the output string: to be specific, it takes O(M) to split the string into two parts and another O(M) to merge them, for a sum of O(2M). Ignoring the constant, we note it as O(M).
Nice work! Another week well spent on coding. We have completed the second-week code challenge. If you are looking for more resources for the code challenge, you can download them from my GitHub; it will be updated until the end of the 30-day challenge. Also, feel free to contribute to the repo or to send me an email if you have other questions.