Leet Code: Two City Scheduling

Adam Garcia · Published in The Startup · Jun 4, 2020 · 6 min read

Background

I have been intrigued by Leet Code-style questions for a while, and I have been very inspired by channels like Back to Back SWE, which go beyond the code to develop a deep understanding of what is actually happening.

This is my attempt to give back.

Problem Statement

Two City Scheduling — Prompt

The Two City Scheduling problem requires that you fly 2N people out to 2 different cities such that each city receives exactly N people. A different cost is incurred for sending each person to each city, so you want to design an algorithm that finds the minimum total cost.

Approach

While Leet Code defines this problem as Easy, it was anything but that for me initially.

After stumbling around for a while, I realized that there are two main approaches to this problem:

  1. Greedy: Sort the people by the difference between the cost of sending them to city A and the cost of sending them to city B, then send the first half of this sorted array to city A and the second half to city B (a short sketch of this idea appears just below).
  2. Dynamic programming: This problem can be understood as making a choice to send a person to either city A or city B. After making one of those choices, you need to send the rest of the people to cities in an optimal manner.

This post will be discussing the latter of the two approaches.
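For completeness, here is a minimal sketch of the greedy idea before we move on (the function name and structure are my own; the post itself focuses on the DP approach):

from typing import List

def two_city_greedy(costs: List[List[int]]) -> int:
    # Sort people by how much cheaper city A is than city B for them.
    ordered = sorted(costs, key=lambda c: c[0] - c[1])
    n = len(ordered) // 2
    # First half (city A is relatively cheap) goes to A, second half goes to B.
    return sum(c[0] for c in ordered[:n]) + sum(c[1] for c in ordered[n:])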

Conceptual Analysis

Overview

At the heart of dynamic programming is a recurrence relation (RR), which describes the solution to a problem in terms of the following steps:

  1. Make a choice in how to act at the present moment.
  2. Aggregate the result of making this choice to the result of making all previous choices.
  3. Perform steps 1 and 2 over the entire decision space and return the minimum of these decisions. This then becomes the optimal choice.

I think that people get caught up in figuring out what the memoization table (DP table) should look like. However, it is as simple as identifying all of the variables you have control over.

So, applied to this problem, the quantity we track is the minimum cost of putting ‘x’ people into city A and ‘y’ people into city B.

That is what the dp table represents. Because I have two variables of control, my table is a 2D matrix. I like to give my DP table an appropriate name which helps me to remember what my goal actually is.

minCost[x][y] ==>  the minimum cost of sending ‘x’ people to city A and ‘y’ people to city B.

Note that minCost[N][N] is actually the answer to the problem: what is the minimum cost of sending N people to city A and N people to city B?
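As a concrete (illustrative) example, take costs = [[10, 20], [30, 200], [400, 50], [30, 20]], so N = 2. The cheapest split sends the first two people to city A (10 + 30) and the last two to city B (50 + 20), so minCost[2][2] = 110.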

Recurrence Relation

So now let’s look at the recurrence relation (RR) for this problem. The RR expresses how a current state can be computed from past results.

At the current iteration (sending person i into a city), I can make two choices. For a given person, I can:

  1. Send them to city A. The cost incurred by this decision is costs[i][0]. In that case, I still need to place x-1 more people into city A and y people into city B.
  2. Send them to city B. The cost incurred is costs[i][1]. In that case, I still need to place x people into city A and y-1 more people into city B.

In terms of the recurrence relation, these two choices translate into the following:

  1. costs[i][0] + dp[x-1][y]
  2. costs[i][1] + dp[x][y-1]

We see that the index i is really just a notational convenience: the person being placed in the computation of dp[x][y] is the (x + y)-th person overall.

So i = x + y, and that person’s row in the 0-indexed costs array is costs[x + y - 1]. For example, when computing dp[1][1], we are placing the 2nd person, whose costs are in costs[1].

Because you want to minimize the incurred cost, the recurrence relation takes the minimum of these two choices:

dp[x][y] = min(
    costs[x + y - 1][0] + dp[x-1][y],  # Choosing city A
    costs[x + y - 1][1] + dp[x][y-1]   # Choosing city B
)

Code Generation

Now that we have the Recurrence Relation out of the way, all we need to do now is fill the dp table accordingly.

There are many ways to approach this algorithm. The two primary ways are:

  1. Graph traversal algorithm (DFS/BFS)
  2. Iteratively building up the solution space, towards your solution which is dp[N][N]

However, in a dynamic programming / recursive approach, the DP table alone is not enough. You also need base cases: the set of minimal decisions that have a concrete answer. From these concrete values, the recurrence relation can build up the rest.

The immediately obvious base cases are the following:

  1. dp[0][0]

The total cost of sending 0 people to city A and 0 people to city B. Of course, the cost is 0! (← That was an exclamation mark out of excitement :) , not 0! which is 1)

  2. dp[x][y]

The whole point of using a DP table is to memoize already computed results. This is done so that unneeded work is avoided, and the computational complexity is minimized. So, if dp[x][y] has already been computed, just return the result of this.

  3. x == 0 or y == 0

While this isn’t a base case, it is important to note that the decision space (of size 2) only applies if I can still send a person to either city A or city B.

If y == 0, that means that nobody is sent to city B, which means that dp[x][y] comes entirely from the formula above for placing someone in city A. The same holds true, with the roles swapped, if x == 0. As one can see, this check is necessary so we do not try accessing dp[-1][y] or dp[x][-1]. If y == 0, then we cannot make the choice of placing someone in city B!

dp[x][y] = min(
    costs[x + y - 1][0] + dp[x-1][y],  # Choosing city A
    costs[x + y - 1][1] + dp[x][y-1]   # Choosing city B
)

Code

Remember, all we are doing is building up this DP table and then returning dp[N][N]. There are two ways of doing this:

  1. Recursion using a DFS algorithm.
  2. Iteration

Depth First Search Approach

from typing import List

class Solution:
    def minCostHelper(self, costs, x, y, minCost):
        # minCost[x][y]: the minimum cost of sending x people to city A and y people to city B.
        if x == 0 and y == 0:
            return 0
        # Memoization: if this state has already been computed, reuse it.
        if minCost[x][y]:
            return minCost[x][y]

        def sendToCityA():
            minCost[x][y] = self.minCostHelper(costs, x - 1, y, minCost) + costs[x + y - 1][0]
            return minCost[x][y]

        def sendToCityB():
            minCost[x][y] = self.minCostHelper(costs, x, y - 1, minCost) + costs[x + y - 1][1]
            return minCost[x][y]

        # If y == 0, everyone remaining must go to city A (and vice versa).
        if y == 0:
            return sendToCityA()
        if x == 0:
            return sendToCityB()

        minCost[x][y] = min(sendToCityA(), sendToCityB())
        return minCost[x][y]

    def twoCitySchedCost(self, costs: List[List[int]]) -> int:
        N = len(costs) // 2
        minCost = [[0] * (N + 1) for x in range(N + 1)]

        return self.minCostHelper(costs, N, N, minCost)
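As a quick sanity check on the recursive version, here is a hypothetical usage example using the same illustrative input as above:

costs = [[10, 20], [30, 200], [400, 50], [30, 20]]  # hypothetical example input
print(Solution().twoCitySchedCost(costs))  # expected output: 110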

Iterative Approach

from typing import List

class Solution:
    def twoCitySchedCost(self, costs: List[List[int]]) -> int:
        N = len(costs) // 2

        # minCost[x][y]: the minimum cost of sending x people to city A and y people to city B.
        minCost = [[0] * (N + 1) for x in range(N + 1)]

        for x in range(N + 1):
            for y in range(N + 1):
                if x == 0 and y == 0:
                    minCost[0][0] = 0
                    continue

                def sendToCityA():
                    return minCost[x-1][y] + costs[x+y-1][0]

                def sendToCityB():
                    return minCost[x][y-1] + costs[x+y-1][1]

                # x == 0 means I can only send to city B
                if x == 0:
                    minCost[x][y] = sendToCityB()
                    continue

                # y == 0 means I can only send to city A
                if y == 0:
                    minCost[x][y] = sendToCityA()
                    continue

                minCost[x][y] = min(sendToCityA(), sendToCityB())

        return minCost[N][N]

That is it. The Iterative or DFS approach is just a way to build up this DP table.
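Either way, we fill an (N + 1) × (N + 1) table and do a constant amount of work per cell, so both versions take O(N²) time and O(N²) space.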

I personally prefer the iterative approach, but I hope you can see that in both cases we are simply computing the optimal solution by breaking the problem down into simpler terms: I focus only on the decision at the present moment and let computer memory remember what is in the past.

I hope this helps!
