Q#94: Coins from a row — a game


Note: I believe it should be free and readily available for everyone to gain value from data, hence I pledge to keep this series free regardless of how large it grows.

Suppose you are creating an allocation game with the following parameters:

  • There are two players sitting across a row of bills with varying dollar values
  • In each turn, one of the players will select either the first or last bill from the row, receiving the value shown
  • You can assume that there is an even number of bills (meaning each player receives the same number of bills)
  • In a given turn, the player takes whichever end bill maximizes their total return

Given this information, and a set of bills in an array, write code to simulate the projected returns of this game for each player.

For example:

bills = [1, 1, 2, 20]

# Output =

# Player one: (20 + 1) = 21

# Player two: (1 + 2) = 3

# P1 chooses 20, P2 left with 1, P2 chooses 2, P1 left with 1
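The turn-by-turn trace above can be reproduced with a short simulation — a sketch under the assumption that each player simply grabs the larger of the two end bills on their turn, which happens to be the best play for this particular row:

```python
from collections import deque

def simulate_greedy(bills):
    # Greedy play: on each turn the mover grabs the larger end bill.
    row = deque(bills)
    totals = [0, 0]  # totals[0] -> player one, totals[1] -> player two
    turn = 0
    while row:
        if row[0] >= row[-1]:
            pick = row.popleft()
        else:
            pick = row.pop()
        totals[turn % 2] += pick
        turn += 1
    return tuple(totals)

print(simulate_greedy([1, 1, 2, 20]))  # -> (21, 3)
```

A deque makes both end removals O(1), so the whole simulation runs in linear time.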

TRY IT YOURSELF

ANSWER

Alright, this is somewhat of a niche problem. Simulations are useful for evaluating possible scenarios and outcomes, which makes them valuable to Data Scientists working in business settings. The first step in solving this problem is to define the allocation game.

The game involves selecting bills from a row of varying dollar values, with each player trying to maximize their returns. The game follows these rules:

  1. There are two players sitting across a row of bills with varying dollar values.
  2. In each turn, one of the players will select either the first or last bill from the row, receiving the value shown.
  3. We can assume that there are an even number of bills, meaning each player receives the same number of bills.
  4. In a given turn, the player takes whichever end bill maximizes their total return.

Let's write Python code to simulate this game and calculate the projected returns for each player.

def max_returns(bills):
    n = len(bills)

    # prefix[k] holds the sum of bills[:k], so any sub-row total is O(1).
    prefix = [0] * (n + 1)
    for k in range(n):
        prefix[k + 1] = prefix[k] + bills[k]

    # dp[i][j] is the maximum total the player moving first can collect
    # from the sub-row bills[i..j], assuming both players play optimally.
    dp = [[0] * n for _ in range(n)]

    # Base case: with a single bill, the mover simply takes it.
    for i in range(n):
        dp[i][i] = bills[i]

    # Fill the table over increasing sub-row lengths.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = prefix[j + 1] - prefix[i]
            # Take the left bill: the opponent then moves first on bills[i+1..j]
            # and collects dp[i+1][j]; we get whatever is left of that sub-row.
            take_left = bills[i] + (total - bills[i] - dp[i + 1][j])
            # Take the right bill: symmetric reasoning on bills[i..j-1].
            take_right = bills[j] + (total - bills[j] - dp[i][j - 1])
            dp[i][j] = max(take_left, take_right)

    # Player 1 moves first on the whole row; Player 2 gets the remainder.
    player1_return = dp[0][n - 1]
    player2_return = prefix[n] - player1_return

    return player1_return, player2_return

# Example usage:
bills = [1, 1, 2, 20]
player1_return, player2_return = max_returns(bills)

# Print the results
print("Player one:", player1_return)
print("Player two:", player2_return)

In the code above, we use dynamic programming to calculate the projected returns under optimal play. The max_returns function takes a list of bills as input and returns the projected returns for Player 1 and Player 2: dp[i][j] holds the maximum total the player moving first can collect from the sub-row bills[i..j], so dp[0][n-1] is Player 1's return and Player 2 receives the rest of the pot. In the example provided, Player 1 is projected to receive 21 dollars, while Player 2 is projected to receive 3 dollars.
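One way to gain confidence in the table-based solution is to cross-check it with a plain memoized recursion over the same recurrence (a sketch; the row [8, 15, 3, 7] is my own illustrative example, not part of the question). It also shows why grabbing the larger end bill every turn can fall short: on that row the first player can guarantee 22 by taking the 7 first, whereas opening with the 8 yields only 15.

```python
from functools import lru_cache

def optimal_first_player(bills):
    # Max total the first mover can guarantee when both players play optimally.
    @lru_cache(maxsize=None)
    def best(i, j):
        # best(i, j): optimal take for the player moving first on bills[i..j].
        if i > j:
            return 0
        total = sum(bills[i:j + 1])
        # After we take an end bill, the opponent moves first on the rest;
        # we receive whatever the opponent leaves behind.
        take_left = bills[i] + (total - bills[i] - best(i + 1, j))
        take_right = bills[j] + (total - bills[j] - best(i, j - 1))
        return max(take_left, take_right)

    return best(0, len(bills) - 1)

print(optimal_first_player([1, 1, 2, 20]))  # -> 21
print(optimal_first_player([8, 15, 3, 7]))  # -> 22
```

The recursion is O(n^2) states thanks to lru_cache, so it is practical for spot-checking the table on small rows.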
