Optimistic Rollup Is Less Secure Than You Think — A Game-Theoretic Approach for a More Verifiable Rollup

Aiden Park
Tokamak Network
May 20, 2021 · 51 min read

Intro

Special thanks to Vitalik Buterin, Ed Felten for their insight, and Kevin, Lakmi for feedback and reviews.

Optimistic rollup (hereinafter referred to as rollup) is a layer 2 solution that burst onto an Ethereum community struggling with the problems of Plasma. The biggest advantage of rollup is that, unlike Plasma, fraud proofs ensure security for all users without requiring them to monitor the rollup, as long as there is at least one honest verifier. This was a major factor in gaining the full support of the Ethereum community, despite the downside that rollup sacrifices significant scalability compared to Plasma.

Rollup is a very attractive layer 2 that can operate safely with literally only one honest verifier. But since we, living in the blockchain world, are very suspicious people, the question arises whether we can be sure that rollup truly functions safely. In fact, these questions are not new. Under the name of the Verifier's Dilemma, researchers have long discussed whether verification is actually carried out in the various settings that require it. Similar discussions have taken place around rollup, and opinions fall largely into two camps. One argues that we don't have to consider the verifier's dilemma 'at all' in rollup, while the other argues that we must take it into account for safe long-term operation.

Which opinion is correct? From the point of view of a user, not a designer, can we really use rollup safely? What should we do if it is not safe? This article addresses all of these questions and suggests a concrete long-term direction for ensuring the security of rollup.

All of the discussions covered in this article can be applied to any scalability solution that uses Fraud Proof, as well as any Optimistic rollup.

Verifier’s Dilemma

There are other concepts that must be addressed before tackling the issue of rollup security, chief of which is the verifier's dilemma mentioned above. In general, the verifier's dilemma is the family of problems that arise when one participant performs some operation on a blockchain and another participant verifies that work to keep the network or protocol safe, but the utility of verification is unclear.

The concept of the verifier's dilemma, first discussed in this article, can be explained as follows. In blockchains such as Bitcoin and Ethereum, when the miner who mines the next block propagates it to other nodes in the network, all other nodes verify the validity of the block. This is the original intention of PoW consensus, and for the safety of the network it is desirable that every node that has synchronized the block verifies it. However, since the computing power of individual nodes is finite, it might be more profitable to allocate it to mining the next block instead of verifying complex transactions. Of course, a node that skips verification takes the risk of building on an invalid block that the network will not accept. Individual nodes therefore constantly face the question of whether it is more advantageous to verify, or to focus only on mining without verifying.

This verifier's dilemma applies in a similar way to rollups. In rollup, after the sequencer executes the transactions in Layer 2, it only submits a state root and the corresponding transaction data to Layer 1.

The sequencer or operator is responsible for determining the order of the users’ transactions in the rollup, and submitting them to Layer 1. In this article, we will use the term sequencer.

The rollup prevents the sequencer from submitting incorrect state values using fraud proofs and a Dispute Time Delay (DTD). Anyone inside or outside the rollup can be a verifier, and during the DTD they can prevent invalid states from being finalized with a fraud proof.

If fraud is proven by the verifier, the sequencer's deposit is rewarded entirely or partially to the verifier. However, one problem arises here. What if the sequencer always submits correct values? No matter how much verification work the verifier performs, he can never prove any fraud and will soon lose the incentive to verify. One might say this was the original intention, since for the security of the rollup the sequencer should always submit only correct values to Layer 1, but let's see why this is a problem.

  1. The verifier always verifies the state root submitted by the sequencer.
  2. If the sequencer submits an incorrect value, the deposit will be slashed by the fraud proof, so it will continue to submit only the correct state roots.
  3. The verifier cannot be rewarded for verification work because the sequencer continues to submit only the correct values. Therefore, he would stop further verification.
  4. Since the verifier has stopped verification, the sequencer has a strong incentive to attack the rollup and take over the assets of the users.

In short, since the economic incentive for verification depends strongly on the sequencer's attack behavior, the less the sequencer attacks (i.e. the safer the rollup is), the less incentive there is to verify. This is paradoxical: the safer the rollup is, the more dangerous it becomes, and the more dangerous it is, the safer it becomes. By this logic, the verifier's dilemma appears, at a glance, to be a factor that could seriously jeopardize the security of the rollup. Is it really such a threat? Let's dig deeper in the next section.

Super-Simple Model

To understand and solve a complex concept or problem, the best approach is always to simplify it first. Therefore, before discussing the verifier's dilemma of the rollup in earnest, we will first look at the Super-Simple Model, a very simplified version of it.

The Super-Simple Model is based on Ed Felten’s article The Cheater Checking Problem: Why the Verifier’s Dilemma is Harder Than You Think.

Assumption: For all models covered in this article, it is assumed that the rationality of all participants or players is common knowledge. This means that the following two infinite propositions are true.

‘I know that you are rational.

You know that I know that you are rational.

I know that you know that I know that you are rational. … ’

‘You know that I am rational.

I know you know that I’m rational.

You know that I know that you know that I’m rational. … ..’

First, let's consider a very simple system. There are two players in this system: the Asserter (A), who simply claims true or false, and the Verifier (V), who can either verify A's assertion by paying the cost of verification, or do nothing on the assumption that A asserted truthfully. If A's claim is false and V succeeds in verifying this, A's deposit is rewarded to V.

Conversely, if V does not accuse A despite a false assertion, V's deposit is rewarded to A. In other words, the purpose of this system is to ensure, through verification, that the accepted assertion is always true even if A sometimes asserts falsely.

There are two threats we need to consider in order to achieve this goal.

  1. Bribery attacks
  2. Verifier’s laziness

A bribery attack means that A bribes V in advance not to verify a false claim. It can be resolved relatively easily by increasing the value of the collateral of A and V, or by increasing the number of stakeholders.

The verifier’s laziness literally means that V decides not to verify A’s claim. If V does not verify the claims, A can continue to claim falsehood as much as possible. Unlike a bribery attack, the verifier’s laziness is difficult to solve by simply increasing the value of the collateral or the number of stakeholders. Let’s dive deep into the reasons below.

In order to prevent the verifier’s laziness, first it is necessary to understand how the verifier chooses the verification or non-verification strategy. The expected payoff of each strategy under the following conditions is as follows:

  • R = A’s deposit; rewarded to V if A claims false and V succeeds in verification.
  • L = V’s deposit; awarded to A if A claims false and V fails to verify.
  • X = Probability of attack by A
  • C = Verification cost
  1. Expected payoff of verification: R*X-C
  2. Expected Payoff of non-verification: -L*X

We can observe that if V’s verification payoff is greater than the non-verification payoff (R*X - C > -L*X), V will always verify A’s assertion.

In other words, if X > C/(R+L), we can prevent the verifier’s laziness. However, if X < C/(R+L), V will not verify A’s claim (If the two terms are the same, the expected payoff becomes indifferent. i.e. V may or may not verify A’s claim).
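The threshold above is easy to check numerically. Below is a minimal sketch in Python; the concrete values for R, L, and C are illustrative assumptions, not parameters of any real system:

```python
# Super-Simple Model: when does verification beat non-verification?

def verify_payoff(R, X, C):
    """Expected payoff of verifying: win reward R with attack probability X, minus cost C."""
    return R * X - C

def skip_payoff(L, X):
    """Expected payoff of not verifying: lose deposit L with probability X."""
    return -L * X

def laziness_threshold(R, L, C):
    """Attack probability below which the verifier prefers not to verify."""
    return C / (R + L)

R, L, C = 100, 100, 1           # illustrative assumption
threshold = laziness_threshold(R, L, C)   # 1 / 200 = 0.005

# Above the threshold, verification pays; below it, laziness wins.
assert verify_payoff(R, 0.01, C) > skip_payoff(L, 0.01)     # X = 0.01  > 0.005
assert verify_payoff(R, 0.001, C) < skip_payoff(L, 0.001)   # X = 0.001 < 0.005
```

The point of the sketch is that A, who controls X, only needs to keep X below `laziness_threshold` to make non-verification rational for V.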

It seems like what needs to be done is to make X bigger than C/(R+L). Then V will always verify A’s claims and this simple system will always remain secure!

However, the problem remains that X means A’s attack probability, and the value of X is determined by A. If A properly adjusts the frequency and timing of attacks so that X is sufficiently low, V will choose not to verify because the verification utility is lower than the non-verification utility. This entails that the security of this simple system could be compromised.

One could argue that “the system can be more secure if you increase R+L to the maximum, and lower the effective X value enough.” In other words, increase the capital requirement of A and V so that they must deposit more on this system. However, this is clearly limited in the following two aspects.

  1. No matter how high the R and L values are, A can calculate the effective attack probability as soon as the values (R, L) are determined in advance. This allows A to still suppress V’s incentive to verify.
  2. Raising capital requirements will limit participation, since it increases the barrier to entry. This will soon have a very negative impact on the introduction and expansion of the system.

Another idea is that increasing the number of verifiers can solve this dilemma. Simply increasing the number of verifiers is very helpful in preventing bribery attacks; however, it has rather negative effects on solving the verifier’s laziness.

The reason it assists in preventing bribery attacks is that the asserter (A) must pay each verifier (V) a reward greater than R in order to bribe the verifier. If the number of verifiers is N, and the total profit obtained by claiming falsehood is M, then all verifiers can be bribed only if M/N > R. Therefore, the larger the N, the more difficult it is for the asserter to carry out a bribery attack.
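The bribery condition can be sketched as follows; the values chosen for M, N, and R are purely illustrative assumptions:

```python
# Bribery check: with N verifiers, an asserter whose fraud profit is M can
# only bribe everyone if the per-verifier bribe exceeds the reward R.

def bribery_feasible(M, N, R):
    """True if the asserter can profitably pay each of N verifiers more than R."""
    return M / N > R

assert bribery_feasible(M=1000, N=5, R=100)        # 200 per verifier > 100: bribery pays
assert not bribery_feasible(M=1000, N=20, R=100)   # 50 per verifier < 100: too many verifiers
```

Doubling N halves the bribe budget per verifier, which is why a large, open verifier set makes bribery attacks impractical.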

Theoretically, rollup is secure as long as there is even a single verifier in it; on the other hand, since it can be vulnerable to bribery attacks when there are only a few active verifiers, it is very important to have as many active verifiers as possible.

Conversely, increasing the number of verifiers is not very helpful in preventing the verifier’s laziness, because as the number of verifiers grows, the expected payoff of non-verification stays the same while the expected payoff of the verification strategy decreases in proportion to the number of verifiers. For example, if there are two verifiers and both succeed in verifying, the reward R is divided between them at some ratio (K), so each receives a smaller amount (R/K) than the full reward R a lone verifier would get.

According to this simple model, the verifier’s dilemma is a fairly complex problem, and solving it seems quite difficult. However, this model doesn’t exactly imitate the rollup. Could this be solved in rollup? Let’s take a closer look at this in the next chapter.

Super-Simple Model in Optimistic Rollup

Based on the simple model discussed above, let’s construct a Super-Simple Model in the rollup and see which problems arise in the rollup regarding the verifier’s dilemma. In fact, there won’t be many changes from the previous model. The asserter becomes a sequencer (S), and the verifier (V) remains the same. In this case, it is assumed that the verifier is the stakeholder of the rollup. In other words, a verifier is someone who has certain economic/non-economic assets in the rollup, or who periodically obtains certain economic/non-economic returns through the rollup.

At this time, the expected payoff for verification and non-verification of V under the following conditions are as follows.

  • R = S’s deposit; V is rewarded with R when V succeeds in verification.
  • L = V’s assets deposited in the rollup; S is rewarded with L if V fails to verify.
  • X = Probability of attack by S
  • C = Verification cost
  • F = The revenue that V earns per verification unit (e.g. when V receives a constant revenue for each transaction T in the rollup, this is called F)

Here, R, L > F > C > 0.

Also, for convenience, it is assumed that F is not accumulated in L. These assumptions will apply equally to all models that will be covered later.

  1. Expected payoff of verification: R*X + F -C
  2. Expected payoff of non-verification: -L*X + (1-X)F

Note that even without verification, the verifier can still earn F if the sequencer does not attack.

In this case, just like in the previous Super-Simple Model, if the verification payoff is greater than the non-verification payoff, the verifier will always have an incentive to verify. If the sequencer’s attack probability is as follows, the verifier will always choose the verification strategy.

  • X > C/(R+L+F)

What’s different from the previous model is that only F is added to the denominator. This may have the effect of slightly reducing the sequencer’s attack probability, but the fundamental problem remains the same.

In other words, it is visible that the verifier’s strategic choice still depends on the sequencer’s attack probability, and this does not change much even when considering the new variable, the constant return (F) the verifier gets per verification unit. This is because if the sequencer attacks very intermittently or does not attack, the verifier can continue to earn full returns without paying the cost of verification.
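The rollup version of the model can be checked the same way. The sketch below uses the illustrative numbers that also appear in the examples later in the article (R = 100, L = 100, F = 5, C = 1); they are assumptions, not real-world parameters:

```python
# Super-Simple Model in rollup: the per-unit revenue F moves the threshold
# from C/(R+L) to C/(R+L+F), but only slightly.

def verify_payoff(R, X, F, C):
    """Verify: reward R with probability X, plus revenue F, minus cost C."""
    return R * X + F - C

def skip_payoff(L, X, F):
    """Skip: lose L if S attacks; still earn F whenever S does not attack."""
    return -L * X + (1 - X) * F

def threshold(R, L, F, C):
    """Attack probability below which skipping verification is rational."""
    return C / (R + L + F)

R, L, F, C = 100, 100, 5, 1
t = threshold(R, L, F, C)    # 1/205 ≈ 0.00488, barely below 1/200

X = 0.001  # below the threshold: laziness still wins despite F
assert verify_payoff(R, X, F, C) < skip_payoff(L, X, F)
```

Algebraically, verification beats non-verification exactly when (R+L+F)*X > C, so F only nudges the denominator: the sequencer retains full control over whether the inequality holds.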

How can we solve this problem? We have already seen that increasing R and L, i.e. raising capital requirements, has clear limits. One property we can lean on is that rollup maintains the security of the system as long as even one verifier performs verification correctly. If so, by extending the simple model to multiple verifiers, we could hope that at least one of them will verify. Would that ensure the security of the rollup?

Multiple Verifiers

Before we start increasing the number of verifiers (N), let’s begin with a very simplified model with only two verifiers (V1 and V2) and a sequencer (S). In this case, the expected payoff of verification and non-verification of each verifier can be notated as follows under the conditions below.

  • R = S’s deposit; V is rewarded with R when V succeeds in verification.
  • L = Assets of V(N) deposited in the rollup; S is rewarded with L when V fails verification. (It is assumed that the two verifiers have the same L.)
  • X = Probability of attack by S
  • Y = The probability that the opponent verifier will verify (e.g., in terms of V1, if V1 is sure that V2 will always verify, then Y = 1.)
  • C = Verification cost
  • F = Revenue V(N) receives per verification unit (assuming that the F of both verifiers is the same.)

Here, R, L > F > C > 0.

  1. Expected payoff of verification: R*X + F-C
  2. Expected payoff of non-verification: -L*X(1-Y) + F*(1-X(1-Y))

The expressions for verification and non-verification have become quite complex, but if you grasp what the newly added part (1-Y) means, you can understand it more intuitively. The most significant feature of the rollup is that even if you don’t verify the rollup on your own, your assets could be kept safe if someone else verifies it. In other words, if Y is 1 or close to 1, it means that the other verifier will verify S, which indicates that even if you do not verify it yourself and the sequencer always attacks (X = 1), all of your assets, and additional economic interests can be preserved.

If so, it is questioned whether either or both of the verifiers will verify. To explore this more intuitively, let’s use a very simple game theoretic approach.

The expected payoff of the two verifiers’ choice of verification and non-verification strategies can be expressed in the payoff matrix as follows.

It is assumed that the reward R obtained when both verifiers succeed in verification is divided by the same ratio.

The Nash equilibrium in this game will depend on the value of each variable. For example, if the attack probability of a sequencer is high, and the expected verification reward is even higher, the equilibrium is for both verifiers to verify (V, V). If the attack probability is very low and the expected loss of non-verification is not significant, the equilibrium is for both verifiers not to verify (NV, NV). In addition, when the attack probability is neither very high nor very low, the equilibrium is for only one verifier to verify (V, NV) (NV, V). This can be expressed as follows.

  1. (V, V): X > 2C/R
  2. (V, NV) (NV, V): 2C/R > X > C/(R+L+F)
  3. (NV, NV): X < C/(R+L+F)

Let’s look at each case through a simple example. In this example, it is assumed that the values of the variables other than X are as follows.

  • R = 100
  • L = 100
  • F = 5
  • C = 1

In this case, assume that the attack probability X is greater than 2C/R = 0.02, which means X = 0.05.

In this case, since the verification strategy for both verifiers is a dominant strategy, the equilibrium is (V, V). This is a very desirable situation, but since it is assumed that the sequencer’s attack probability is very high, in reality, this situation won’t occur easily.

Next, suppose that X is less than 2C/R = 0.02 and greater than C/(R+L+F) = 0.0049, which means X = 0.005.

Interestingly, when both verifiers choose the same strategy, they cannot maximize their payoffs; i.e. choosing different strategies is the equilibrium.

For example, if verifier 2 chooses the verification strategy, verifier 1 will choose the non-verification strategy and if verifier 1 chooses the verification strategy, verifier 2 will choose the non-verification strategy. With similar logic, if verifier 2 chooses the non-verification strategy, it is most reasonable for verifier 1 to choose the verification strategy and if verifier 1 chooses the non-verification strategy, then verifier 2 chooses the verification strategy.

In other words, there are two equilibria in this game, (NV, V) and (V, NV). So, which equilibrium is chosen in this game? This is determined by factors such as pre-commitment, and credibility. For example:

Let’s say that verifier 2 has chosen the non-verification strategy. Verifier 2 openly affirms to verifier 1, “I do not have the equipment required for verification, so I will not verify at all in the future.” Verifier 1, however, has all the required equipment. The most rational option for verifier 1 in this case is the verification strategy, which offers a relatively high expected reward for a small verification cost, rather than the non-verification strategy, which costs nothing at all. We can see that when X is neither high enough nor low enough, only one of the two verifiers will choose the verification strategy.

Finally, suppose that the X is less than C/(R+L+F) = 0.0049, which means X = 0.0001.

In this game, the equilibrium is (NV, NV) because the non-verification strategy is the dominant strategy for both verifiers. As we continue to emphasize, the sequencer can acquire information about each variable in advance, so that the effective attack probability can be easily calculated. Therefore, the sequencer will attack with the probability that not even one verifier will choose a verification strategy. In other words, if the sequencer attacks, it will attack with a lower probability than C/(R+L+F).
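The three equilibrium regions above can be verified with a brute-force best-response check. The sketch below uses the example values from the text (R = 100, L = 100, F = 5, C = 1) and assumes, as stated earlier, that the reward is split evenly when both verifiers succeed:

```python
# Two-verifier game: for each candidate attack probability X, find which
# strategy profiles are Nash equilibria (True = verify, False = not verify).

R, L, F, C = 100, 100, 5, 1

def payoff(me_verifies, other_verifies, X):
    """Expected payoff of one verifier, given both players' choices."""
    if me_verifies:
        share = R / 2 if other_verifies else R   # reward split if both verify
        return share * X + F - C
    # Not verifying: losses occur only if the sequencer attacks AND
    # the other verifier also skips verification (Y = 0).
    exposed = X * (0 if other_verifies else 1)
    return -L * exposed + F * (1 - exposed)

def nash_equilibria(X):
    profiles = []
    for a in (True, False):
        for b in (True, False):
            a_ok = payoff(a, b, X) >= payoff(not a, b, X)   # a has no better deviation
            b_ok = payoff(b, a, X) >= payoff(not b, a, X)   # b has no better deviation
            if a_ok and b_ok:
                profiles.append((a, b))
    return profiles

assert nash_equilibria(0.05) == [(True, True)]                   # X > 2C/R = 0.02
assert nash_equilibria(0.005) == [(True, False), (False, True)]  # middle region
assert nash_equilibria(0.0001) == [(False, False)]               # X < C/(R+L+F) ≈ 0.0049
```

The three assertions reproduce the three cases from the worked examples: both verify, exactly one verifies, and neither verifies.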

In summary, even in a model in which the number of verifiers is increased to two, each verifier chooses the verification or non-verification strategy according to the attack probability of the sequencer, and as the attack probability decreases, the number of verifiers who choose the verification strategy decreases. This does not change considerably even if the number of verifiers is increased to N. As the number of verifiers increases, the amount of expected verification rewards will decrease, and even if the probability of attack is not very low, the number of verifiers who continue to choose the verification strategy will decrease because only one verifier who chooses the verification strategy can protect all other users’ assets and interests. In addition, when the probability of attack is very low, even one verifier will not choose the verification strategy.

This is a very shocking result. Regardless of the number of verifiers, the more safely the rollup operates over the long term, the more the number of active verifiers will decrease, converging to zero.

Is There No Verifier’s Dilemma in Optimistic Rollup?

Despite the security issues arising from the verifier’s dilemma discussed above, there is an argument that the verifier’s dilemma in the rollup is a trivial issue and that no additional mechanism is needed to deal with it. Some parts of this argument are valid, and some are not. In general, the rationale behind the claim that the verifier’s dilemma need not be considered in rollup can be summarized into four main reasons.

  1. Token Holder
  2. Dapp Builder — We can substitute it with all participants who earn a certain amount of revenue (e.g Yield farming) per L2 transaction.
  3. Altruist
  4. Fast Withdrawal

Let’s examine each of the reasons above in order and see whether they actually help to solve the verifier’s dilemma.

Token Holder

The first reason it’s not necessary to take into account the verifier’s dilemma in the rollup is that users with many tokens (or assets) in Layer 2 will have a high incentive to perform verification.

However, users with many tokens are basically just verifiers with a relatively high L in the simple model discussed above. Therefore, this has no other meaning than slightly lowering the sequencer’s attack probability.

One thing to note here again is that simply increasing L, which is the amount of assets deposited by the verifier in the rollup, or increasing the sequencer’s deposit R, is not very helpful in solving the verifier’s dilemma.

DApp Builder

Another reason given is that if a DApp operates on a rollup, the rollup must run safely to ensure continuous profits for the DApp operator, so the DApp’s stakeholders will have a high incentive to verify. However, this was also discussed previously: even if the revenue (F) per verification unit is high, it can be earned safely regardless of verification as long as the sequencer does not attack; i.e. the verification strategy is still chosen according to the sequencer’s attack probability. In other words, even though the builder of a DApp makes huge profits by operating a large DApp, this does not guarantee that the builder will always verify the rollup.

Altruist


The next reasoning is altruism. Not everyone in this world is always selfish and makes choices that maximize only their own interests. If so, numerous actions such as philanthropic donations and charities won’t exist (of course, this can also be explained as having a greater non-economic utility than the economic cost of donating or serving).

In short, the point of the argument is that even though verification is an economic loss, someone altruistic in this ecosystem will verify the rollup anyway. For example, there are people who earn no mining rewards on the Bitcoin or Ethereum network, yet store all the block data and propagate it to other nodes. They receive no economic benefit from this behavior; they do it altruistically.

However, this also can lead to an issue. The fact that the number of Ethereum archive nodes is too small is a problem that has been frequently raised. As Ethereum blocks increase, state data continues to grow at a rapid pace, and in the long run it is unsustainable to simply expect altruistic participants to maintain archive nodes without any incentives.

While this seems to make sense at first glance, it should be noted that we are searching for an approach to resolve or mitigate the verifier’s dilemma. In other words, altruistic verifiers are very helpful from the point of view of the entire network when the rollup is actually operated, but when designing any mechanism or tool to ensure the security of the network, they can neither be the rationale nor the basis of that mechanism.

Let’s assume you are the architect of Ethereum. How would you respond if someone asked you: what can we do when the incentive to record and store block data is not clear? If the answer is that there are altruistic people in the world and they will somehow store all the data, it would not be a convincing answer.

Vitalik Buterin also argued in his article that we need more explicit reasoning to design verification incentives for rollup as mentioned below:

Auditing incentives — how to maximize the chance that at least one honest node actually will be fully verifying an optimistic rollup so they can publish a fraud proof if something goes wrong? For small-scale rollups (up to a few hundred TPS) this is not a significant issue and one can simply rely on altruism, but for larger-scale rollups more explicit reasoning about this is needed.

Fast Withdrawal

The last claim is fast withdrawal. To understand why fast withdrawal is presented as the basis for the claim, that it is not necessary to consider the verifier’s dilemma, we first need to figure out what fast withdrawal is.

Fast withdrawal is an alternative way to withdraw tokens from L2 to L1 so that users can avoid the long withdrawal time caused by the DTD (Dispute Time Delay) in rollup. It works like this: when a user wants to withdraw some amount of tokens from layer 2 to layer 1, an intermediary buys those tokens on layer 2 and sends the corresponding amount, minus a fast withdrawal fee, to the user on layer 1.

Let’s take a closer look at fast withdrawal with an example.

  1. Alice wants to withdraw 10 ETH from L2 to L1 paying a fee of 0.1ETH.
  2. Alice transfers 10 ETH to the fast withdrawal market contract.
  3. Ivan confirms and verifies the withdrawal request and sends 9.9 ETH from L1 to Alice.
  4. After the DTD required for the withdrawal, Ivan acquires the 10 ETH held in the L2 market contract.
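The flow above can be sketched as simple arithmetic; the amounts are the ones from the example:

```python
# Fast withdrawal flow: Alice moves 10 ETH from L2 to L1 via Ivan,
# paying a 0.1 ETH fast withdrawal fee.

withdraw_amount = 10.0   # ETH Alice locks in the L2 market contract (step 2)
fee = 0.1                # fast withdrawal fee paid to Ivan

# Step 3: Ivan verifies the request and pays Alice on L1, net of the fee.
alice_l1_received = withdraw_amount - fee
assert round(alice_l1_received, 10) == 9.9

# Step 4: after the DTD elapses, Ivan claims the locked 10 ETH on L2.
ivan_l2_received = withdraw_amount
# Ivan's profit for fronting the L1 liquidity is exactly the fee.
assert round(ivan_l2_received - alice_l1_received, 10) == fee
```

The fee is Ivan's compensation for two things at once: locking up L1 capital for the length of the DTD, and bearing the risk that the withdrawal request turns out to be invalid.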

The important point here is that in the process of fast withdrawal, Ivan verifies whether Alice’s withdrawal is a valid request. This means Ivan actually verifies that the rollup’s state is correctly constructed. A problematic point in the verifier’s dilemma was that the verifier’s rewards depended on the sequencer’s attack probability. In the case of fast withdrawal, however, Ivan seems to have an incentive to verify regardless of the sequencer’s attack behavior!

Based on this, let’s define the expected payoff of Ivan according to the Super-Simple Model.

First, let’s assume there are already a number of general verifiers (N) other than Ivan in the rollup. Assuming the fast withdrawal fee is F, Ivan’s expected payoffs are as follows.

  1. Expected payoff of verification: R*X + F-C
  2. Expected payoff of non-verification: -L*X + (1-X)F

What’s interesting here is that, unlike for the other verifiers in the rollup, whether or not the remaining verifiers verify does not affect Ivan’s expected payoffs. The reason is the special nature of fast withdrawal: Ivan has to transfer his L1 tokens as soon as possible after receiving the fast withdrawal request from Alice. If Alice’s withdrawal request is later found to be invalid by another verifier during the DTD, Ivan cannot get his tokens back, because his L1 tokens have already been transferred to Alice. In other words, in order to safely process a fast withdrawal, Ivan must verify it himself rather than rely on other verifiers.

This leads to an interesting fact: if there are only two verifiers in the rollup, one a fast withdrawal intermediary like Ivan and the other a general verifier (e.g. a token holder or DApp operator), then Ivan is more likely to choose the verification strategy than the other verifier. Let’s take a closer look through the following payoff matrix.

As mentioned earlier, even if the opponent verifier chooses the verification strategy, when the sequencer attacks via a fast withdrawal it is impossible for Ivan to defend against the attack and protect his assets. Therefore, Ivan’s expected payoff of non-verification is -L*X + (1-X)F in both cases. This implies that when only one of the two participants, Ivan or the opponent verifier, needs to choose a verification strategy (similar to the situation where the sequencer’s attack probability is neither very high nor very low in the model with two general verifiers), Ivan has a higher incentive to choose the verification strategy than the opponent verifier.

More specifically, when 2C/R > X > C/(R/2+L+F), the equilibrium becomes (V, NV). Under the same conditions in the model of only two verifiers, the equilibrium was (V, NV) (NV, V). Specifically, in a situation where the probability of attack is neither high nor low, Ivan will always choose the verification strategy, and the opponent verifier will always choose the non-verification strategy. Let’s look at this through a specific example.

Under the conditions below, the payoff matrix of Ivan and the verifier is composed as follows.

  • R = 100
  • L = 100
  • F = 5
  • C = 1
  • X = 0.007

Had Ivan been a general verifier, in this case the equilibrium would have been (V, NV) (NV, V). However, since Ivan is the intermediary of fast withdrawal, the verification strategy is the dominant strategy. Therefore, the other verifier will always choose the non-verification strategy and (V, NV) will become the equilibrium.
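The asymmetry between Ivan and a general verifier can be checked directly. The sketch below uses the values from the example above (R = 100, L = 100, F = 5, C = 1, X = 0.007); the assumption that the reward is split evenly when both verify carries over from the earlier model:

```python
# Ivan (fast-withdrawal intermediary) vs a general verifier.
# Ivan cannot free-ride: his loss L is realized even if the other verifier
# catches the fraud, because his L1 tokens are already gone.

R, L, F, C, X = 100, 100, 5, 1, 0.007

def ivan_payoff(ivan_verifies, other_verifies):
    if ivan_verifies:
        share = R / 2 if other_verifies else R
        return share * X + F - C
    return -L * X + (1 - X) * F        # exposed regardless of the other verifier

def verifier_payoff(verifier_verifies, ivan_verifies):
    if verifier_verifies:
        share = R / 2 if ivan_verifies else R
        return share * X + F - C
    if ivan_verifies:                   # free-riding works for a general verifier
        return F
    return -L * X + (1 - X) * F

# Verification is a dominant strategy for Ivan...
assert all(ivan_payoff(True, o) > ivan_payoff(False, o) for o in (True, False))
# ...so the general verifier best-responds by not verifying: equilibrium (V, NV).
assert verifier_payoff(False, True) > verifier_payoff(True, True)
```

The only difference between the two payoff functions is the non-verification branch, yet it is enough to flip the mixed-region equilibrium from "either of us verifies" to "Ivan always verifies".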

In summary, it can be seen that under the same conditions, intermediaries of fast withdrawals have a higher incentive to verify than general verifiers. If someone does verify, it is very likely that it is an intermediary of fast withdrawal. However, this does not prove that Ivan will always choose the verification strategy.

Again, it is true that Ivan’s incentive to choose a verification strategy is relatively large, but in practice, the probability of the sequencer’s attack is taken into consideration when making this strategic choice. Therefore, what we need to focus on now is to find out exactly under what criteria a fast withdrawal intermediary will make such a strategic choice, and what factors are needed to make sure this intermediary chooses a verification strategy. Let’s look at this in detail in the next chapter.

Fast Withdrawal and Auditing Incentives

In this chapter, we will take a closer look at how Ivan chooses between a verification and non-verification strategy for rollup, which variables will affect that decision, and furthermore, what additional mechanisms are needed to ensure that Ivan will always verify the rollup.

Attack-Verify Game

Let’s compose a simple Attack-Verify Game between Ivan and the sequencer as follows to find out how Ivan chooses the verification or non-verification strategy.

The only players in this simple game are Ivan and the sequencer. The sequencer requests a fast withdrawal from Ivan, and can choose whether to request it for valid or invalid tokens. These are called the non-attack and attack strategies, respectively.

Ivan can choose either to verify and then transfer the tokens to the sequencer on L1, or to transfer them without verification. These are called the verification and non-verification strategies.

In this game, Ivan and the sequencer have different expected payoffs depending on each other’s strategic choices, which can be specifically represented by the following payoff matrix.

  • R: Deposit of the sequencer / Reward given to Ivan upon successful verification
  • L: Amount of fast withdrawal / Amount lost by Ivan upon successful attack by the sequencer
  • F: Fast withdrawal fee
  • C: Verification cost

The sequencer’s payoff under non-attack is 0 because the utility obtained from the fast withdrawal is assumed to equal the paid fee F. Even if this were set to a different number, the sequencer’s expected payoffs in (V, NA) and (NV, NA) would still be equal to each other, and the comparison against (V, A) and (NV, A) would barely change.

Assuming R,L > F > C, there is no pure strategy Nash equilibrium in this game. Let’s look at this in detail through an example.

  • R = 100
  • L = 100
  • F = 5
  • C = 1

Substituting these values into the payoff matrix table above will give the following result.

Neither Ivan nor the sequencer has a dominant or dominated strategy in this game, and there is no pure strategy Nash equilibrium, because each player’s best choice depends on the other’s. If the sequencer does not attack, it is advantageous for Ivan not to verify; if Ivan does not verify, it is advantageous for the sequencer to attack, and vice versa. This means that, depending on the situation, Ivan and the sequencer will each use both of their strategies, rather than just one.
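The absence of a pure strategy equilibrium can be confirmed mechanically by checking every strategy pair for a profitable unilateral deviation. A small sketch using the example payoffs:

```python
# Payoff matrix (Ivan, sequencer) with R=100, L=100, F=5, C=1.
# Rows: Ivan's strategy (V, NV); columns: sequencer's strategy (A, NA).
payoffs = {
    ("V", "A"): (104, -100),   # R+F-C for Ivan, -R for the sequencer
    ("V", "NA"): (4, 0),       # F-C, 0
    ("NV", "A"): (-100, 100),  # -L, L
    ("NV", "NA"): (5, 0),      # F, 0
}

def is_pure_nash(ivan, seq):
    other_ivan = "NV" if ivan == "V" else "V"
    other_seq = "NA" if seq == "A" else "A"
    # A pure Nash equilibrium: neither player gains by deviating unilaterally.
    return (payoffs[(ivan, seq)][0] >= payoffs[(other_ivan, seq)][0]
            and payoffs[(ivan, seq)][1] >= payoffs[(ivan, other_seq)][1])

equilibria = [(i, s) for i in ("V", "NV") for s in ("A", "NA") if is_pure_nash(i, s)]
print(equilibria)  # empty list: no pure strategy Nash equilibrium
```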

Mixed Strategy Nash Equilibrium

So, how do Ivan and the sequencer choose between these two strategies? First we need to understand what mixed strategy Nash equilibrium means.

In a mixed strategy Nash equilibrium, each player randomizes between their two strategies with a probability that leaves the opponent indifferent between the opponent’s own two strategies. In other words, each player finds the point at which the opponent’s expected payoff is the same no matter which strategy the opponent chooses.

If the probability of Ivan choosing the verification strategy is P, and the probability of the sequencer choosing the attack strategy is Q, then each probability (P, Q) according to the mixed strategy Nash equilibrium are as follows.

  • -R*P + L(1-P) = 0*P + 0*(1-P)
  • P = L/(R+L)
  • (R+F-C)Q + (F-C)(1-Q) = -L*Q + F(1-Q)
  • Q = C/(R+L+F)

By assigning the variable values used in the previous example, the mixed strategy Nash equilibrium can be calculated as below.

  • P = 1/2
  • Q = 1/205

To put it another way, Ivan chooses the verification strategy in half of the rounds on average. The sequencer chooses the non-attack strategy 204 times out of 205, and the attack strategy once. But our goal is to make Ivan choose the verification strategy 100% of the time, instead of only half of the time. We also want to reduce the attack probability from 1/205 toward 0. What steps must we take to achieve both goals?
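The equilibrium probabilities can be reproduced directly from the indifference conditions with a few lines of Python:

```python
from fractions import Fraction

R, L, F, C = 100, 100, 5, 1

# Sequencer indifferent between attack and non-attack:
#   -R*P + L*(1-P) = 0  =>  P = L / (R + L)
P = Fraction(L, R + L)

# Ivan indifferent between verification and non-verification:
#   (R+F-C)*Q + (F-C)*(1-Q) = -L*Q + F*(1-Q)  =>  Q = C / (R + L + F)
Q = Fraction(C, R + L + F)

print(P, Q)  # 1/2 1/205
```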

The most effective way is to increase L. Increasing L raises the probability that Ivan chooses the verification strategy (P = L/(R+L)) while reducing the probability that the sequencer chooses the attack strategy (Q = C/(R+L+F)). Increasing R can also reduce the attack probability, but it simultaneously decreases the probability of Ivan or any other verifier choosing the verification strategy. In short, if you are trying to solve the verifier’s dilemma by increasing the capital requirements of a rollup, increasing L is the most effective way to do it. Unfortunately, there are obvious limits to how far capital requirements can be raised, so we need a different approach.

Maximin Strategy

Through the mixed strategy Nash equilibrium, we found that Ivan and the sequencer make strategic decisions with certain probabilities. But in reality, will they follow this exact equilibrium? When both players play the profit-maximizing mixed strategy, the additional expected profit is relatively small, while the loss incurred by making the wrong strategic choice along the way can be truly catastrophic.

For example, if Ivan chooses the non-verification strategy, he gains only +1 extra payoff when the sequencer does not attack, but suffers a huge loss of -100 if the sequencer attacks. The sequencer likewise gains nothing from not attacking, but its expected loss from a failed attack is very high.

In such an event, players generally tend to pursue security rather than maximize profits. The strategy that maximizes strategic security is the maximin strategy.

This does not apply only to the game between Ivan and the sequencer. For instance, users who trade and hold cryptocurrency make a variety of strategic choices about how to hold tokens, depending on their attitude toward risk. The most convenient way to hold and trade cryptocurrencies is generally on a centralized exchange (CEX). However, this method carries the risk of losing all of one’s coins if the exchange is hacked or attacked. Risk-tolerant users therefore tend to hold and trade on a CEX, while users who want to avoid that risk use services such as Metamask or hardware wallets instead. Users with extremely high risk aversion may even worry about the cryptocurrency network itself, so they also operate a full node or an archive node. All of these options give users the same expected payoff as holding coins on the exchange, but the strategic choice depends on the degree of risk aversion; that is, it depends on strategic security, not expected payoff.

The maximin strategy is a strategy that yields ‘best of the worst’ outcome for each player. To rephrase, irrespective of the decisions other players make, players maximize the minimum payoff amount.

Suppose that in the above example payoff matrix, Ivan and the sequencer follow the maximin strategy, not the Nash equilibrium.

Ivan’s maximin strategy will be V, the sequencer’s will be NA, and the maximin payoff vector of the two players will be (4, 0). Following the maximin strategy, Ivan will always choose the verification strategy and the sequencer will always choose the non-attack strategy.
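The maximin choice can be checked mechanically from the example payoff matrix. A small sketch, where the payoff lists follow the example values above:

```python
# Maximin: each player picks the strategy whose worst-case payoff is largest.
ivan = {"V": [104, 4], "NV": [-100, 5]}   # Ivan's payoffs vs (A, NA)
seq = {"A": [-100, 100], "NA": [0, 0]}    # sequencer's payoffs vs (V, NV)

ivan_maximin = max(ivan, key=lambda s: min(ivan[s]))
seq_maximin = max(seq, key=lambda s: min(seq[s]))

print(ivan_maximin, min(ivan[ivan_maximin]))  # V 4
print(seq_maximin, min(seq[seq_maximin]))     # NA 0
```

The maximin payoff vector (4, 0) matches the result in the text.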

Thus, we can come to the conclusion that whenever a fast withdrawal is processed, the rollup will be verified if Ivan and the sequencer, or Ivan alone, place the strategic security as top priority.

However, it is unreasonable to assume that all players prioritize strategic security over expected payoff, because each individual player’s disposition will differ. The maximin approach may therefore mitigate the verifier’s dilemma somewhat, but it cannot be said to solve it completely.

Fast Withdrawal With Attention Challenge

Earlier, we saw that the verifier’s dilemma can be resolved under the assumption that each player gives strategic security a higher priority. However, since this still requires assumptions about the player’s disposition, it has also been discussed that the verifier’s dilemma has not been solved completely, but only somewhat mitigated.

What additional system would be needed to solve the verifier’s dilemma without having to make assumptions? The reason Ivan does not necessarily choose the verification strategy in the case of fast withdrawal is very simple. This is because the verification strategy is not a dominant strategy.

So how do we make the verification strategy a dominant strategy? The answer is simple: make Ivan’s payoff in the non-verify, non-attack outcome (NV, NA) less than or equal to his payoff in the verify, non-attack outcome (V, NA). We can achieve this by adopting an Attention Challenge for fast withdrawals.

Attention Challenge is a concept proposed in Ed Felten’s article, Cheater Checking: How attention challenges solve the verifier’s dilemma, and is a system that checks whether a specific verifier has verified correctly, at random and periodically. Please refer to the article written by Ed Felten or the Related Research section at the end of this post for details on how the attention challenge works.

Introducing the attention challenge into rollup will work as follows. Note that this is a very simplified explanation.

  1. The sequencer processes the transactions in the rollup and submits a hash of the state root and a random value so that the verifiers cannot know the exact value.
  2. Verifiers are selected at random as respondents to the attention challenge without knowing the state root.
  3. The verifiers who were selected in step 2, have to respond with the state root after processing the transactions the same way as the sequencer.
  4. If they do not respond within the specified time, or submit an incorrect response, their deposit will be slashed.
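The commit-and-respond flow above can be sketched as follows. This is a simplified illustration only; the function names and the selection rule are assumptions for exposition, not the actual protocol design:

```python
import hashlib
import secrets

def commit(state_root: bytes, salt: bytes) -> bytes:
    # Step 1: the sequencer publishes only a hash, hiding the state root.
    return hashlib.sha256(state_root + salt).digest()

def is_respondent(verifier_id: bytes, salt: bytes, prob_percent: int) -> bool:
    # Step 2: pseudo-random respondent selection (simplified here; in a real
    # design the verifier cannot evaluate this before doing the work).
    h = hashlib.sha256(verifier_id + salt).digest()
    return h[0] < 256 * prob_percent // 100

def respond(local_state_root: bytes, commitment: bytes, salt: bytes) -> bool:
    # Steps 3-4: a selected verifier re-executes the transactions and must
    # reproduce the committed state root, or its deposit is slashed.
    return commit(local_state_root, salt) == commitment

salt = secrets.token_bytes(32)
root = hashlib.sha256(b"rollup state after some batch").digest()
c = commit(root, salt)
assert respond(root, c, salt)               # honest verifier passes
assert not respond(b"\x00" * 32, c, salt)   # lazy verifier would be slashed
```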

With the attention challenge, the payoff matrix can be changed as follows:

  • C = Verification cost + Attention challenge response cost
  • A = Ivan’s deposit for attention challenge. If Ivan fails to respond correctly to the challenge, the entire deposit will be slashed.
  • P = Probability of Ivan being selected as a respondent for attention challenge

We can easily see that if P*A > C, the verification strategy will be the dominant strategy for Ivan. In other words, if the loss from attention challenge is greater than the verification cost, Ivan will definitely choose the verification strategy. Let’s check this through a specific example.

Suppose that each variable is as follows.

  • R = 100
  • L = 100
  • F = 5
  • C = 2
  • A = 50
  • P = 0.1

Then the payoff matrix will be as follows.

In all cases, the expected payoff of the verification strategy for Ivan is higher than that of the non-verification strategy: verification is now the dominant strategy. The sequencer will always choose the non-attack strategy because Ivan will always verify. Therefore, the (V, NA) strategy vector is a pure strategy Nash equilibrium, which means the rollup will always be verified!
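With the example values, the dominance condition P*A > C and the resulting payoffs can be verified directly:

```python
R, L, F, C, A, P = 100, 100, 5, 2, 50, 0.1

# Ivan's payoffs with the attention challenge, vs (attack, non-attack).
verify = [R + F - C, F - C]          # verification: challenge answered for free
no_verify = [-L - P * A, F - P * A]  # non-verification: expected slashing -P*A

assert P * A > C                     # expected slashing loss exceeds verification cost
assert all(v > n for v, n in zip(verify, no_verify))  # V strictly dominates NV
print(verify, no_verify)
```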

By introducing the attention challenge to general verifiers, we can make the verification strategy the dominant strategy for all verifiers. However, since responding to an attention challenge requires interacting with L1, expanding its targets and probability can be a huge burden at the system level. Details on this are covered in the Trade-Off of Attention Challenge section.

Cross-Rollup Transaction

As discussed above, the intermediaries of fast withdrawal will always choose to verify the rollup, if they prefer strategic security, or if we adopt the attention challenge. The interesting fact is that there is another participant in the rollup with the exact same payoff structure as the intermediary of the fast withdrawal. This is the cross-rollup transaction intermediary (hereinafter referred to as the rollup intermediary).

The cross-rollup transaction in this article is based on the concepts covered in the following two articles.

Rollup intermediaries can provide fast token transfer services between two rollups in a way similar to the intermediaries of the fast withdrawal. For example, if we have two rollups A, and B, and Alice wants to transfer tokens from A to B through rollup intermediary Ivan, this can be done the following way:

  1. Alice wants to transfer 10 ETH from rollup A to B paying Ivan a fee of 0.1 ETH
  2. Alice transfers 10 ETH to a cross-rollup market contract.
  3. Ivan verifies the cross-rollup transaction request and sends 9.9 ETH to Alice in B.
  4. After the DTD, Ivan gets 10 ETH deposited in the market contract in A.
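The escrow flow above can be expressed as a minimal sketch. The class and method names are illustrative only, not an actual contract interface:

```python
class MarketContract:
    """Toy model of the cross-rollup market contract on rollup A."""

    def __init__(self):
        self.escrow = {}

    def lock(self, user, amount):
        # Step 2: Alice locks her tokens in the market contract on rollup A.
        self.escrow[user] = amount

    def release_after_dtd(self, user, intermediary):
        # Step 4: after the dispute time delay, Ivan claims the escrowed funds.
        amount = self.escrow.pop(user)
        return intermediary, amount

market_a = MarketContract()
market_a.lock("alice", 10.0)       # Alice escrows 10 ETH on A
fee = 0.1
payout_on_b = 10.0 - fee           # Step 3: Ivan pays 9.9 ETH to Alice on B
claimer, amount = market_a.release_after_dtd("alice", "ivan")
print(claimer, amount, payout_on_b)
```

Ivan earns the 0.1 ETH spread only if the escrowed tokens on A turn out to be valid, which is exactly why he is incentivized to verify rollup A before paying out on B.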

What you can see from this is that only the destination of the withdrawal changes, from L1 to rollup B; nothing else changes at all. This means that the rollup intermediary Ivan has the same expected payoffs for verification and non-verification as the fast withdrawal intermediary Ivan.

Therefore, the payoff matrix of the cross-rollup intermediary Ivan and the sequencer can be expressed as follows; it is identical to that of the fast withdrawal intermediary and the sequencer.

  • R: Deposit of the sequencer S / Reward given to Ivan upon successful verification
  • L: Amount of cross-rollup transfer / Loss of Ivan when the sequencer attacks successfully
  • F: Cross rollup transfer fee
  • C: Verification cost

Since the two matrices are identical, we can assume that the cross-rollup intermediary Ivan also always chooses the verification strategy based on the maximin strategy, or when adopting the attention challenge.

This is a very encouraging result as it means that the number of verifiers with relatively high incentives can expand from the fast withdrawal intermediaries to the cross-rollup intermediaries.

Trade-Off of Attention Challenge

We can force the intermediaries of fast withdrawals and cross-rollup transfers to consistently choose the verification strategy by adopting the attention challenge. The attention challenge itself is a mechanism applicable not only to these intermediaries, but also to all other users of the rollup. Applied broadly, it could be a very good way to maximize the security of the rollup, since verification would become the dominant strategy for every verifier.

However, it has a couple of blind spots.

First, since responding to the attention challenge involves on-chain transactions, the increased number of targets of the attention challenge results in an increased number of required on-chain transactions. Second, if all users of the rollup have to respond to the attention challenge, the user experience of the rollup can be seriously undermined.

Let’s discuss the first issue in detail. Expanding the target of the attention challenge inevitably increases the number of respondents. Of course, as the number of verifiers (N) increases, the probability of being selected (P) can be decreased so that the number of responses stays manageable; if that were all, there would be no problem. However, if P is reduced too far, the expected loss (-P*A) from failing to respond to the attention challenge also shrinks, and soon the attention challenge can no longer force verification to be the dominant strategy. This can be offset by raising A, but that implies raising capital requirements, which is hard to regard as a desirable solution. In short, increasing the number of targeted verifiers means that a larger number of on-chain transactions will be made due to the attention challenge.

The second issue is that expanding the number of targets of the attention challenge can undermine the rollup’s user experience. The advantage of a rollup is that it can operate safely without all users being verifiers, i.e. with at least one honest verifier. Obviously, the more active verifiers there are, the more secure the rollup is, but this doesn’t mean that all users should be active verifiers. To put it in the extreme, making everyone verify the rollup is just as bad as making no one verify it. This was discussed at length when the Ethereum community examined the limitations of Plasma, so further explanation is omitted.

In other words, we shouldn’t apply the attention challenge to every user. The most desirable method is to apply it wisely to appropriate targets, tailored to the level of security each rollup wants to achieve. If fast withdrawals and cross-rollup transactions are actively made, the attention challenge may not need to be applied at all (assuming the proportion of intermediaries who choose strategic security is sufficient). However, in the early stage of a rollup there may be only a few fast withdrawals and cross-rollup transactions, and the number of active verifiers might be too low. In such cases, the attention challenge can be of great help in keeping the rollup secure.

Additional Reward to Attention Challenge

The attention challenge is a very effective way to maximize security, but it imposes verification through punishment, a sort of negative feedback to the verifier. The verifier chooses the verification strategy because of the attention challenge, but not only is there no additional reward for this verification, further costs are also incurred in the process. Thus, verifiers will tend to view the attention challenge negatively.

However, if additional rewards could be paid to verifiers who are targeted for the attention challenge, individual verifiers would be able to mitigate the economic costs incurred by the attention challenge, or even earn more profits from it. For example, suppose there are tokens issued for building an ecosystem of a rollup, some of these tokens could be given as rewards to verifiers participating in the attention challenge. If the amount of rewards is higher than the cost of responding to the attention challenge, verifiers will consider the challenge as a blessing instead of a burden.

We can collect additional fees from the users to pay the verifier as rewards. However, we will have to carefully adjust the amount of the additional fee so that it wouldn’t be a burden to the users.

Repeated Game

The verification games between Ivan and the sequencer so far were all one-shot games. However, many economic activities in the real world are repeated interactions between the same parties, and the verification game in the world of rollups is no exception: Ivan and the sequencer will play this game over and over. At each stage of the repeated game, they will make strategic choices depending on the history of outcomes before it. In this respect, we need to analyze the verification game between Ivan and the sequencer as a repeated game, not simply a one-shot game.

First, it is well known that in an infinitely repeated game the equilibrium can differ from that of the one-shot game, and there can even be multiple equilibria. Unlike in one-shot or finitely repeated games, deviating from a cooperative strategy is hard to justify: players usually cooperate in order not to break the relationship, because the future losses from the opponent’s retaliation could outweigh the present profit from deceiving them.

However, in finitely repeated games, it is difficult to make players choose the cooperative strategy no matter how large the number of repetitions. In a finitely repeated game with a unique Nash equilibrium, players choose the one-shot equilibrium at every stage regardless of how many times the game is repeated.

What we should pay attention here is how to categorize this game of Ivan and the sequencer. Before discussing whether the number of repeated games will be infinite or finite, let’s focus on the following question first.

If the sequencer on one rollup did not execute transactions correctly, what would happen to the sequencer and Ivan (or the other verifiers)?

First, let’s assume that the attack of the sequencer is verified within the DTD. In this case, the sequencer will lose all or a portion of its deposit and any right they might possess to manage that rollup. Ivan will receive a portion of the sequencer’s deposit as a reward. What will happen to the verification game after that? It ends!

For our next scenario, let’s assume that the attack is discovered after the DTD. Ivan, who has already sent tokens to the sequencer through the fast withdrawal, checks the tokens the sequencer gave him in L2 after the DTD, only to find that they are invalid. At this point, what happens to the sequencer and Ivan? Since the fast withdrawal was not verified within the DTD, Ivan has already lost his tokens and cannot receive any compensation. What about the sequencer? Although the rules may differ for each rollup, at the very least the sequencer’s role will be suspended afterwards, even if its deposit cannot be slashed and the invalid states cannot be changed to valid ones. In other words, in this case too, the subsequent verification game ends.

If you, as a store owner, discovered after a long time that a clerk had embezzled, would you let the clerk continue to run the store? Unless you were an angel, you would surely fire the clerk, even if so much time had passed that the clerk could no longer be held financially responsible for the theft.

From the two scenarios above, we can observe that the moment the sequencer attacks, the repeated games end regardless of whether it is verified or not. Through this, we can see that it is more reasonable to assume that the game will be repeated finitely, although the number of repetitions cannot be accurately known in advance.

Therefore, it is more rational to define the game of Ivan and the sequencer as a finitely repeated game rather than an infinitely repeated game, and thus it can be seen that the equilibrium in a one-shot game will be repeated at every stage.

Related Research

As discussed before, the verifier’s dilemma is not a new issue, and it has already been discussed in various areas. Many researchers have already tried to find a solution to it.

In this chapter, we will examine various solutions proposed to solve the verifier’s dilemma on many different platforms.

Forced Error & Jackpot — Truebit

In order to understand how the forced error and jackpot concepts work, we must first have a solid grasp of what Truebit is. Truebit is a scalability solution that uses off-chain computation: large computation tasks that would require more gas than Ethereum’s block gas limit allows can be executed correctly off-chain.

The participants in the Truebit protocol are largely classified into three types, and the roles and responsibilities of each participant are as follows.

1. Task Giver

  • Requests the solver to execute a complex transaction.
  • For requesting a task, the task giver must pay a fee to the solver.

2. Solver

  • Executes a requested task (transaction) off-chain.
  • Deposits a certain amount of tokens to act as a solver.
  • If it is proved that the execution was invalid, the deposit is slashed.

3. Verifier

  • Verifies that the task has been executed correctly.
  • If the task was not executed correctly, the verifier deposits a certain amount of tokens and requests to play a verification game to prove it.
  • If the verifier wins the verification game, the solver’s deposit is given as a prize to the verifier.

What you can see here is that the role of the verifier is crucial for the Truebit protocol to perform safely. This is because, if the verifier does not properly verify that the task has been executed correctly, we cannot ensure that the solver will always execute tasks correctly, and submit only valid results on-chain. Therefore, what we should focus on is whether the verifier in Truebit is always willing to verify or not.

Readers who have followed along will likely guess that the verifier’s payoff in the Truebit protocol has a structure similar to that of the super-simple game discussed above, and this guess is accurate.

If the probability of the solver submitting an invalid result is X, then the expected payoff of the verifier in Truebit can be expressed as:

  1. Expected payoff of verification: R*X-C
  2. Expected payoff of non-verification: 0 (The verifier has no explicit penalty even if he does not verify the solver.)
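From these two payoffs, verification is worth doing only when R*X - C > 0, i.e. X > C/R. A one-line check, using R = 100 and an illustrative verification cost C = 1 (not values specified by Truebit):

```python
# Verification pays only when the expected reward beats the cost:
#   R*X - C > 0  =>  X > C/R
R, C = 100, 1
break_even_x = C / R
print(break_even_x)  # below this attack probability, verifying is a net loss
```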

It is now very obvious that in such a model, it is difficult to grant the incentives for verification. Certainly, the architects of the Truebit protocol were also aware of this. To solve this problem, they introduced Forced Error and Jackpot method.

The biggest cause of the verifier’s dilemma in Truebit is that the verifier’s reward depends on the attack probability of the solver. The forced error solves this: it is an error deliberately and randomly generated by the system regardless of whether the solver attacks. In other words, even if the solver never attacks, forced errors occur randomly, so the verifier can still verify and be rewarded.

Since the solver’s deposit cannot be used to fund rewards for forced errors, a new reward pool called the jackpot is needed. For each request, the task giver pays a separate tax in addition to the fee paid to the solver, and this tax accumulates in the jackpot reward pool.

In short, to collect funds for the jackpot, an additional fee is received from the user, and the reward is distributed to the verifier who successfully verified the forced error that is generated randomly.

Jackpot rewards are given to any verifier who verifies the forced error within a set period of time. However, to defend against Sybil attacks, in which a verifier claims extra jackpot rewards by creating a large number of duplicate accounts, Truebit reduces the reward distribution exponentially in the number of accounts that verified the forced error. The exact jackpot reward per account is calculated as follows.

  • J/2^(k-1) (k = Number of verifier accounts who verified forced error)

If the probability of a forced error is P, then whenever P*J/2^(k-1) > C, all verifiers always have an incentive to verify. Since P and J are system variables that the protocol can set, unlike X, Truebit can easily ensure that verifiers check the solvers at any time.
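The exponential split and the incentive condition can be illustrated with a short sketch. The parameter values here are purely illustrative, not Truebit’s actual settings:

```python
# Expected jackpot reward per verifier account: P * J / 2**(k-1) must exceed C.
def jackpot_share(J, k):
    # k accounts that verified the forced error split the jackpot with
    # exponential decay, making Sybil duplication of accounts unprofitable.
    return J / 2 ** (k - 1)

P, J, C = 0.01, 1000, 1   # illustrative protocol parameters, not Truebit's

for k in (1, 2, 4):
    share = jackpot_share(J, k)
    print(k, share, P * share > C)
```

Note how doubling the number of accounts halves each share, so splitting one identity into many never increases total expected reward.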

The jackpot model has the above advantages, but on the contrary, the following issues also exist.

  1. Unpredictable rewards
  2. Charges additional fees to users

The first issue is that forced errors can only be predicted probabilistically. If you are lucky, you might be rewarded after a single verification; you might also receive no reward after hundreds of verifications. The expected reward is predictable, but since the jackpot works like a lottery, it is ill-suited to attracting the constant participation of verifiers. For more information on this, please check the article written by Decon.

Secondly, in order for the jackpot model to operate as intended, it is essential to collect funds for the jackpot reward pool above all else. If the reward pool isn’t large enough, it won’t be possible to provide proper incentives to verifiers. However, since the whole jackpot reward is borne by the task giver, it means that in order to increase the size of the reward, the task giver must inevitably pay an additional fee.

Application to Rollup

The forced error and jackpot model has several issues, but it is still valuable in that it can increase the incentive for verification by setting the expected reward for verification higher than the verification cost.

However, it is not easy to use this approach to solve the verifier’s dilemma in a rollup, because collecting additional fees is difficult and a rollup is not well suited to generating forced errors.

First, collecting additional fees from users could be burdensome to them, and will undermine the UX. As an alternative to this, native tokens issued at the ecosystem level of the rollup can be used for the fund. However, not all rollups can issue these tokens, and since jackpot rewards are completely dependent on the value of the tokens issued in the ecosystem, there may be additional risks in terms of security.

Secondly, the structure of a rollup makes generating forced errors very tricky. In Truebit, the solver submits both a correct and an incorrect solution to L1. If a forced error needs to be generated, the incorrect solution is designated; otherwise, the correct solution is designated. Since the tasks requested by givers in Truebit are independent and unique, generating forced errors this way poses no problem.

In the case of a rollup, if we generate a forced error, subsequent transactions must be executed based on the incorrect state; otherwise, anyone can easily detect that a forced error has occurred. And since subsequent transactions are then executed based on invalid states, all consequent state values also become invalid.

Multiple Solvers — Truebit

Multiple solvers is a newly proposed method to fix several shortcomings of the jackpot model and to solve the verifier’s dilemma in Truebit completely. The jackpot model has three types of participants: task giver, solver, and verifier. The multiple solvers model has only the task giver and the solver, and uses multiple solvers to prevent attacks by a small number of malicious ones.

In the multiple solvers model, when a task giver requests a task, a number of solvers randomly selected from the solver pool are assigned to it. Each solver submits a solution and a proof of independent execution for the task. The proof of independent execution shows that each solver solved the task on its own, rather than replicating the solutions of other solvers; how this proof is generated is described here.

If a solver assigned to a task fails to submit the correct solution or the proof of independent execution, it can be challenged by another solver or by an external verifier (anyone can verify, even those not assigned to the task), and its deposit can be slashed.

However, if each solver executes the task correctly, they can receive a fee as a reward, and if this fee is higher than the verification cost, it can be said that solvers always have an incentive to execute the task correctly. In other words, the architects of Truebit found that the verification and the process of solving are not inherently different, and resolved the verifier’s dilemma by giving explicit rewards for them.

However, having multiple solvers results in one fatal drawback: it is too expensive. A higher level of security can be guaranteed as the number of solvers assigned to each task increases but it also means that the fees paid to the solvers will inevitably increase.

Therefore, Truebit allows task givers to pay fees according to the desired level of security. If you require a higher level of security, you can pay a higher fee to get more solvers for the task. Otherwise you can pay a minimal fee to get a few solvers for the task.

Application to Rollup

The core idea of multiple solvers is not to separate the solver and the verifier, but to integrate them into one and to directly reward the act of solving (verifying) the task. To apply a similar method in a rollup, we would have to appoint multiple sequencers instead of one, have them execute transactions independently, and have each submit the corresponding state root to layer 1.

The important point is that to keep the rollup as secure as possible, the multiple sequencers must be rewarded appropriately. This reward pool can be funded largely by collecting additional fees from users, or by issuing the ecosystem's native tokens into it. However, a higher level of security requires more fees and token rewards, so an appropriate balance between security level and cost must be established explicitly.

Additionally, in the multiple solvers model, the layer 1 transactions for submitting state roots and proofs of independent execution grow linearly with the number of sequencers. Note that this may conflict with the main purpose of the rollup, which is to maximize the scalability of layer 1.

Attention Challenge — Arbitrum

Attention Challenge is an approach to resolve the verifier’s dilemma by punishing verifiers who do not properly perform verification work. The key to the attention challenge is not to encourage verification by giving additional rewards to the verifier, but to make the task of verification inevitable and inflict significant losses on the verifier if the verification is not performed.

Let’s refer to the Super-simple model we covered earlier to see in detail how the attention challenge works. In this simple model, there are two participants, the asserter and the verifier. The task of the asserter is to compute a certain function f(x) and submit x along with only an encrypted value of f(x). The verifier’s duty is to check x and submit the correct f(x). If the verifier submits an incorrect value or does not respond in time, the verifier’s deposit is slashed.

To respond to an attention challenge, verifiers must submit transactions on-chain, which would be a heavy burden on both layer 1 and the verifiers. So the attention challenge is not applied to every verifier every time, but only probabilistically. However, it is designed so that a verifier can check whether it is the target of an attention challenge only after performing the verification work itself. Therefore, verifiers always have to verify just to find out whether they have been selected as respondents.

For technical details on how f(x) is encrypted and how each verifier checks whether it has been selected as a respondent to the attention challenge, please check this article.
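The core trick can be illustrated with a toy model (this is not Arbitrum's actual construction; the hashing scheme and names below are my own simplification): whether a verifier is selected depends on a hash of its private key and the computation result, so the verifier cannot learn whether it must respond without first computing f(x).

```python
import hashlib

def is_respondent(secret_key: bytes, fx: bytes, p: float) -> bool:
    """Toy sketch: the verifier is an attention-challenge respondent iff a
    hash of (its secret key, the result f(x)) falls below a probability
    threshold p. Because fx is an input, the check cannot be performed
    without computing f(x) first -- which is the whole point."""
    digest = int.from_bytes(hashlib.sha256(secret_key + fx).digest(), "big")
    return digest < int(p * 2**256)
```

Since the selection rule is deterministic given the key and the result, the contract can later check on-chain that a selected verifier who failed to respond really was selected, and slash it.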

When the attention challenge is applied to the Super-simple model, the verifier’s expected verification and non-verification payoffs change as follows.

  1. Expected payoff for verification: R*X - C
  2. Expected payoff for non-verification: -L*X - P*A

P is the probability of being designated as the target of the attention challenge, and A is the amount of tokens slashed when an incorrect response is submitted.

If P*A > C, the best strategy for the verifier is to always verify, regardless of X. Since the values of P and A can be set by the system, it can easily force verifiers to always verify.

The attention challenge is very attractive in that it incurs no additional cost for users and makes the verifier always perform the verification. However, it has one blind spot: if the verifier verifies but the asserter never attacks, the verifier only ever incurs losses.

For example, under the conditions below, the expected payoff of the verifier can be expressed as follows.

  • R = 100
  • L = 10
  • X = 0
  • C = 1
  • P*A = 5
  1. Expected payoff for verification = -1
  2. Expected payoff for non-verification = -5

In this case, the verifier will perform the verification, because the expected payoff to verify (-1) is greater than the expected payoff not to verify (-5). But there is a trap: whatever choice is made, the verifier faces a pure loss! In other words, the verifier suffers a small loss in order to avoid a larger one. We cannot force the verifier to stay in such a situation when an alternative exists: a verifier should always be free to abandon the role, withdraw the deposit, and leave the system.
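The arithmetic above can be checked directly. A minimal sketch using the payoff formulas and the numbers from the example (R, L, X, C, and the combined P*A term are the symbols defined earlier):

```python
def verify_payoff(R: float, C: float, X: float) -> float:
    """Expected payoff when the verifier performs verification: R*X - C."""
    return R * X - C

def skip_payoff(L: float, X: float, PA: float) -> float:
    """Expected payoff when the verifier skips verification: -L*X - P*A."""
    return -L * X - PA

# Numbers from the example: R=100, L=10, X=0, C=1, P*A=5
v = verify_payoff(R=100, C=1, X=0)   # -1
s = skip_payoff(L=10, X=0, PA=5)     # -5
# Verifying is the better strategy (-1 > -5), yet both payoffs are losses.
```
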

If you were this verifier, what would your choice be? Would you remain in this system and keep verifying, awaiting uncertain returns, or just give up and leave, since there is no reward and only cost for you anyway? Almost everyone would choose the latter.

What we can observe from this example is that the attention challenge can lead the verifier to always choose the verification strategy in the short run, but it does not convince the verifier to remain in the system in the long run.

So, is the attention challenge of no use? Not quite. The attention challenge is very effective at solving the verifier’s dilemma; however, it works only when verifiers can earn some profit, whether or not that profit is related to the verification work itself.

For example, if the verifier earns a constant return F for each verification unit, then as long as F - C > -P*A (and, so that staying beats leaving, F - C > 0), it is difficult for the verifier to find a reason not to continue verifying. We know one model in which these conditions hold: the optimistic rollup!
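A sketch of the same payoff comparison with a constant per-unit return F added. I take the X = 0 case, where no attack is expected; the concrete numbers are illustrative, not from the article:

```python
def verify_payoff(F: float, C: float) -> float:
    """X = 0 case: earn the constant return F, pay verification cost C."""
    return F - C

def skip_payoff(PA: float) -> float:
    """X = 0 case: skip verification and eat the expected slashing P*A."""
    return -PA

# Illustrative numbers: F=3, C=1, P*A=5
# verify_payoff(3, 1) = 2 > skip_payoff(5) = -5: verification dominates, and
# F - C > 0: unlike before, the verifier now has a reason to stay.
```

Compare this with the earlier example, where the verification payoff at X = 0 was -1: the constant return F is what flips the verifier's best in-system option from a small loss to a positive return.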

Application to Rollup

In the rollup, the role of the verifier is not explicitly assigned; any user can become a verifier at will. Some of these users earn a certain profit for each transaction, which we call the verification unit. For example, a user doing yield farming on a rollup, or a developer running a DEX, earns a certain amount of revenue each time a transaction is executed. In these cases, applying the attention challenge can make such users reliably verify the rollup. Details on this were covered earlier, so further explanation is omitted.

Why do I care so much about Verifier’s Dilemma?

This article has argued throughout that to ensure security for all users of a rollup, the verifier’s dilemma must be considered and appropriate solutions devised. Several reasons have been presented above, but the community may still disagree. After all, this is a theoretical concern, and in practice, once rollups are actively adopted and used, the verifier’s dilemma may not be a big problem. There are altruistic people in the world who help others even without monetary incentives, and people who give up a little of their profit for others rather than maximizing it. A similar situation could arise in the rollup too.

What I would like to suggest in this article is not that a rollup is doomed if it does not take the verifier’s dilemma into account. Rather, if the dilemma is not considered in advance, then in the event of an unexpected crisis, however rare, the security of the rollup could be at huge risk.


Let’s take our focus away from the rollup for a moment. Say we are designing a car for citizens of a modern city. Since this car will probably only run on well-maintained urban roads, we don’t have to struggle to make it work in extreme environments. It doesn’t need four-wheel drive, and there’s no need for expensive shock-absorbing suspension, since it will almost certainly never drop from a height. It won’t have to cross deserts or drive through heavy snow and rain, so we don’t have to worry about sand, dust, or severe weather. The roads will always be well maintained, so even an ordinary car will run well.

On the other hand, if we want to design a car that can always run well in any kind of extreme environment, the story is completely different. Without sufficient safety mechanisms, the lives of the driver and passengers could be seriously threatened in such environments. For this reason, even if the design becomes complex and additional costs are incurred, making the car as safe as possible is the top priority. A car designed this way can, of course, also be driven perfectly well in an urban environment, though its extra cost and equipment may feel unnecessary and excessive to its users. But this car will prove its worth when faced with extreme conditions: when sudden heavy snowfall stops every other car, this one will drive smoothly on.

Coming back to rollups and blockchains: which of these two cars do you think the rollup resembles? I trust anyone who believes in blockchain will agree that rollups, along with all blockchain technologies, are closer in nature to the latter. Blockchain technology seeks, through decentralization, to withstand every extreme environment the system may face (fault tolerance, attack resistance, collusion resistance), however rare such environments might be. Rollup is a technology that provides an ability to cope with extreme situations similar to that of layer 1, while significantly increasing layer 1’s insufficient scalability. Therefore, we need to work hard to ensure that the rollup can cope well with all extreme environments.

Conclusion

Optimistic rollup is a layer 2 solution that is expected to drastically alleviate Ethereum’s current scalability problem in the short, medium, and even long term. Many types of layer 2 solutions have been proposed to date; in particular, zk-rollup, powered by validity proofs, is in the spotlight since it can provide instant finality, unlike optimistic rollup. However, at this point there are clear limitations to processing all of Ethereum’s complex transactions with zk-rollup, so for the time being the main realistic alternative for Ethereum will be the optimistic rollup.

In the near future, optimistic rollups will be widely used and massively adopted. As the number and size of rollups increase, security will become more and more important. As we have continued to emphasize, rollup is a very attractive layer 2 solution in that the system can be kept secure with just one honest verifier.

However, as we discussed throughout this article, verifiers in a rollup have a higher incentive not to verify when the attack probability is fairly low. In other words, as the rollup builds a track record of operating safely, the number of active verifiers will decrease rapidly; intermediaries for fast withdrawals and cross-rollup transfers are no exception. In short, the rollup will, paradoxically, become more vulnerable the safer it appears. But we know one solution: the proper application of the attention challenge, through which we can ensure the security of the rollup.

Certainly, this article does not insist that the attention challenge must be applied to all rollups. Each rollup will have a different target level of security, and different assumptions about the disposition of its participants. If high security need not be guaranteed, or if the proportion of altruistic and extremely risk-averse participants is believed to be high enough, the attention challenge is unnecessary. The important principle is to use only the tools necessary and appropriate for each purpose. I hope this article can serve as a guide, even a little, when designing the security of each rollup.

Glossary

  • Sequencer: Determines the order of transactions in the optimistic rollup, and submits state roots and transaction data to layer 1 after executing those transactions on layer 2. Also called an Operator.
  • Verification: In optimistic rollup, refers to checking whether the state root submitted by the sequencer is valid.
  • Verifier: An entity who performs the verification. In the optimistic rollup, any participant can be a verifier.
  • DTD (Dispute Time Delay): Refers to the period of time required for a state root or an execution result to be finalized in the fraud proof system.
  • Fraud Proof: Refers to a system that first assumes the result of an execution is correct, and then provides a way to challenge that result within a certain period of time (the DTD).
  • Validity Proof: Refers to a system which proves that the result of execution is correctly configured. No DTD is required for validity proof.
  • Attack: Usually means taking an economic advantage by submitting an incorrect state root or execution result to layer 1. In optimistic rollup, the sequencer can take assets from users by inducing finalization of incorrect state roots.
  • Challenge: Refers to the procedure for replacing an incorrect state root or execution result with a correct one via fraud proof. In the optimistic rollup, anyone can initiate a challenge to prove that a state root submitted by the sequencer is invalid.
