Beyond State Channels

Ed Felten
Offchain Labs
Aug 29, 2019

State channels are one of the most popular approaches to scaling smart contracts. By moving contract activity off the main blockchain, state channels allow contracts to do more at lower cost. But state channels have some well known limitations. In this post, I’ll review those limitations and talk about how to overcome them.

To make things concrete, let’s consider an example of a two-person smart contract. We’ll imagine that Alice and Bob are playing a strategy game, with the contract acting as game master and referee. At the end the contract will declare who won and give the winner a gold star.

The key observation behind state channels is that as long as some basic guidelines are enforced, only Alice and Bob need to concern themselves with the operation of this contract. So why not let them work things out themselves, without having to put everything on the main chain? In particular, any change in the contract’s state that is signed by both Alice and Bob will be assumed correct. Alice and Bob are the contract’s participants (sometimes called validators). In general, any behavior of the contract that all of the participants agree on will be accepted as correct. This makes the contract “trustless” with respect to the participants, because each participant acting alone can prevent incorrect behavior of the contract.

Any decent state channel system can execute the whole contract offchain, in the optimistic case where Alice and Bob are online and cooperating the whole time. At each step of the game, they will jointly sign an assertion of the contract’s state, and nothing will need to be written to the main chain until the very end. Alice and Bob will jointly sign a series of assertions, with each one superseding the one before. At the end, Alice will want her victory and gold star to be recorded on-chain for everyone to see — so she will post on-chain the final jointly signed statement declaring her the winner.
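
To make the off-chain flow concrete, here is a rough sketch in Python of what a jointly signed state update might look like, with HMAC standing in for real digital signatures. The field names and sequence-number scheme are a toy illustration, not the format of any particular state channel implementation.

```python
# A toy sketch of the off-chain update flow: each update carries a sequence
# number (a later fully signed update supersedes an earlier one), and an
# update counts only if every participant has signed it. HMAC stands in for
# real digital signatures purely to keep the example self-contained.
import hashlib
import hmac
from dataclasses import dataclass, field

@dataclass
class StateUpdate:
    seq: int                 # higher seq supersedes lower seq
    state_hash: str          # hash of the full game state
    signatures: dict = field(default_factory=dict)   # participant -> tag

def sign(update: StateUpdate, name: str, key: bytes) -> None:
    msg = f"{update.seq}:{update.state_hash}".encode()
    update.signatures[name] = hmac.new(key, msg, hashlib.sha256).hexdigest()

def fully_signed(update: StateUpdate, keys: dict) -> bool:
    msg = f"{update.seq}:{update.state_hash}".encode()
    return all(
        hmac.compare_digest(
            update.signatures.get(name, ""),
            hmac.new(key, msg, hashlib.sha256).hexdigest(),
        )
        for name, key in keys.items()
    )

keys = {"alice": b"alice-key", "bob": b"bob-key"}
update = StateUpdate(seq=7, state_hash=hashlib.sha256(b"board after move 7").hexdigest())
sign(update, "alice", keys["alice"])
sign(update, "bob", keys["bob"])
assert fully_signed(update, keys)   # only a fully signed update is treated as the latest state
```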

The Non-cooperative Case

Where things get trickier, and where different state channel systems take different approaches, is when Bob stops cooperating. Maybe his computer crashed or he lost network connectivity. Maybe there is a bug in his software. Or maybe he just doesn’t like how the game is going and he wants to renege on his promise to play the game fairly and give Alice a gold star if she wins. In all of these cases, Bob will not sign to endorse the next state of the contract. Now what?

Approach 1: Move on-chain: One approach is to move the entire contract onto the main chain. The last jointly signed state of the contract is posted to chain, and the game resumes from there, executing as an on-chain contract. That will work fine, but only if the game is simple enough to manage on-chain at reasonable cost. On Ethereum and similar systems, the main chain has very limited capacity, so many games or other contracts can’t be run on-chain at all, or can only run at an unacceptable cost. So this approach will work only for some contracts.

Approach 2: Cash out on-chain: Another approach is to have the main chain terminate the contract (ending Alice and Bob’s game in our example) and “cash out” the participants with a fair distribution of the contract’s funds. Again, this will work reasonably well for some contracts, such as payment channel contracts that continually keep track of a cash-out value within the contract.
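
For contrast with the harder cases below, here is a toy sketch of why cashing out is easy for a payment channel: every fully signed update already records each party's balance, so the chain only needs the latest one. The names and amounts are made up.

```python
# Toy illustration of why "cash out on-chain" is easy for a payment channel:
# every fully signed update already records each party's balance, so the
# on-chain contract only needs the latest such update to settle.
latest_signed_update = {
    "seq": 42,
    "balances": {"alice": 7, "bob": 3},   # the channel holds 10 coins in total
}

def cash_out(update: dict) -> dict:
    # The contract simply pays out the balances recorded in the update.
    return dict(update["balances"])

print(cash_out(latest_signed_update))   # {'alice': 7, 'bob': 3}
```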

But this will be problematic for Alice and Bob’s strategy game. Suppose they’re deep into the game and Bob fails to sign an update, so the on-chain cash-out algorithm is triggered. Alice’s fair cash-out value will depend on the probability that Alice will win the game, starting at the current game state. That’s very hard to determine! Figuring out Alice’s probability of victory will almost certainly be more difficult than simply refereeing the rest of the game — and if on-chain refereeing is impractical, then determining an accurate cash-out value will be too.

This is where things get tricky. If there is a cash-out algorithm that only approximates the true value of the current game state — and keep in mind that that algorithm is public and not too complicated, because it’s in an on-chain contract — then at each stage of the game Bob can calculate what his current cash-out value would be. If that is better than the current game situation as he sees it, his incentive is to withhold his signature and take the on-chain cash-out instead.

If this is done, with nothing else changed, then both players, if they’re rational, will always be comparing their cash-out value to their current game situation to see if they can gain an advantage by strategically forcing a cash-out. Essentially, they’ll be playing a slightly different strategy game, where in addition to the moves of the original game, there is also a “cash-out move” that is available to all players at every point. If the cash-out algorithm is only approximate, it’s likely that many games will end with a voluntary cash-out triggered strategically by one of the players. How can this be prevented?
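
Here is a toy model of that incentive, with invented numbers: a public cash-out rule that splits the pot by material captured, and a rational player who defects whenever that rule pays better than his own estimate of winning the game.

```python
# Toy model of the incentive problem, with made-up numbers: the public
# cash-out rule splits the pot by material captured so far, and a rational
# player defects whenever that rule pays better than his own estimate of
# how the rest of the game will go.
TOTAL_POT = 100

def approximate_cashout(player: str, game_state: dict) -> float:
    # A crude but public rule: split the pot in proportion to material captured.
    material = game_state["material"]
    return TOTAL_POT * material[player] / sum(material.values())

def best_move(player: str, game_state: dict, estimated_win_prob: float) -> str:
    continuation_value = TOTAL_POT * estimated_win_prob
    if approximate_cashout(player, game_state) > continuation_value:
        return "force cash-out"   # stop signing and trigger the on-chain rule
    return "keep playing"

state = {"material": {"alice": 6, "bob": 4}}
# Bob's material share (40%) overstates his real winning chances (say 25%),
# so the approximate rule overpays him and he defects.
print(best_move("bob", state, estimated_win_prob=0.25))   # force cash-out
```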

Approach 3: Add a noncompliance penalty: The obvious fix is to penalize a player for non-compliance, that is, for being the one who forces the game back to chain. If this penalty is large enough, and the cash-out algorithm is close enough to accurate, then both players will always prefer to keep the game going rather than strategically forcing a cash-out. Of course, you can’t make this penalty too big, because an innocent player could be penalized if their machine crashes or their network disconnects — remember that the algorithm can’t distinguish these cases from a strategic choice not to respond.
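
Continuing the toy model from above, a penalty that exceeds the worst-case error of the approximate cash-out rule removes the incentive to defect strategically, while keeping the cost of an innocent crash bounded. Again, the numbers are purely illustrative.

```python
# Continuing the toy model: a noncompliance penalty is deducted from whoever
# forced the game back on-chain. If the penalty exceeds the worst-case error
# of the approximate cash-out rule, strategic defection never pays, while an
# innocent crash only costs the (bounded) penalty.
TOTAL_POT = 100
PENALTY = 15   # chosen to exceed the rule's worst-case approximation error

def worth_defecting(approx_cashout: float, estimated_win_prob: float) -> bool:
    continuation_value = TOTAL_POT * estimated_win_prob
    return approx_cashout - PENALTY > continuation_value

# Same situation as before: the rule overpays Bob (40 coins vs a fair 25),
# but the 15-coin penalty wipes out the advantage.
print(worth_defecting(approx_cashout=40, estimated_win_prob=0.25))   # False
```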

Even this is not cost-free in terms of on-chain computation. You can only punish somebody for not signing a statement if that statement is true — so before exacting a noncompliance penalty you’ll have to verify, by on-chain computation, that the request they didn’t sign was correct. (You’d better hope that checking that is cheap enough to do on-chain.) A bigger problem is that if Alice claims that Bob refused to sign, but Bob says he did sign, there is no way for an on-chain contract to tell which one of them is lying — and you can’t punish Bob for failing to sign if you can’t tell if he did in fact sign. The only thing that will work on-chain is for Alice to make an on-chain demand for Bob’s signature, and then to allow some time to elapse during which Bob must produce a signature on-chain. Note that this doesn’t determine whether Alice’s accusation in the original offchain setting was right — it just forces Bob to sign.
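
Here is a minimal sketch of that mechanism, with block numbers standing in for elapsed time and the signature verification elided. It is an illustration of the idea, not any production channel contract.

```python
# Minimal sketch of the only noncompliance check that works on-chain: Alice
# posts a demand, and Bob has a fixed window (measured in blocks here) to
# post his signature. The contract never learns whether Bob already signed
# off-chain; it can only observe whether he signs within the window.
class SignatureDemand:
    def __init__(self, demanded_update_hash: str, posted_at_block: int, window: int = 100):
        self.demanded_update_hash = demanded_update_hash
        self.deadline = posted_at_block + window
        self.answered = False

    def respond(self, update_hash: str, current_block: int) -> None:
        # A real contract would verify Bob's signature over update_hash here.
        if current_block <= self.deadline and update_hash == self.demanded_update_hash:
            self.answered = True

    def noncompliant(self, current_block: int) -> bool:
        # Only after the window closes with no response can Bob be penalized.
        return current_block > self.deadline and not self.answered

demand = SignatureDemand("hash-of-disputed-update", posted_at_block=1_000)
demand.respond("hash-of-disputed-update", current_block=1_050)
print(demand.noncompliant(current_block=1_200))   # False: Bob answered in time
```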

State channels are hard. Here’s the bottom line: Unless you’re in one of the happy cases, where (1) your contract can run comfortably on-chain or (2) there is a simple, clear, and very accurate cash-out procedure, it will be challenging to use state channels. You either need to redesign your algorithm and code to fit into one of those manageable cases…or you can use a system that improves on state channels.

How to do better

We designed Arbitrum, our Layer 2 scalability solution, with state channel style dapps as one major use case. We call this use case an Arbitrum Channel. We wanted dapps with a fixed set of participants to run offchain in the common case, and trustlessly in every case, without so many of the pain points of state channels.

No special programming: We didn’t want to make developers rewrite their code. There’s no need to write a special cash-out algorithm, no need to write custom dispute resolution code, and no need to rewrite a dapp in a special state machine form. Give normal Solidity code to the Arbitrum compiler, and it takes care of the rest.

Never move your contract’s code or storage to chain: The only thing that needs to happen on-chain is dispute resolution, and Arbitrum can resolve a dispute between Alice and Bob without needing to get the full code or storage. Arbitrum’s protocol will narrow down the dispute until the disagreement is about just a few tiny pieces of code or storage, and only those will need to be put on-chain to resolve the dispute.

Make progress even if one party is offline: We don’t want to have to stop everything if Bob goes offline for a short time due to a technical problem like a network interruption. In Arbitrum, if Bob isn’t signing, Alice can post an assertion on-chain, claiming that the contract will execute a certain number of steps, ending in a state with a certain cryptographic hash. Alice stakes a deposit that she will lose if this is found to be false. Now Bob has a window of time to challenge Alice’s assertion, if he thinks it’s wrong. If the time window expires with no action from Bob, Alice’s assertion is accepted as correct. If Bob does challenge Alice’s assertion, Bob stakes a deposit with his challenge, and there is now a dispute.
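
Here is a rough sketch of that assert-and-challenge flow. The stake sizes, window length, and field names are invented for the example; the real protocol runs in on-chain contracts and has considerably more machinery.

```python
# Rough sketch of the assert-and-challenge flow, with block numbers standing
# in for the challenge window. Stake sizes, window length, and field names
# are invented for illustration.
class DisputableAssertion:
    def __init__(self, asserter: str, num_steps: int, claimed_end_hash: str,
                 stake: int, posted_at: int, window: int = 250):
        self.asserter = asserter
        self.num_steps = num_steps
        self.claimed_end_hash = claimed_end_hash
        self.stake = stake                 # forfeited if the assertion is proven false
        self.deadline = posted_at + window
        self.challenger = None
        self.challenger_stake = None

    def challenge(self, challenger: str, stake: int, current_block: int) -> None:
        if current_block <= self.deadline and self.challenger is None:
            self.challenger = challenger   # a dispute now exists
            self.challenger_stake = stake  # forfeited if the challenge fails

    def status(self, current_block: int) -> str:
        if self.challenger is not None:
            return "disputed: run the dispute-resolution protocol"
        if current_block > self.deadline:
            return "confirmed: assertion accepted as correct"
        return "pending: challenge window still open"

a = DisputableAssertion("alice", num_steps=1_000_000,
                        claimed_end_hash="hash-of-claimed-end-state",
                        stake=50, posted_at=5_000)
print(a.status(current_block=5_100))   # pending
print(a.status(current_block=5_400))   # confirmed, since Bob never challenged
```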

Note that a dispute will never happen if both parties are rational. Alice doesn’t want to make a false assertion because that will put her stake at high risk; and Bob doesn’t want to falsely challenge a true assertion because that will cost him his stake. If Alice sees that Bob is offline, she might try posting an assertion in the hope that Bob won’t notice it in time to challenge. But if he does notice, Alice will lose a large stake. (And note that Bob might only be pretending to be offline, in the hope that Alice will try to post a false assertion, allowing Bob to take her stake! Given a large enough stake, Alice is foolish to try a false assertion even if it looks like Bob is offline.)

Resolve disputes efficiently: If there is a dispute between Alice and Bob, where they disagree about what the contract will do, we want to minimize the on-chain cost of figuring out who is right — then we’ll confiscate a stake from the one who was wrong. In Arbitrum, we resolve disputes in two stages. First, we use a bisection protocol to narrow a dispute about N steps of the contract’s computation to a dispute about N/2 steps, and we recursively bisect a logarithmic number of times until the dispute is about a single step of computation. (This kind of bisection has also been used by TrueBit and others.) Then one party offers a one-step proof to prove that they are right about the single step.
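
Here is a toy version of the bisection idea, using a fake state-hash function whose honest and dishonest views diverge at a known step. Each round commits to the hash at the midpoint of the disputed span and keeps whichever half the parties still disagree about, so a dispute over a million steps narrows to one step in about twenty rounds.

```python
# Toy version of the bisection idea. state_hash() is a stand-in for hashing
# the machine state after a given number of steps; the dishonest asserter's
# claimed states diverge from the honest challenger's at step 600,000.
import hashlib

def state_hash(step: int, honest: bool) -> str:
    tag = b"honest" if honest or step < 600_000 else b"cheat"
    return hashlib.sha256(tag + step.to_bytes(8, "big")).hexdigest()

def bisect(lo: int, hi: int) -> int:
    """Narrow a dispute over steps (lo, hi] to a single disputed step.
    Invariant: both parties agree on the state at lo and disagree at hi."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        asserter_claim = state_hash(mid, honest=False)
        challenger_view = state_hash(mid, honest=True)
        if asserter_claim == challenger_view:
            lo = mid    # they agree up to mid, so the dispute is in the upper half
        else:
            hi = mid    # they already disagree at mid, so it is in the lower half
    return hi           # the single step to be settled by a one-step proof

print(bisect(0, 1_000_000))   # 600000, found in about 20 rounds
```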

This process is refereed by an on-chain Ethereum contract that we call the EthBridge. All it has to do is to verify that bisection is done consistently, and then to check the one-step proof. And the Arbitrum virtual machine and bytecode are specially designed to make one-step proofs very small (a few hundred bytes) and very cheap to check (about 90,000 gas on Ethereum).
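
To give a feel for what checking a one-step proof involves, here is a toy stack machine (emphatically not Arbitrum's actual VM): given the state just before the disputed step, the referee re-executes that single instruction and compares the resulting hash to the asserter's claim.

```python
# Illustration of checking a one-step proof on a toy stack machine: the
# referee re-executes the single disputed instruction and compares the
# resulting state hash against the asserter's claim.
import hashlib

def hash_state(stack: tuple, pc: int) -> str:
    return hashlib.sha256(repr((stack, pc)).encode()).hexdigest()

def execute_one_step(stack: tuple, pc: int, instruction: str):
    if instruction == "ADD":
        *rest, a, b = stack
        return tuple(rest) + (a + b,), pc + 1
    raise ValueError("unknown instruction")

def check_one_step_proof(pre_stack: tuple, pre_pc: int,
                         instruction: str, claimed_post_hash: str) -> bool:
    post_stack, post_pc = execute_one_step(pre_stack, pre_pc, instruction)
    return hash_state(post_stack, post_pc) == claimed_post_hash

# Honest claim: the stack holds (2, 3), one ADD leaves (5,).
claim = hash_state((5,), pc=101)
print(check_one_step_proof((2, 3), 100, "ADD", claim))   # True
```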

Once a dispute is resolved, computation can continue, mostly offchain as before.

Open Questions

There are several more questions that need to be answered to build a complete system. I won’t try to answer them here — this post is long enough already — but a working system needs to implement solutions. To see the details of what we’re building at Offchain Labs, you can check out the Arbitrum code.

One of those open questions is about incentives — how can we make sure that the parties who have the opportunity to check on others’ work actually do so? I’ll take that up soon in another post.

Ours isn’t the only approach, of course. There is surely more to be discovered. What are your ideas?

Ed Felten
Offchain Labs

Co-founder, Offchain Labs. Kahn Professor of Computer Science and Public Affairs at Princeton. Former Deputy U.S. CTO at White House.