Circumventing Layer Zero: Why Isolated Security is No Security

Krzysztof Urbański
Jan 5 · 10 min read


From the beginning of L2BEAT, we have put a lot of effort into analyzing and understanding the risks associated with L2 protocols. We do our best to be an unbiased, independent watchdog, acting in the best interest of users and the ecosystem. We do not let our personal preferences for a project or the team involved get in the way. That's why we often need to turn on red alerts or point out our concerns about various protocols, even though we value the time and work these teams put into their projects. Having security-related discussions early on allows the whole ecosystem to better prepare for potential risks and react earlier to any suspicious behavior.

Today we’d like to open up a debate on the security models of cross-chain applications. Currently, there are two approaches: shared security and per-application security. The first, shared security, is used, for example, by all rollups. The second, per-application security, is used by “omnichain” projects. The prime example of such a project is LayerZero.

Shared security vs. Isolated security

By shared security, we mean that tokens or apps running on a given infrastructure do not freely choose their security model. Instead, they have to obey whatever security requirements the infrastructure imposes. For example, optimistic rollups usually impose a 7-day finality window; apps running on such rollups cannot simply ignore or shorten this period. It may seem like an obstacle, but it’s an obstacle put in place for a reason. It provides users with a safety guarantee that they can expect to hold for whichever app they’re using on that rollup, no matter what the app’s internal security policy is. An app can only strengthen the rollup’s policy, not weaken it.

By isolated security, we mean that every app is responsible for defining its own security, without being restricted by the infrastructure in any way. At first, it may seem like a good idea. After all, app developers know best what security measures their app may need. But at the same time, it transfers the responsibility for assessing the risks of every app’s security policy to the end user. Furthermore, if app developers are free to choose their app’s policy, they may also choose to change it anytime they want. So it’s not enough to assess the risks once for every app; they should be reassessed every time an app’s policy changes.

The issue

We think that the isolated security model, where each app can freely define its own security policy, poses serious security concerns. First of all, it increases risks for end users, as they have to separately validate the risks associated with every app they intend to use.

It also increases the risk for the apps using such a model. Isolated security adds an additional risk related to security policy changes: if an attacker gains the ability to change an application’s security model, they can effectively disable it, opening the possibility to drain funds or misuse the app in any other way. There is no additional security layer on top of the application that would guard against such misuse.

Furthermore, with security policies able to change instantly at any time, it becomes practically impossible to monitor apps on a daily basis and inform users about the risks.

We find it similar to the upgradeability of smart contracts, which we already warn against at L2BEAT. We inform users about rollups and bridges that have upgradeability mechanisms in their smart contracts, as well as the exact mechanism governing upgradeability in each case. This is already pretty complex, and with an isolated security model it multiplies for every app, making it almost impossible to track effectively.

That’s why we consider an isolated security model a security risk in itself, and we propose treating every app using such a model as risky by default until proven otherwise.

The plan

We decided to test our assumptions in the real world, on mainnet. We chose the LayerZero framework for the experiment because it is one of the most popular solutions using isolated security at its core. We deployed an omnichain token that was initially safe, and later updated its security configuration in a way that allowed malicious token withdrawals. The code of the token is based on the examples provided by LayerZero and is very similar or identical to many other omnichain tokens and apps deployed in production.

But before we dive deep into the details, let’s have a brief look into what the LayerZero security model looks like.

As LayerZero’s whitepaper clearly states, its “trustless inter-chain communication” relies on two independent actors (the Oracle and the Relayer) acting together to ensure the safety of the protocol.

As LayerZero states on its website, its core concept is a “user application configurable on-chain endpoint that runs a ULN (UltraLightNode).” LayerZero’s on-chain components rely on two external off-chain parties to relay messages between chains: the Oracle and the Relayer.

Whenever any message M is sent from chain A to chain B, the following two actions take place:

  • first, the Oracle waits until the transaction sending message M on chain A gets finalized and then writes on chain B the commitment for the message bundle, for example, the hash of the block header at chain A containing message M (the exact format can vary between chains/oracles)
  • then, the Relayer sends to chain B a “proof” (for example, a Merkle proof) that the committed header contains message M
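The two steps above can be condensed into a toy model. This is an illustrative sketch only: the class and method names are ours (not LayerZero’s actual API), and a plain hash of the message stands in for a real block-header commitment and Merkle proof.

```python
# Toy model of the two-step Oracle/Relayer verification flow.
# Assumption: "proof" is simply the hash of the message, standing in
# for a real Merkle proof against a committed block header.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Endpoint:
    """Destination-chain endpoint: stores Oracle commitments, then
    validates Relayer proofs against them before accepting a message."""
    def __init__(self):
        self.commitments = set()   # commitments written by the Oracle
        self.delivered = []        # messages accepted after verification

    def oracle_commit(self, header_hash: str) -> None:
        # Step 1: the Oracle writes the commitment for the source block.
        self.commitments.add(header_hash)

    def relayer_deliver(self, message: bytes, proof: str) -> bool:
        # Step 2: the Relayer proves the message belongs to a committed block.
        if proof in self.commitments and proof == h(message):
            self.delivered.append(message)
            return True
        return False

endpoint = Endpoint()
msg = b"transfer 500M tokens to Alice"
endpoint.oracle_commit(h(msg))                          # Oracle commits
assert endpoint.relayer_deliver(msg, h(msg))            # proof matches: accepted
assert not endpoint.relayer_deliver(b"forged", h(msg))  # mismatched proof: rejected
```

As long as the Oracle and the Relayer are honest and independent, a message is only accepted if both steps agree, which is the safety property the protocol relies on.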

LayerZero makes a strong assumption that Relayer and Oracle are independent, honest actors. Even in their whitepaper, we can read that if that assumption is not met, the Relayer and Oracle can collude, resulting in a scenario where “The block header provided by the Oracle and the transaction proof provided by the Relayer are both invalid, but still match”.

LayerZero claims that “LayerZero’s design eliminates the possibility of collusion”. But in fact, that statement is not true (as we prove in the experiment showcased below), because each user application can define its own Relayer and Oracle. LayerZero does not guarantee by design that those components are independent and cannot collude. It’s up to the user application to provide those guarantees. And if the application chooses to break them, there’s nothing in LayerZero’s mechanics that can stop it from doing so.

Moreover, by default, all user applications are able to change the Relayer and Oracle at any time, completely redefining their security assumptions. So it’s not enough to check the security of a given app once, as it might change at any time after the check, as we will show in our experiment.
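A minimal sketch of what this per-application configurability means in practice. This is a hypothetical Python model, not LayerZero’s actual interface; the point is that the only gate on the change is the app owner’s key.

```python
# Hypothetical model of an isolated (per-app) security configuration:
# the app itself holds its Oracle/Relayer addresses and can swap them
# at any time. Names here are illustrative, not LayerZero's real API.

class AppConfig:
    def __init__(self, owner: str, oracle: str, relayer: str):
        self.owner = owner
        self.oracle = oracle
        self.relayer = relayer

    def set_config(self, caller: str, oracle: str, relayer: str) -> None:
        # The only gate is the app owner's key: nothing at the
        # infrastructure layer prevents, delays, or even flags the change.
        if caller != self.owner:
            raise PermissionError("only the app owner may change the config")
        self.oracle, self.relayer = oracle, relayer

cfg = AppConfig(owner="0xTeamKey", oracle="default_oracle", relayer="default_relayer")
# Whoever holds the owner key (including an attacker who compromised it)
# redefines both parties instantly:
cfg.set_config(caller="0xTeamKey", oracle="rogue_oracle", relayer="rogue_relayer")
assert (cfg.oracle, cfg.relayer) == ("rogue_oracle", "rogue_relayer")
```

Note that nothing outside the app is consulted: the security assumptions of every user holding bridged tokens change in a single transaction.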

The experiment

In our experiment, we decided to create a simple omnichain token, CarpetMoon, working both on Ethereum and Optimism, using LayerZero to communicate between both chains.

Our token initially uses the default security model provided by LayerZero, so it looks much the same as most (if not all) currently deployed LayerZero applications. Thus, it is generally as secure as any other token using LayerZero.

First, we deploy our token contracts both on Ethereum and on Optimism:

And we set up the routing so that LayerZero knows which contract corresponds to which one on both chains:

So the token is set up. It looks exactly like every other omnichain token using LayerZero with the default configuration: nothing suspicious.

We provide our test user, let’s call her Alice, with test tokens, so Alice has 1B CarpetMoon tokens on Ethereum:

Now Alice bridges those tokens to Optimism using LayerZero.

We lock the tokens in an escrow on Ethereum:

The message with the transaction is being delivered to Optimism through LayerZero:

And the bridged tokens are minted on Optimism. Alice now has 1B CarpetMoon tokens on Optimism:

OK, so everything worked as expected: Alice bridged her tokens and saw that there are 1B CarpetMoon tokens in the escrow on Ethereum and 1B CarpetMoon tokens in her account on Optimism. But to make sure that everything works correctly, she transfers half of the tokens (500M CarpetMoon) back to Ethereum.

So we start with the transaction burning 500M tokens on Optimism:

Information about that transaction gets passed to Ethereum:

And, as expected, 500M CarpetMoon tokens get delivered back to Alice's address from the escrow:

Up until now, everything works fine, exactly as assumed. Alice has checked that she can transfer tokens from Ethereum to Optimism and back again, so she has no reason to worry about her CarpetMoon tokens.

But let’s say that something goes wrong — for example, the team behind our token gets compromised, and the bad actor Bob gains access to the LayerZero config for our app.

With such access, Bob can change the Oracle and Relayer from default to the ones under his control.

Please keep in mind that this mechanism is provided to every app using LayerZero and is ingrained in LayerZero’s architecture. It’s not any kind of backdoor but rather a standard mechanism.

So Bob changes the Oracle to an EOA under his control:

And does the same with the Relayer:

And now strange things happen. With the Oracle and Relayer under Bob’s full control, he is able to steal Alice's tokens. Even though no action takes place on Optimism (the CarpetMoon tokens are still in Alice's wallet there), Bob is able to convince the CarpetMoon smart contract on Ethereum (using LayerZero’s mechanisms) that he burned tokens on the other chain, and he is able to withdraw CarpetMoon tokens on Ethereum.

First, he updates the block hash on Ethereum using the rogue Oracle:

And now he can withdraw the remaining tokens from the escrow:
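The whole attack can be condensed into a toy model. As before, this is a simplified, hypothetical sketch: a plain hash of the message stands in for the real header-commitment and proof mechanics, and all names are ours.

```python
# Toy model of the Oracle/Relayer collusion attack: once both parties
# are controlled by the attacker, a fabricated "burn" message and a
# matching "proof" pass every check the escrow can perform.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Escrow:
    """Escrow on the source chain: releases locked tokens when shown a
    proof matching a commitment previously written by the Oracle."""
    def __init__(self, locked: int):
        self.locked = locked
        self.commitments = set()

    def oracle_commit(self, header_hash: str) -> None:
        self.commitments.add(header_hash)   # trusted blindly

    def release(self, burn_msg: bytes, amount: int, proof: str) -> int:
        # The escrow can only check proof-vs-commitment consistency; it
        # has no independent way to know whether the burn really happened.
        if proof not in self.commitments or proof != h(burn_msg):
            raise ValueError("invalid proof")
        self.locked -= amount
        return amount

escrow = Escrow(locked=500_000_000)
fake_burn = b"burned 500M on the other chain"   # no such burn ever occurred
escrow.oracle_commit(h(fake_burn))              # rogue Oracle commits the lie
stolen = escrow.release(fake_burn, 500_000_000, h(fake_burn))  # rogue Relayer "proves" it
assert stolen == 500_000_000 and escrow.locked == 0
```

The commitment and the proof are both invalid with respect to the real source chain, yet they match each other, which is exactly the collusion scenario described in the whitepaper.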

The outcome

Alice won’t even know why or when something went wrong. Suddenly her CarpetMoon tokens on Optimism are no longer backed by tokens on Ethereum.

The smart contracts are non-upgradeable and acting as intended. The only suspicious activity is the change of the Oracle and Relayer, but this is a regular mechanism built into LayerZero, so Alice cannot even know whether this change was intentional. And even if Alice learned about the change, it would already be too late: the attacker can drain the funds before she can react.

And LayerZero couldn’t help here either: these were all valid executions of its mechanisms, which it can no longer control. In theory, an application can block itself from changing the Oracle and Relayer, but as far as we know, none of the already deployed applications have done so.
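For illustration, such self-blocking could look like the following sketch. This is a hypothetical pattern of ours, not taken from any deployed application: once the config is locked, the change path simply stops working.

```python
# Hypothetical sketch of an app that irreversibly locks its own
# Oracle/Relayer configuration, removing the attack path shown above.

class LockedAppConfig:
    def __init__(self, oracle: str, relayer: str):
        self.oracle = oracle
        self.relayer = relayer
        self.locked = False

    def lock(self) -> None:
        self.locked = True   # irreversible in this sketch

    def set_config(self, oracle: str, relayer: str) -> None:
        if self.locked:
            raise PermissionError("config is frozen; Oracle/Relayer cannot change")
        self.oracle, self.relayer = oracle, relayer

cfg = LockedAppConfig("default_oracle", "default_relayer")
cfg.lock()
try:
    cfg.set_config("rogue_oracle", "rogue_relayer")
    changed = True
except PermissionError:
    changed = False
assert not changed and cfg.oracle == "default_oracle"
```

Of course, freezing the config trades flexibility for safety: the app can never rotate a compromised Oracle or Relayer either, which may be why deployed apps have not done this.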

We ran this experiment to check whether anybody would notice, and as we expected, nobody did. It’s practically impossible to effectively monitor all the applications built with LayerZero to check whether their security policies have changed and to warn users when that happens.

Even if one were able to catch that the Oracle and Relayer had been changed in a way that poses security risks, by the time it happens it’s already too late. The new Oracle and Relayer may freely choose to censor or simply disable communication between chains, and users usually cannot do anything about it. This is clearly shown in our experiment: even if Alice notices the change in the application config, she can’t do much with her bridged tokens, because the new Oracle and Relayer no longer listen on the original chain and so don’t relay messages back to Ethereum.

Conclusions and CTA

As we saw above, even though our token was built using LayerZero and used its mechanics as intended, we were able to steal funds from the token’s escrow. Of course, it was the fault of the application (the CarpetMoon token in our case) and not LayerZero itself, but that proves that LayerZero by itself does not provide any security guarantees.

When LayerZero describes its security model regarding the Oracle and Relayer, it assumes that app owners (or someone in possession of their private keys) won’t do anything irrational. But that assumption is incorrect in an adversarial environment. Moreover, it requires users to treat the application owners as a trusted third party.

In practice, as a result, one cannot make any assumptions about the security of applications built using LayerZero: each app should be considered risky until proven otherwise.

Actually, the whole story started for us with a PR in which we planned to include all omnichain tokens on the L2BEAT site; we had a hard time figuring out how to assess their risks. While analyzing the risk vectors, we came up with the idea for our experiment.

For L2BEAT, the consequence is that we have to put alerts on every app built using LayerZero, warning about the possible security risks. But we would also like to open a broader discussion about security models, as we believe that isolated security is an anti-pattern that should be avoided, especially in our space.

We are confident that as isolated security models such as LayerZero’s become more and more popular, more and more projects will abuse them, causing a lot of damage and raising uncertainty about the whole industry.