Killing a Bad (Arbitrage) Bot

… to save its owners

Yannis Smaragdakis
Feb 10, 2021


by the Dedaub team

Following our previous white-hat hacks (1, 2) on contracts flagged by our analysis tools, today we’ll talk about another interesting contract. It’s hackable for about $80K, or rather its users are: the contract is just an enabler, having approvals from users and acting on their commands. However, a vulnerability in the enabler allows stealing all the users’ funds. (Of course, we mitigated the vulnerability before posting this article.)

The vulnerable contract is a sophisticated arbitrage bot, with no source on Etherscan. Since it’s an arbitrage bot, it’s not surprising that we were unable to identify either the contract’s owner/deployer or its users.

One may question whether we should have expended effort just to save an arbitrageur. However, our mission is to secure the smart contract ecosystem — via our free contract-library service, research, consulting, and audits. Furthermore, arbitrage bots do have a legitimate function in the Ethereum space: the robustness of automated market makers (e.g., Uniswap) depends on the existence of bots. By having bots define a super-efficient trading market, price manipulators have no expected benefit from biasing a price: the bots will eat their profits. (Security guaranteed by the presence of relentless competition is an enormously cool element of the Ethereum ecosystem, in our book.)

Also, thankfully, this hack is a great learning opportunity. It showcases at least three interesting elements:

  • Lack of source code, or general security-by-obscurity, won’t save you for long in this space.
  • There is a rather surprising anti-pattern/bad smell in Solidity programming: the use of this.function(...) instead of just function(...).
  • It’s a lucky coincidence when an attack allows destroying the means of attack itself! In fact, it is the most benign mitigation possible, especially when trying to save someone who is trying to stay anonymous.

Following a Bad Smell

The enabler contract has no source code available. It is not even decompiled perfectly, with several low-level elements (e.g., use of memory) failing to be converted to high-level operations. Just as an example of the complexity, here is the key function for the attack and a crucial helper function (don’t pay too close attention yet — we’ll point you at specific lines later):

Key function decompiled. Unintelligible, right?

Faced with this kind of low-level complexity, one might be tempted to give up. However, there are many red flags. What we have in our hands is a publicly callable function that performs absolutely no checks on who calls it. No msg.sender check, no checks against storage to establish the state it’s called in, none of the common ways one would protect a sensitive piece of code.
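For contrast, this is roughly what “the common ways” look like in Solidity. This is a generic illustration with made-up names, not code from this bot: a caller check plus a state check guarding a sensitive entry point.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Generic illustration of standard access control -- not the bot's code.
contract Guarded {
    address public owner;
    bool public paused;

    constructor() { owner = msg.sender; }

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner"); // msg.sender check
        _;
    }

    function sensitiveAction() external onlyOwner {
        require(!paused, "paused"); // check against storage/state
        // ... privileged logic ...
    }
}
```

The vulnerable function has none of this: anyone can call it, in any state.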

And this code is not just sensitive, it is darn sensitive. It does a delegatecall (line 55) on an address that it gets from externally-supplied data (line 76)! Maybe this is worth a few hours of reverse engineering?

Vulnerable code in contracts is not rare, but most of these contracts are not used with real money. A query of token approvals and balances shows that this one is! There is a victim account that has approved the vulnerable enabler contract for all its USDT, all its WETH, and all its USDC.

Victim token approvals, including to the enabler (0x15cb5c845b…).

And how much exactly is the victim’s USDT, USDC, and WETH? Around $77K at the time of the snapshot below.

Victim’s balances.

Reverse Engineering

The above balances and the suspicious code prompted us to do some manual reverse engineering. With the help of past transactions, the functionality of the vulnerable code was fairly easy to discern. At the end of our reverse-engineering session, here’s the massaged code that matters for the attack:

This function, 0xf080362c, or performArbitrageInternal as we chose to name it (since the hash has no publicly known reversal), merely performs a series of trades, as instructed by its caller. Examining past transactions shows that the code is exploiting arbitrage opportunities.
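To make the shape of the hazard concrete, here is a heavily simplified Solidity sketch of the pattern we describe. The names, types, and structure are hypothetical, not the actual decompiled code: a public entry point that executes caller-supplied trade steps, one of which ends up as a delegatecall to a caller-supplied address.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the pattern -- NOT the actual decompiled bot code.
contract ArbBotSketch {
    struct Step {
        address target;  // contract to interact with (e.g., a DEX), caller-supplied
        bytes data;      // call payload, caller-supplied
        bool delegate;   // whether to delegatecall instead of call
    }

    // Publicly callable, with no access control whatsoever.
    function performArbitrage(Step[] calldata steps) external {
        for (uint256 i = 0; i < steps.length; i++) {
            bool ok;
            if (steps[i].delegate) {
                // The callee's code runs with the bot's identity, storage,
                // and token approvals.
                (ok, ) = steps[i].target.delegatecall(steps[i].data);
            } else {
                (ok, ) = steps[i].target.call(steps[i].data);
            }
            require(ok, "step failed");
        }
    }
}
```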

Our enabler is an arbitrage bot and the victim account is the beneficiary of the arbitrage!

Since we did not fully reverse engineer the code, we cannot be sure what the fatal flaw in the design is. Did the programmers consider the obscurity of a bytecode-only deployment to be enough protection? Did they accidentally make function 0xf080362c/performArbitrageInternal public? Would the attack be prevented if this function were only ever called from inside the contract?

We cannot be entirely sure, but we speculate that the function was accidentally made public. Reviewing the transactions that call 0xf080362c reveals that it is never called externally, only as an internal transaction from the contract to itself.

The function being unintentionally public is an excellent demonstration of a Solidity anti-pattern.

Whenever you see the code pattern this.function(...) in Solidity, you should double-check the code.

In most object-oriented languages, prepending this to a self-call is a good pattern. It just says that the programmer wants to be unambiguous as to the receiver object of the function call. In Solidity, however, a call of the form this.function() is an external call to one’s own functionality! The call starts an entirely new sub-transaction, suffers a significant gas penalty, etc. There are some legitimate reasons for this.function() calls, but nearly none when the function is defined locally and when it has side-effects.

Even worse, writing this.function() instead of just function() means that the function has to be public (or external)! It is not possible to call an internal function as this.function(), whereas a plain function() call is fine.

This encourages making public something that probably was never intended to be.
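Here is a small, self-contained illustration (our own example, not the bot’s code). The external self-call compiles only if the target function is public or external, and it starts a brand-new sub-call in which msg.sender is the contract itself.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SelfCallDemo {
    uint256 public counter;

    // Meant as a helper, but it must be public (or external) for the
    // this.bump(...) call below to compile -- so anyone can now call it.
    function bump(uint256 n) public {
        counter += n;
    }

    function bumpTwice(uint256 n) external {
        bump(n);       // internal call: a cheap jump within the same call frame
        this.bump(n);  // external self-call: a new sub-call, extra gas, and
                       // msg.sender inside bump() is address(this)
    }
}
```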

The Operation

Armed with our reverse-engineered code, we could now put together attack parameters that would reach the delegatecall statement with our own callee. Once you reach a delegatecall, it’s game over! The callee gains full control of the contract’s identity and storage. It can do absolutely anything, including transferring the victim’s funds to an account of our choice.

But, of course, we don’t want to do that! We want to save the victim. And what’s the best way? Well, just destroy the means of attack, of course!

So, our actual attack does not involve the victim at all. We merely call selfdestruct on the enabler contract: the bot. The bot had no funds of its own, so nothing is lost by destroying it. To prevent re-deployment of an unfixed bot, we left a note on the Etherscan entry for the bot contract.
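The mechanism relies on a basic property of delegatecall: the callee’s code executes in the caller’s context, so a selfdestruct reached via delegatecall destroys the caller. A minimal sketch of the kind of payload contract this implies (illustrative only, not our exact payload):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch: when this code runs via delegatecall from the bot,
// address(this) is the bot, so it is the bot that gets destroyed.
contract Killer {
    function kill() external {
        selfdestruct(payable(msg.sender)); // the bot held no funds anyway
    }
}
```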

To really prevent deployment of a vulnerable bot, of course, one should commission the services of Dedaub. :-)
