How fast is your MEV bot? Comparing Javascript/Python/Rust

A benchmark of the three most popular languages used in MEV. Which one should you go for?

Solid Quant
16 min read · Aug 26, 2023
Tamiya Minicars were so cool back then

Choosing a programming language for your projects can be a challenging task. In fact, individuals often have varying preferences when asked, citing reasons such as speed, readability, or simply familiarity with a particular language. This diversity in opinions is quite common.

I’ve encountered this situation multiple times in the past, which led me to learn six different programming languages. While it was an enjoyable experience, I eventually stopped using C# and Java.

In the MEV (Maximal Extractable Value) space, four mainstream programming languages dominate: Python, JavaScript, Rust, and Golang.

Among these, I’ll focus on the first three and conduct a speed performance benchmark. This benchmark will simulate scenarios commonly seen in real-life trading situations. By doing so, readers can gain insight into the competitive nature of each language stack regarding MEV.

I’ve excluded Golang from this comparison for now. Despite its widespread use in the blockchain industry, Golang doesn’t yet have as many open-source MEV projects as the other three languages. My intention is for this benchmark to remain as mainstream as possible, taking into account each stack’s popularity and the libraries developers are most likely to use.

How it’s done

Last week, I released three MEV templates written in Python, Javascript, and Rust:

Today, I’ll be utilizing the same codebase to benchmark essential methods used in MEV trading. These methods and tasks encompass:

  • Making simple HTTP requests to the node (retrieving blocks/transactions data)
  • Retrieving all logs from a newly created block
  • Multicall requests for retrieving reserves data on thousands of Uniswap V2 pools
  • Subscribing to streams and decoding the incoming data (pending transactions)
  • Creating Flashbots bundles
  • Simulating / sending Flashbots bundles

These tasks mirror the routine work of MEV searchers, so it’s the performance of these specific operations, rather than generic language benchmarks, that we need to understand.

When I was starting out, I had no clear sense of how fast my MEV bot could be. So, instead of meticulously benchmarking each component, I took the simpler route of consulting websites like this:

It was very fun looking at the results here. One was:

Oh, nice. So Rust can say “hello world” at least 6x faster than Python. And:

Okay, so Python’s performance relies heavily on the compiler/runtime, and it can sometimes go faster than Rust. But I guess what’s the point if it times out.

All this was very good, but it didn’t really tell us the whole story.

This is precisely the gap that today’s article aims to bridge for you.

Let’s start

Before delving into the task, I want to outline the specs of the 2019 MacBook Pro that I’ll be using throughout this post:

  • Processor: 2.6GHz 6-core Intel Core i7
  • Memory: 32GB 2667 MHz DDR4

A fairly typical laptop to work with.

Additionally, I’m left with only 80GB of available storage (out of the 1TB), because I never clean up my disk and leave all my past projects stored there. That’s pretty standard too, right?

As for interacting with the blockchain, I’ll be employing my local Ethereum full node. It’s important to note that the node isn’t running on this machine itself, but sits on my local network, so the showcased execution times will still be slightly faster than what you’d see against a remote RPC endpoint. I’ll be using the following endpoint to connect to the node from my MacBook:

http://192.168.200.182:8545

Lastly, the mev-templates GitHub repository has been updated with the benchmark files so readers can follow along:

Here are the language runtime/compiler versions I used:

  • Python: 3.10.12
  • Node.js: 18.16.0
  • Rustc/Cargo: 1.71.1

How we’ll benchmark performance

To begin, we will initiate the HTTP provider instance using the three languages and observe their respective execution times. Although this concise task may not significantly impact the bots’ overall performance, it serves as an illustrative example of our benchmarking approach.

✅ 1. Creating the HTTP provider

  • Javascript (mev-templates/javascript/benchmarks.js):

You can try running the same code by commenting out everything other than the first benchmark task, which is:

// 1. Create HTTP provider
s = microtime.now();
const provider = new ethers.providers.JsonRpcProvider(HTTPS_URL);
took = microtime.now() - s;
console.log(`1. HTTP provider created | Took: ${took} microsec`);

Try running the command (from within mev-templates directory):

▶️ cd javascript
▶️ node benchmarks.js

You’ll get an output that looks like:

1. HTTP provider created | Took: 1443 microsec

1443 µs (microsecond) = 1.443 ms (millisecond) = 0.001443 s (second)

Executing this code once isn’t a reliable measurement, since a single run is subject to one-off setup costs and network noise. Therefore, I’ll run the same code segment multiple times and average the results to establish a more reliable final number.
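Concretely, the measure-repeat-average loop looks something like this. It’s a minimal Python sketch of the methodology (the bench helper is mine, not part of the templates), with an option to discard the warm-up run, which turns out to matter below:

import time
from web3 import Web3

HTTPS_URL = 'http://192.168.200.182:8545'

def bench(task, n: int = 30, drop_first: bool = True) -> float:
    """Run `task` n times and return the average runtime in microseconds."""
    timings = []
    for _ in range(n):
        s = time.time()
        task()
        timings.append((time.time() - s) * 1_000_000)
    # the first run pays one-off setup costs, so we optionally discard it
    runs = timings[1:] if drop_first else timings
    return sum(runs) / len(runs)

print(f'avg: {bench(lambda: Web3(Web3.HTTPProvider(HTTPS_URL))):.1f} microsec')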

After performing 30 iterations, I made an intriguing observation:

1. HTTP provider created | Took: 1124 microsec
1. HTTP provider created | Took: 33 microsec
1. HTTP provider created | Took: 27 microsec
1. HTTP provider created | Took: 23 microsec
1. HTTP provider created | Took: 40 microsec
...

The initial creation takes the longest, and the creation time drops drastically afterwards. I’ll simply discard the first run and average the rest, which comes out to roughly 60 ~ 70 µs.

  • Python (mev-templates/python/benchmarks.py):

For Python, we do the same thing as above by running the code below:

###########################
# 1️⃣ Create HTTP provider #
###########################
s = time.time()
w3 = Web3(Web3.HTTPProvider(HTTPS_URL))
took = (time.time() - s) * 1000000
print(f'1. HTTP provider created | Took: {took} microsec')

by running (from mev-templates directory):

▶️ cd python
▶️ python benchmarks.py

We’ll try running this 30 times as well:

1. HTTP provider created | Took: 1351.1180877685547 microsec
1. HTTP provider created | Took: 1018.0473327636719 microsec
1. HTTP provider created | Took: 1032.114028930664 microsec
1. HTTP provider created | Took: 936.9850158691406 microsec
1. HTTP provider created | Took: 950.5748748779297 microsec
...

The notable contrast likely stems from the two libraries’ distinct implementations. The average time for the provider creation code in Python is approximately 1100 µs.

However, there’s no need to feel disheartened yet. This is just the initial phase, and it involves the least critical piece of code that we’ll be benchmarking. In the majority of scenarios, the provider will be established prior to jumping into the core logic of our strategy. As a result, this particular code is unlikely to be executed while the bots are actively running.

  • Rust (mev-templates/rust/benches/benchmarks.rs):

Let’s try running Rust code. Leave the code below and comment out the rest:

// 1. Create HTTP provider
let s = Instant::now();
let client = Provider::<Http>::try_from(env.https_url.clone()).unwrap();
let client = Arc::new(client);
let took = s.elapsed().as_micros();
println!("1. HTTP provider created | Took: {:?} microsec", took);

We’ll run this 30 times just like before:

1. HTTP provider created | Took: 94 microsec
1. HTTP provider created | Took: 7 microsec
1. HTTP provider created | Took: 5 microsec
1. HTTP provider created | Took: 5 microsec
1. HTTP provider created | Took: 4 microsec
...

Okay, that was unexpected. I expected Rust to be extremely fast, but this is still surprising. The average time for Rust is: 8 µs.

⚡️ Now that you have a grasp of the benchmarking process, I won’t explain every task in meticulous detail. Instead, I’ll provide the results for each task in the following format:

  • Javascript: 65 µs
  • Python: 1100 µs (= 1.1 ms)
  • Rust: 8 µs

Let’s dive into the more interesting tasks now.

✅ 2. Get Block information

  • Javascript: 18 ms (millisecond)
s = microtime.now();
let block = await provider.getBlock('latest');
took = (microtime.now() - s) / 1000;
console.log(`2. New block: #${block.number} | Took: ${took} ms`);
  • Python: 12 ms (millisecond)
s = time.time()
block = w3.eth.get_block('latest')
took = (time.time() - s) * 1000
print(f'2. New block: #{block["number"]} | Took: {took} ms')
  • Rust: 5 ms (millisecond)
use tokio::runtime::Runtime;

let rt = Runtime::new().unwrap();

let task = async {
    let s = Instant::now();
    let block = client.clone().get_block(BlockNumber::Latest).await.unwrap();
    let took = s.elapsed().as_millis();
    println!(
        "2. New block: #{:?} | Took: {:?} ms",
        block.unwrap().number.unwrap(),
        took
    );
};
rt.block_on(task);

✅ 3. Multicall request of 250 contract storage reads

  • Javascript: 163 ms (= 0.16 s)
// 5. Multicall test: calling 250 requests using multicall
let reserves;

s = microtime.now()
reserves = await getUniswapV2Reserves(HTTPS_URL, Object.keys(pools).slice(0, 250));
took = (microtime.now() - s) / 1000;
console.log(`5. Multicall result for ${Object.keys(reserves).length} | Took: ${took} ms`);
  • Python: 124 ms (= 0.124 s)
# 5️. Multicall test: calling 250 requests using multicall
s = time.time()
reserves = get_uniswap_v2_reserves(HTTPS_URL, list(pools.keys())[0:250])
took = (time.time() - s) * 1000
print(f'5. Multicall result for {len(reserves)} | Took: {took} ms')
  • Rust: 80 ms (= 0.08 s)
let task = async {
    let factory_addresses = vec!["0xC0AEe478e3658e2610c5F7A4A2E1777cE9e4f2Ac"];
    let factory_blocks = vec![10794229u64];
    let pools = load_all_pools_from_v2(env.wss_url.clone(), factory_addresses, factory_blocks)
        .await
        .unwrap();

    let s = Instant::now();
    let reserves = get_uniswap_v2_reserves(env.https_url.clone(), pools[0..250].to_vec())
        .await
        .unwrap();
    let took = s.elapsed().as_millis();
    println!(
        "5. Multicall result for {:?} | Took: {:?} ms",
        reserves.len(),
        took
    );
};
rt.block_on(task);
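If you’re curious what these getUniswapV2Reserves / get_uniswap_v2_reserves helpers boil down to, the general technique is a single eth_call to a multicall contract that bundles all 250 getReserves() reads into one round trip. Here’s a minimal Python sketch of that idea, assuming the canonical Multicall3 deployment and its standard tryAggregate interface (this is not the templates’ actual helper):

from web3 import Web3
from eth_abi import decode

MULTICALL3 = '0xcA11bde05977b3631167028862bE2a173976CA11'  # canonical Multicall3 address
MULTICALL3_ABI = [{
    'name': 'tryAggregate', 'type': 'function', 'stateMutability': 'view',
    'inputs': [
        {'name': 'requireSuccess', 'type': 'bool'},
        {'name': 'calls', 'type': 'tuple[]',
         'components': [{'name': 'target', 'type': 'address'},
                        {'name': 'callData', 'type': 'bytes'}]},
    ],
    'outputs': [
        {'name': 'returnData', 'type': 'tuple[]',
         'components': [{'name': 'success', 'type': 'bool'},
                        {'name': 'returnData', 'type': 'bytes'}]},
    ],
}]
GET_RESERVES_SELECTOR = bytes.fromhex('0902f1ac')  # keccak('getReserves()')[:4]

def fetch_reserves(w3: Web3, pool_addresses: list) -> dict:
    multicall = w3.eth.contract(address=MULTICALL3, abi=MULTICALL3_ABI)
    calls = [(Web3.to_checksum_address(p), GET_RESERVES_SELECTOR) for p in pool_addresses]
    # one RPC round trip for all the storage reads
    results = multicall.functions.tryAggregate(False, calls).call()
    reserves = {}
    for pool, (ok, data) in zip(pool_addresses, results):
        if ok and len(data) == 96:  # getReserves returns three ABI words
            r0, r1, _ts = decode(['uint112', 'uint112', 'uint32'], data)
            reserves[pool] = (r0, r1)
    return reserves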

✅ 4. Batch multicall request (3,774 calls)

  • Javascript: 1100 ms (= 1.1 s)
s = microtime.now();
reserves = await batchGetUniswapV2Reserves(HTTPS_URL, Object.keys(pools));
took = (microtime.now() - s) / 1000;
console.log(`5. Bulk multicall result for ${Object.keys(reserves).length} | Took: ${took} ms`);
  • Python: 1600 ms (= 1.6 s)

This is done using multiprocessing, which is why it can be slower than the Javascript version. (A proper comparison would require Python to do the same thing the Javascript/Rust versions are doing; see the asyncio sketch at the end of this section.)

s = time.time()
reserves = batch_get_uniswap_v2_reserves(HTTPS_URL, pools)
took = (time.time() - s) * 1000
print(f'5. Bulk multicall result for {len(reserves)} | Took: {took} ms')
  • Rust: 170 ms (= 0.17 s)
let task = async {
    let factory_addresses = vec!["0xC0AEe478e3658e2610c5F7A4A2E1777cE9e4f2Ac"];
    let factory_blocks = vec![10794229u64];
    let pools = load_all_pools_from_v2(env.wss_url.clone(), factory_addresses, factory_blocks)
        .await
        .unwrap();

    let s = Instant::now();
    let reserves = batch_get_uniswap_v2_reserves(env.https_url.clone(), pools).await;
    let took = s.elapsed().as_millis();
    println!(
        "5. Bulk multicall result for {:?} | Took: {:?} ms",
        reserves.len(),
        took
    );
};
rt.block_on(task);
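As a reference for what that fairer Python comparison could look like, here’s a hedged asyncio sketch that fires all multicall chunks concurrently on a single event loop, the way the Javascript and Rust versions do. It reuses MULTICALL3, MULTICALL3_ABI, and GET_RESERVES_SELECTOR from the sketch in the previous section and assumes web3.py v6’s AsyncWeb3:

import asyncio
from web3 import AsyncWeb3, Web3
from eth_abi import decode

HTTPS_URL = 'http://192.168.200.182:8545'

async def fetch_chunk(w3: AsyncWeb3, chunk: list) -> dict:
    # one eth_call per chunk of pools, same as the synchronous version
    multicall = w3.eth.contract(address=MULTICALL3, abi=MULTICALL3_ABI)
    calls = [(Web3.to_checksum_address(p), GET_RESERVES_SELECTOR) for p in chunk]
    results = await multicall.functions.tryAggregate(False, calls).call()
    return {p: decode(['uint112', 'uint112', 'uint32'], data)[:2]
            for p, (ok, data) in zip(chunk, results) if ok}

async def batch_fetch_reserves(pools: list, chunk_size: int = 250) -> dict:
    w3 = AsyncWeb3(AsyncWeb3.AsyncHTTPProvider(HTTPS_URL))
    chunks = [pools[i:i + chunk_size] for i in range(0, len(pools), chunk_size)]
    # all chunks go out concurrently on one event loop,
    # instead of being handed out to worker processes
    partials = await asyncio.gather(*(fetch_chunk(w3, c) for c in chunks))
    return {pool: r for partial in partials for pool, r in partial.items()}

# usage: reserves = asyncio.run(batch_fetch_reserves(list(pools.keys())))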

✅ 5. Pending transactions stream

This time, we’re going to try something new. We’ll open websocket connections using each template and start streaming pending transaction data from the mempool (= txpool). Then, to compare the time it takes for each language to:

  • receive data,
  • decode data,
  • retrieve transactions data with eth_getTransactionByHash,

basically everything required to deal with pending transactions, we’ll log the real-time data to CSV files and, in the end, compare the timestamps logged by all the templates to see if there’s a noticeable difference.

  • Javascript:
const { streamPendingTransactions } = require('./src/streams');

function loggingEventHandler(eventEmitter) {
    const provider = new ethers.providers.JsonRpcProvider(HTTPS_URL);
    // Rust pending transaction stream retrieves the full transaction by tx hash.
    // So we have to do the same thing for JS code.
    let now;
    let benchmarkFile = path.join(__dirname, 'benches', '.benchmark.csv');

    eventEmitter.on('event', async (event) => {
        if (event.type == 'pendingTx') {
            try {
                let tx = await provider.getTransaction(event.txHash);
                now = microtime.now();
                let row = [tx.hash, now].join(',') + '\n';
                fs.appendFileSync(benchmarkFile, row, { encoding: 'utf-8' });
            } catch {
                // pass
            }
        }
    });
}

async function benchmarkStreams(streamFunc, handlerFunc, runTime) {
    let eventEmitter = new EventEmitter();

    const wss = await streamFunc(WSS_URL, eventEmitter);
    await handlerFunc(eventEmitter);

    setTimeout(async () => {
        await wss.destroy();
        eventEmitter.removeAllListeners();
    }, runTime * 1000);
}

async function benchmarkFunction() {
    // ...

    let streamFunc;
    let handlerFunc;

    // 6. Pending transaction async stream
    streamFunc = streamPendingTransactions;
    handlerFunc = loggingEventHandler;
    console.log('6. Logging receive time for pending transaction streams. Wait 60 seconds...');
    await benchmarkStreams(streamFunc, handlerFunc, 60);

    // ...
}
  • Python:
async def logging_event_handler(event_queue: aioprocessing.AioQueue):
    w3 = Web3(Web3.HTTPProvider(HTTPS_URL))

    f = open(BENCHMARK_DIR / '.benchmark.csv', 'w', newline='')
    wr = csv.writer(f)

    while True:
        try:
            data = await event_queue.coro_get()

            if data['type'] == 'pending_tx':
                _ = w3.eth.get_transaction(data['tx_hash'])
                now = datetime.datetime.now().timestamp() * 1000000
                wr.writerow([data['tx_hash'], int(now)])
        except Exception as _:
            break


async def benchmark_streams(stream_func: Callable,
                            handler_func: Callable,
                            run_time: int):
    event_queue = aioprocessing.AioQueue()

    stream_task = asyncio.create_task(stream_func(WSS_URL, event_queue, False))
    handler_task = asyncio.create_task(handler_func(event_queue))

    await asyncio.sleep(run_time)
    event_queue.put(0)

    stream_task.cancel()
    handler_task.cancel()


#######################################
# 6️⃣ Pending transaction async stream #
#######################################
stream_func = stream_pending_transactions
handler_func = logging_event_handler
print('6. Logging receive time for pending transaction streams. Wait 60 seconds...')
asyncio.run(benchmark_streams(stream_func, handler_func, 60))
  • Rust:
pub async fn logging_event_handler(_: Arc<Provider<Ws>>, event_sender: Sender<Event>) {
    let benchmark_file = Path::new("benches/.benchmark.csv");
    let mut writer = csv::Writer::from_path(benchmark_file).unwrap();

    let mut event_receiver = event_sender.subscribe();

    loop {
        match event_receiver.recv().await {
            Ok(event) => match event {
                Event::Block(_) => {}
                Event::PendingTx(tx) => {
                    let now = Local::now().timestamp_micros();
                    writer.serialize((tx.hash, now)).unwrap();
                }
            },
            Err(_) => {}
        }
    }
}

let task = async {
    let ws = Ws::connect(env.wss_url.clone()).await.unwrap();
    let provider = Arc::new(Provider::new(ws));

    let (event_sender, _): (Sender<Event>, _) = broadcast::channel(512);

    let mut set = JoinSet::new();

    // try running the stream for n seconds
    set.spawn(tokio::time::timeout(
        std::time::Duration::from_secs(60),
        stream_pending_transactions(provider.clone(), event_sender.clone()),
    ));

    set.spawn(tokio::time::timeout(
        std::time::Duration::from_secs(60),
        logging_event_handler(provider.clone(), event_sender.clone()),
    ));

    println!("6. Logging receive time for pending transaction streams. Wait 60 seconds...");
    while let Some(res) = set.join_next().await {
        println!("Closed: {:?}", res);
    }
};
rt.block_on(task);

Running these streams for 60 seconds will result in:

  • mev-templates/javascript/benches/.benchmark.csv
  • mev-templates/python/benches/.benchmark.csv
  • mev-templates/rust/benches/.benchmark.csv

that contain rows like the one below:

0x54d0df28b42c86a98ee20d3b27d24067f9e87e5be35eeb46b7935f329cdfdb7e,1693053836703765

This row records the pending transaction hash we received, and the time at which we finished decoding the full transaction retrieved via eth_getTransactionByHash.

We now run:

import os
import pandas as pd

from pathlib import Path

_DIR = Path(os.path.dirname(os.path.abspath(__file__)))

def df_fmt(df: pd.DataFrame, name: str) -> pd.DataFrame:
    df.columns = ['tx_hash', name]
    df['tx_hash'] = df['tx_hash'].apply(lambda x: x.lower())
    return df


if __name__ == '__main__':
    js = df_fmt(pd.read_csv(_DIR / 'javascript/benches/.benchmark.csv', header=None), 'js')
    py = df_fmt(pd.read_csv(_DIR / 'python/benches/.benchmark.csv', header=None), 'py')
    rs = df_fmt(pd.read_csv(_DIR / 'rust/benches/.benchmark.csv', header=None), 'rs')

    bench = js.merge(py, on='tx_hash').merge(rs, on='tx_hash')
    bench['py - rs'] = bench['py'] - bench['rs']
    bench['js - rs'] = bench['js'] - bench['rs']
    bench['py - js'] = bench['py'] - bench['js']
    bench = bench.drop_duplicates(['tx_hash'], keep='last')

    bench.to_csv(_DIR / '.benchmark.csv', index=None)

to see how these templates performed.
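A few lines of pandas on the merged file are enough to turn the raw deltas into summary statistics (a quick sketch; the column names match the merge script above, and the logged timestamps are in microseconds):

import pandas as pd

bench = pd.read_csv('.benchmark.csv')
# per-transaction latency differences, converted from microseconds to seconds
print((bench[['py - rs', 'js - rs', 'py - js']] / 1_000_000).describe())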

🛑 The results for this benchmark are still being investigated, so they may change in the future. The Python version turned out to be very unstable because it was using the sync version of HTTPProvider instead of the async version.

The results were quite interesting.

  1. Comparing Python / Rust in seconds:
X: pending tx id / Y: Python latency in seconds (compared to Rust)

It’s evident that, in the majority of instances, Python experiences slower access to pending transaction data compared to Rust. Nonetheless, intermittent spikes are observable, and these could potentially arise from scenarios where both the JavaScript and Rust versions simultaneously dispatch eth_getTransactionByHash requests to my Geth node. Alternatively, these variations might emerge due to the asynchronous nature of eth_getTransactionByHash calls in the two other versions, which is absent in Python. (This aspect will undergo further scrutiny, as the Python version's outcomes don't appear stable enough to warrant in-depth discussion at this point.)

  2. Comparing Javascript / Rust in seconds:

X: pending tx id / Y: JS latency in seconds (compared to Rust)

The time difference observed between Javascript and Rust outcomes is notably more reliable. It’s apparent that, on average, Javascript streams encounter a delay of 0.018 seconds (18 ms) in accessing the same data compared to the Rust version.

However, recall that it took:

  • Javascript: 18 ms
  • Python: 12 ms
  • Rust: 5 ms

to obtain block information. Presuming a similar gap for eth_getTransactionByHash, the ~13 ms difference between Javascript and Rust on a single request already accounts for most of the 18 ms stream delay, so a significant portion of the latency arises from this particular call.

👉 Benchmarking websocket streams was very difficult to pull off, but we get the picture here. Network latency won’t differ much between these languages, so we expect to see a few milliseconds’ difference between Rust and the others, and that’s what we see with Javascript / Rust.

✅ 6. Retrieving touched pools on new block update

We’re going to try out something new again. This time, our focus shifts to initiating newHeads subscriptions from each template. Upon receiving data about a newly created block, we will extract all the logs associated with it. From these logs, we will narrow down the selection to those featuring the Sync event, allowing us to access the current reserves data from the relevant pools on Sushiswap V2.

🤙 We’ll also assume that the Python and Javascript versions experience a 20 ms delay in acquiring subscription data compared to the Rust version.
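Under the hood, this boils down to a single eth_getLogs call filtered by the Uniswap V2 Sync topic. Here’s a minimal Python sketch of the idea (the templates’ getTouchedPoolReserves / get_touched_pool_reserves helpers do the equivalent, narrowed down to the pools they track):

from web3 import Web3
from eth_abi import decode

# keccak('Sync(uint112,uint112)'): emitted by every Uniswap V2-style pair
# (including Sushiswap V2) whenever its reserves change
SYNC_TOPIC = '0x1c411e9a96e071241c2f21f7726b17ae89e3cab4c78be50e062b03a9fffbbad1'

def touched_pool_reserves(w3: Web3, block_number: int) -> dict:
    logs = w3.eth.get_logs({
        'fromBlock': block_number,
        'toBlock': block_number,
        'topics': [SYNC_TOPIC],
    })
    reserves = {}
    for log in logs:
        r0, r1 = decode(['uint112', 'uint112'], log['data'])
        reserves[log['address']] = (r0, r1)  # the last Sync in a block wins
    return reserves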

  • Javascript: 32 ms
let s = microtime.now();
let reserves = await getTouchedPoolReserves(provider, event.blockNumber);
let took = (microtime.now() - s) / 1000;
let now = Date.now();
console.log(`[${now}] Block #${event.blockNumber} ${Object.keys(reserves).length} pools touched | Took: ${took} ms`);

Running this along with the newHeads stream will result in something like:

  • Python: 20 ms
s = time.time()
block_number = data['block_number']
reserves = get_touched_pool_reserves(w3, block_number)
took = (time.time() - s) * 1000
now = datetime.datetime.now()
print(f'[{now}] Block #{block_number} {len(reserves)} pools touched | Took: {took} ms')
  • Rust: 10 ms
pub async fn touched_pools_event_handler(provider: Arc<Provider<Ws>>, event_sender: Sender<Event>) {
    let mut event_receiver = event_sender.subscribe();

    loop {
        match event_receiver.recv().await {
            Ok(event) => match event {
                Event::Block(block) => {
                    let s = Instant::now();
                    match get_touched_pool_reserves(provider.clone(), block.block_number).await {
                        Ok(reserves) => {
                            let took = s.elapsed().as_millis();
                            let now = Instant::now();
                            println!(
                                "[{:?}] Block #{:?} {:?} pools touched | Took: {:?} ms",
                                now,
                                block.block_number,
                                reserves.len(),
                                took
                            );
                        }
                        Err(_) => {}
                    }
                }
                Event::PendingTx(_) => {}
            },
            Err(_) => {}
        }
    }
}

✅ 7. Sending bundles to Flashbots

If we rely on the mempool for transaction submissions, the time taken will be contingent on the network status of our local node.

However, using Flashbots is different. They offer a private RPC endpoint for users, and it’s the network connectivity here that becomes our focus.

Another point worth making is that people rely heavily on the open-source Flashbots libraries available for each language.

This is why, in benchmarking the time required to send bundles to Flashbots, I took their examples and used them in our benchmarks, assuming that most people will readily take those examples and use them (much like myself 😙)

Sending bundles to Flashbots requires everyone to take three steps:

  • Signing the desired bundles,
  • Simulating the bundles on Flashbots’ server,
  • Finally, sending the simulated bundles to Flashbots.

We shall examine the time taken for these three steps by sending a bundle that transfers 0.001 ETH to myself. The fee attached to this bundle will be deliberately set at a very low but functional level — sufficient to pass all simulations but intentionally not chosen by the builders.
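The snippets below derive the max fee from the next block’s base fee via calculateNextBlockBaseFee / calculate_next_block_base_fee. For reference, here’s a minimal Python sketch of the EIP-1559 next-base-fee rule such helpers implement (the spec formula, not necessarily the templates’ exact code):

def next_block_base_fee(gas_used: int, gas_limit: int, base_fee: int) -> int:
    # EIP-1559: the gas target is half the gas limit (elasticity multiplier of 2),
    # and the base fee moves by at most 1/8 (12.5%) per block
    target = gas_limit // 2
    if gas_used == target:
        return base_fee
    if gas_used > target:
        delta = max(base_fee * (gas_used - target) // target // 8, 1)
        return base_fee + delta
    delta = base_fee * (target - gas_used) // target // 8
    return base_fee - delta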

  • Javascript: 1.3 s
// 10. Sending Flashbots bundles
block = await provider.getBlock('latest');
blockNumber = block.number;
let nextBaseFee = calculateNextBlockBaseFee(block);
maxPriorityFeePerGas = BigInt(1);
maxFeePerGas = nextBaseFee + maxPriorityFeePerGas;

// Create/sign bundle
s = microtime.now();
let common = await bundler._common_fields();
amountIn = BigInt(parseInt(0.001 * 10 ** 18));
let tx = {
    ...common,
    to: bundler.sender.address,
    from: bundler.sender.address,
    value: amountIn,
    data: '0x',
    gasLimit: BigInt(30000),
    maxFeePerGas,
    maxPriorityFeePerGas,
};
bundle = await bundler.toBundle(tx);
signedBundle = await bundler.flashbots.signBundle(bundle);
took = (microtime.now() - s) / 1000;
console.log(`- Creating bundle took: ${took} ms`);

// Simulating bundle
s = microtime.now();
const simulation = await bundler.flashbots.simulate(signedBundle, blockNumber);

if ('error' in simulation) {
    console.warn(`Simulation Error: ${simulation.error.message}`);
    return '';
} else {
    console.log(`Simulation Success: ${JSON.stringify(simulation, null, 2)}`);
}
took = (microtime.now() - s) / 1000;
console.log(`- Running simulation took: ${took} ms`);

// Sending bundle
s = microtime.now();
const targetBlock = blockNumber + 1;
const replacementUuid = uuid.v4();
const bundleSubmission = await bundler.flashbots.sendRawBundle(signedBundle, targetBlock, { replacementUuid });

if ('error' in bundleSubmission) {
    throw new Error(bundleSubmission.error.message);
}
took = (microtime.now() - s) / 1000;
console.log(`10. Sending Flashbots bundle ${bundleSubmission.bundleHash} | Took: ${took} ms`);
  • Creating bundle: 20 ms
  • Simulating bundle: 0.65 s
  • Sending bundle: 0.65 s
  • Python: 0.56 s
# Create/sign bundle
s = time.time()
common = bundler._common_fields
amount_in = int(0.001 * 10 ** 18)
tx = {
    **common,
    'to': bundler.sender.address,
    'from': bundler.sender.address,
    'value': amount_in,
    'data': '0x',
    'gas': 30000,
    'maxFeePerGas': max_fee_per_gas,
    'maxPriorityFeePerGas': max_priority_fee_per_gas,
}
bundle = bundler.to_bundle(tx)
took = (time.time() - s) * 1000
print(f'- Creating bundle took: {took} ms')

# Simulating bundle
s = time.time()
flashbots: Flashbots = bundler.w3.flashbots

try:
    simulated = flashbots.simulate(bundle, block_number)
except Exception as e:
    print('Simulation error', e)
took = (time.time() - s) * 1000
print(f'- Running simulation took: {took} ms')
# print(simulated)

# Sending bundle
s = time.time()
replacement_uuid = str(uuid4())
response: FlashbotsBundleResponse = flashbots.send_bundle(
    bundle,
    target_block_number=block_number + 1,
    opts={'replacementUuid': replacement_uuid},
)

took = (time.time() - s) * 1000
total_took = (time.time() - _s) * 1000
print(f'10. Sending Flashbots bundle {response.bundle_hash().hex()} | Took: {took} ms')
  • Creating bundle: 14 ms
  • Simulating bundle: 0.3 s
  • Sending bundle: 0.25 s
  • Rust: 1.3 s
let bundler = Bundler::new();
let block = bundler
    .provider
    .get_block(BlockNumber::Latest)
    .await
    .unwrap()
    .unwrap();
let next_base_fee = U256::from(calculate_next_block_base_fee(
    block.gas_used.as_u64(),
    block.gas_limit.as_u64(),
    block.base_fee_per_gas.unwrap_or_default().as_u64(),
));
let max_priority_fee_per_gas = U256::from(1);
let max_fee_per_gas = next_base_fee + max_priority_fee_per_gas;

// Create/sign bundle
let s = Instant::now();
let common = bundler._common_fields().await.unwrap();
let to = NameOrAddress::Address(common.0);
let amount_in = U256::from(1) * U256::from(10).pow(U256::from(15)); // 0.001 ETH
let tx = Eip1559TransactionRequest {
    to: Some(to),
    from: Some(common.0),
    data: Some(Bytes(bytes::Bytes::new())),
    value: Some(amount_in),
    chain_id: Some(common.2),
    max_priority_fee_per_gas: Some(max_priority_fee_per_gas),
    max_fee_per_gas: Some(max_fee_per_gas),
    gas: Some(U256::from(30000)),
    nonce: Some(common.1),
    access_list: AccessList::default(),
};
let signed_tx = bundler.sign_tx(tx).await.unwrap();
let bundle = bundler.to_bundle(vec![signed_tx], block.number.unwrap());
let took = s.elapsed().as_millis();
println!("- Creating bundle took: {:?} ms", took);

// Simulating bundle
let s = Instant::now();
let simulated = bundler
    .flashbots
    .inner()
    .simulate_bundle(&bundle)
    .await
    .unwrap();

for tx in &simulated.transactions {
    if let Some(e) = &tx.error {
        println!("Simulation error: {e:?}");
    }
    if let Some(r) = &tx.revert {
        println!("Simulation revert: {r:?}");
    }
}
let took = s.elapsed().as_millis();
println!("- Running simulation took: {:?} ms", took);

// Sending bundle
let s = Instant::now();
let pending_bundle = bundler
    .flashbots
    .inner()
    .send_bundle(&bundle)
    .await
    .unwrap();

let took = s.elapsed().as_millis();
println!(
    "10. Sending Flashbots bundle ({:?}) | Took: {:?} ms",
    pending_bundle.bundle_hash, took
);
  • Creating bundle: 8 ms
  • Simulating bundle: 1.1 s
  • Sending bundle: 0.25 s

Conclusion

This was a very long post. We’ve gone over core functionalities in MEV bots, and benchmarked them using three different languages. Here’s a concise summary of the results:

  1. Getting block information:
  • JS (18 ms) / PY (12 ms) / RS (5 ms)

  2. Single batch of multicall requests (250 calls):
  • JS (163 ms) / PY (124 ms) / RS (80 ms)

  3. Batch multicall requests (3,774 calls):
  • JS (1100 ms) / PY (1600 ms) / RS (170 ms)

  4. Retrieving all events in a block and filtering by the Sync event:
  • JS (32 ms) / PY (20 ms) / RS (10 ms)

  5. Simulating/sending bundles to Flashbots:
  • JS (1.3 s) / PY (0.56 s) / RS (1.3 s)

The performance results indicate that Rust excels at swiftly handling various tasks, particularly those involving asynchronous operations and CPU-bound processes.

However, I’m a bit confused as to why the Flashbots simulation step is such a bottleneck for Rust. I’ve tested this multiple times over the past few days, and the results always come out the same. (Removing the simulation step puts the Rust version on par with the Python version.)

If anyone finds an improvement to the current implementation, I’m eager to hear about it! 😃 I’ll rerun the benchmarks and see how much it improves.

The results were nonetheless very helpful: they showed that network bottlenecks are the biggest culprit slowing down our bots.

⚡️ Come join our Discord community to take this journey together. We’re actively reviewing the code used in these blog posts to guarantee safe usage by all our members. Though still in its infancy, we’re slowly growing and collaborating on research/projects in the 💫 MEV space 🏄‍♀️🏄‍♂️:

Also, for people that want to reach out to me, they can e-mail me directly at: solidquant@gmail.com
