Gas-less way to purchase ETH for USDC
Part 5 — Backend server
Previous parts
- Part 1 — the system design
- Part 2 — Gas Broker contract
- Part 3 — Testing
- Part 4 — User interface for customer
Source code:
UI and backend API:
Smart contract:
Why a backend server is needed
Even though we are building a decentralized application, it still needs some off-chain infrastructure. The backend server is not an essential part, however; it is needed mostly for convenience, to help Customers and Gas Providers find each other. There is nothing wrong with having centralized off-chain parts in dapps: MetaMask users, for example, depend on the Infura provider, otherwise they would have to maintain a full node.
What the backend server should do
The backend server acts as a centralized store of pending orders; that is how Customers and Gas Providers discover each other. It only serves as a messenger: swaps can be completed without the backend API if the Customer shares his order with a Gas Provider over any other communication channel (for example, by copy-pasting the order JSON object from the browser console and sending it to the Gas Provider via Telegram).
Here is an overview of the functionality provided by the Backend Server:
- Receive orders from Customers via API
- Validate incoming orders
- Store orders that passed validation
- Provide list of pending orders via API (for humans)
- Broadcast new orders to subscribers (for bots)
- Periodically clean up orders list by removing closed and expired orders
Receive orders from Customers via API
Let’s create an endpoint for receiving incoming orders:
File pages/api/order
import type { NextApiRequest, NextApiResponse } from 'next'
import storage from '../../services/sqliteStorage'
import validator from '../../services/validator'
import { Status } from '../../services/storage'

type ResponseData = {
  status: string,
  errors?: any
}

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<ResponseData>
) {
  if (req.method !== 'POST') {
    return res.status(404).end()
  }

  const { order, errors, isValid } = await validator.validate(req.body)
  if (!isValid) {
    return res.status(400).json({ status: 'BAD REQUEST', errors })
  }

  try {
    const { status } = await storage.store(order)
    if (status === Status.SUCCESS) {
      res?.socket?.server?.io?.emit("message", order)
      res.status(201).json({ status: 'SUCCESS' })
    } else {
      throw new Error('Failed to create order')
    }
  } catch (error) {
    console.error(error)
    res.status(500).json({ status: 'FAILURE' })
  }
}
First the order is validated using the validator service, then the order is stored and broadcast to subscribers via Socket.IO.
Broadcast new orders to subscribers
As you might have noticed, each order is published in two different ways: via the API and via a Socket.IO message. The API is needed for manual browsing of open orders. It could also be used by bots, but there is a problem: each bot wants to receive newly published orders as soon as possible in order to submit its transaction instantly and outrun other bots. If the API were the only way to receive new orders, every bot would poll the API endpoint at high frequency, which with a large number of bots is equivalent to a DDoS attack. That’s why bots are notified about new orders via a Socket.IO event; this reduces network traffic and the load on the Backend Server.
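The push model can be sketched with Node’s built-in EventEmitter (a stand-in for the real Socket.IO server, so the names here are illustrative): a single emit fans an order out to every subscriber, whereas polling would cost one HTTP request per bot per interval.

```typescript
import { EventEmitter } from 'events'

// Stand-in for the Socket.IO server; the `orderFeed` name is hypothetical.
const orderFeed = new EventEmitter()

// Each "bot" subscribes once instead of polling the API in a tight loop.
const botA: string[] = []
const botB: string[] = []
orderFeed.on('message', (order: { permitHash: string }) => botA.push(order.permitHash))
orderFeed.on('message', (order: { permitHash: string }) => botB.push(order.permitHash))

// Publishing a new order is a single emit, regardless of subscriber count.
orderFeed.emit('message', { permitHash: '0xabc' })
```

The same shape carries over to Socket.IO: the server calls `io.emit` once per order, and each bot holds one long-lived connection.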
Validate incoming orders
To prevent a flood attack (where an attacker tries to overwhelm the server with a large number of requests carrying randomly generated payloads), each incoming order needs to be validated before broadcasting.
There are two types of validation we should run to make sure the order is not fake:
- schema validation
- validation against on-chain data
The Zod package is used for schema validation. Let’s define the schema for the order object:
const schema = z.object({
  signer: z.string().regex(ACCOUNT_ADDRESS_REGEX),
  token: z.string().regex(ACCOUNT_ADDRESS_REGEX),
  value: z.number().min(0),
  deadline: z.number().min(MIN_DEADLINE),
  reward: z.number().min(0),
  permitSignature: z.string().regex(SIGNATURE_REGEX),
  rewardSignature: z.string().regex(SIGNATURE_REGEX)
})
ACCOUNT_ADDRESS_REGEX, SIGNATURE_REGEX and MIN_DEADLINE are defined in a constants file:
export const MIN_DEADLINE = 1698709687
export const ACCOUNT_ADDRESS_REGEX = /^0x[a-fA-F0-9]{40}$/
export const SIGNATURE_REGEX = /^0x[a-fA-F0-9]{130}$/
MIN_DEADLINE should be set to the timestamp of the latest block at the moment of deployment; it will reject orders that are already expired at the moment of submission (later this will also be checked against on-chain data).
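For clarity, here is a dependency-free sketch of the same format rules the zod schema enforces (the `hasValidShape` helper is hypothetical; note the regexes are anchored with `^`/`$` and carry no `g` flag, so repeated `.test()` calls stay stateless and values with surrounding garbage are rejected):

```typescript
const MIN_DEADLINE = 1698709687
const ACCOUNT_ADDRESS_REGEX = /^0x[a-fA-F0-9]{40}$/
const SIGNATURE_REGEX = /^0x[a-fA-F0-9]{130}$/

interface RawOrder {
  signer: string
  token: string
  value: number
  deadline: number
  reward: number
  permitSignature: string
  rewardSignature: string
}

// True only when every field matches the format the zod schema enforces.
function hasValidShape(o: RawOrder): boolean {
  return (
    ACCOUNT_ADDRESS_REGEX.test(o.signer) &&
    ACCOUNT_ADDRESS_REGEX.test(o.token) &&
    o.value >= 0 &&
    o.deadline >= MIN_DEADLINE &&
    o.reward >= 0 &&
    SIGNATURE_REGEX.test(o.permitSignature) &&
    SIGNATURE_REGEX.test(o.rewardSignature)
  )
}
```

Having the shape check isolated like this makes it easy to unit-test the format rules without spinning up the server.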
If the format of the order is correct, the next step is validation against on-chain data. This validation is important because if a Gas Provider calls the swap transaction with invalid data, he will lose the gas fee. The Backend API should make sure that the order list contains only valid orders.
To validate an incoming order without spending a transaction fee, let’s add two view functions to the GasBroker contract: one will validate the permit signature and the other will validate the reward signature:
function verifyPermit(
  address signer,
  ERC20 token,
  uint256 value,
  uint256 deadline,
  uint8 permitV,
  bytes32 permitR,
  bytes32 permitS
) external view returns (string memory) {
  if (deadline < block.timestamp) return "PERMIT_DEADLINE_EXPIRED";
  if (token.balanceOf(signer) < value) return "INSUFFICIENT_BALANCE";

  // Unlike permit(), the owner's nonce is only read here, never incremented,
  // which is what allows this function to stay view.
  uint256 nonce = token.nonces(signer);

  address recoveredAddress = ecrecover(
    keccak256(
      abi.encodePacked(
        "\x19\x01",
        token.DOMAIN_SEPARATOR(),
        keccak256(
          abi.encode(
            keccak256(
              "Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)"
            ),
            signer,
            address(this),
            value,
            nonce,
            deadline
          )
        )
      )
    ),
    permitV,
    permitR,
    permitS
  );

  if (recoveredAddress != address(0) && recoveredAddress == signer) return "VALID";
  return "INVALID";
}
This function is almost identical to the permit function from the solmate implementation of the ERC20 contract. The only difference is that the state change has been removed, so the function could become view.
The verifyReward function is already part of the Gas Broker contract; let’s just make it public:
function verifyReward(
  address signer,
  uint256 value,
  bytes32 permitHash,
  uint8 sigV,
  bytes32 sigR,
  bytes32 sigS
) public view returns (bool) {
  return signer == ecrecover(hashReward(Reward(value, permitHash)), sigV, sigR, sigS);
}
Having these two functions, we can now predict whether the swap function will succeed or revert, and this check doesn’t cost any gas:
const { signer, token, value, deadline, reward, permitSignature, rewardSignature } = response.data

// validate using on-chain data
const [permitV, permitR, permitS] = splitSignature(permitSignature)
const [rewardV, rewardR, rewardS] = splitSignature(rewardSignature)
const permitHash = keccak256(permitSignature)

const verifyPermit = publicClient.readContract({
  address: GAS_BROKER_ADDRESS,
  abi: gasBrokerABI,
  functionName: 'verifyPermit',
  args: [
    signer,
    token,
    value,
    deadline,
    permitV,
    permitR,
    permitS
  ]
})

const verifyReward = publicClient.readContract({
  address: GAS_BROKER_ADDRESS,
  abi: gasBrokerABI,
  functionName: 'verifyReward',
  args: [
    signer,
    reward,
    permitHash,
    rewardV,
    rewardR,
    rewardS
  ]
})

try {
  const [permitStatus, isRewardValid] = await Promise.all([verifyPermit, verifyReward])
  if (permitStatus !== 'VALID') {
    return {
      isValid: false,
      errors: permitStatus
    }
  }
  if (!isRewardValid) {
    return {
      isValid: false,
      errors: 'Reward signature is invalid'
    }
  }
} catch (errors) {
  console.log(errors)
  return {
    isValid: false,
    errors
  }
}
The complete code of the validator service:
import { z } from 'zod'
import { secp256k1 } from '@noble/curves/secp256k1'
import { Order } from './storage'
import { GAS_BROKER_ADDRESS } from '../config'
import { MIN_DEADLINE, ACCOUNT_ADDRESS_REGEX, SIGNATURE_REGEX } from '../constants'
import { defineChain, createPublicClient, http, keccak256, toHex, fromHex } from 'viem'
import { mainnet } from 'viem/chains'
import gasBrokerABI from '../resources/gasBrokerABI.json' assert { type: 'json' }

export const localFork = defineChain({
  id: 1,
  name: 'Local',
  network: 'local',
  nativeCurrency: {
    decimals: 18,
    name: 'Ether',
    symbol: 'ETH',
  },
  rpcUrls: {
    default: {
      http: ['http://127.0.0.1:8545']
    }
  }
})

interface ValidationResult {
  isValid: boolean,
  errors?: any,
  order?: Order
}

const schema = z.object({
  signer: z.string().regex(ACCOUNT_ADDRESS_REGEX),
  token: z.string().regex(ACCOUNT_ADDRESS_REGEX),
  value: z.number().min(0),
  deadline: z.number().min(MIN_DEADLINE),
  reward: z.number().min(0),
  permitSignature: z.string().regex(SIGNATURE_REGEX),
  rewardSignature: z.string().regex(SIGNATURE_REGEX)
})

export const publicClient = createPublicClient({
  chain: (process.env.NODE_ENV === 'development') ? localFork : mainnet,
  transport: http()
})

function splitSignature(signatureHex: string) {
  const { r, s } = secp256k1.Signature.fromCompact(signatureHex.slice(2, 130))
  const v = fromHex(`0x${signatureHex.slice(130)}`, 'number')
  return [v, toHex(r), toHex(s)]
}

class Validator {
  async validate(input: {[key: string]: any}): Promise<ValidationResult> {
    // validate schema
    const response = schema.safeParse(input);
    if (!response.success) {
      return {
        isValid: false,
        errors: response.error.errors
      }
    }

    const { signer, token, value, deadline, reward, permitSignature, rewardSignature } = response.data

    // validate using on-chain data
    const [permitV, permitR, permitS] = splitSignature(permitSignature)
    const [rewardV, rewardR, rewardS] = splitSignature(rewardSignature)
    const permitHash = keccak256(permitSignature)
    const verifyPermit = publicClient.readContract({
      address: GAS_BROKER_ADDRESS,
      abi: gasBrokerABI,
      functionName: 'verifyPermit',
      args: [
        signer,
        token,
        value,
        deadline,
        permitV,
        permitR,
        permitS
      ]
    })
    const verifyReward = publicClient.readContract({
      address: GAS_BROKER_ADDRESS,
      abi: gasBrokerABI,
      functionName: 'verifyReward',
      args: [
        signer,
        reward,
        permitHash,
        rewardV,
        rewardR,
        rewardS
      ]
    })

    try {
      const [permitStatus, isRewardValid] = await Promise.all([verifyPermit, verifyReward])
      if (permitStatus !== 'VALID') {
        return {
          isValid: false,
          errors: permitStatus
        }
      }
      if (!isRewardValid) {
        return {
          isValid: false,
          errors: 'Reward signature is invalid'
        }
      }
    } catch (errors) {
      console.log(errors)
      return {
        isValid: false,
        errors
      }
    }

    return {
      isValid: true,
      order: {
        ...response.data,
        signer: signer.toLowerCase(),
        token: token.toLowerCase(),
        permitSignature: permitSignature.toLowerCase(),
        rewardSignature: rewardSignature.toLowerCase()
      }
    }
  }
}

export default new Validator()
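The splitSignature helper above leans on @noble/curves, but for well-formed input the same r/s/v split can be done with plain slicing, which also makes the byte layout explicit (splitSignaturePlain is a hypothetical name; a compact signature is 0x followed by r as 32 bytes, s as 32 bytes, and v as the final byte):

```typescript
// Splits a 65-byte hex signature (0x + r(32) + s(32) + v(1)) into [v, r, s].
function splitSignaturePlain(signatureHex: string): [number, string, string] {
  const body = signatureHex.slice(2)
  const r = '0x' + body.slice(0, 64)
  const s = '0x' + body.slice(64, 128)
  const v = parseInt(body.slice(128, 130), 16)
  return [v, r, s]
}
```

One nicety of slicing: leading zero bytes in r and s are preserved as-is, whereas round-tripping the values through bigints can shorten the hex representation.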
Store orders that passed validation
Let’s first create an abstract class for storage:
export interface Order {
  signer: string,
  token: string,
  value: bigint,
  deadline: number,
  reward: bigint,
  permitSignature: string,
  rewardSignature: string
}

export enum Status {
  SUCCESS,
  FAILURE
}

export interface Result {
  status: Status,
  error?: any
}

export interface Page<Type> {
  offset: number;
  count: number;
  total: number;
  data: Type[]
}

export interface Pagination {
  offset?: number;
  limit?: number;
}

interface Range<Type> {
  from: Type,
  to: Type
}

export interface Filter {
  signers?: string[],
  tokens?: string[],
  value?: Range<bigint>,
  deadline?: Range<number>,
  reward?: Range<bigint>
}

export abstract class Storage {
  abstract store(order: Order): Promise<Result>
  abstract find(filter: Filter, pagination: Pagination): Promise<Page<Order>>
  abstract cleanUp(timestamp: bigint, closedOrders: string[]): Promise<void>
}
It has three functions: store orders, find them, and clean up expired and closed orders.
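Before committing to sqlite, a throwaway in-memory implementation is handy for unit tests (a sketch over simplified types; InMemoryStorage is hypothetical and, unlike the real service, it keys orders by permitSignature instead of keccak256(permitSignature)):

```typescript
interface Order {
  signer: string
  token: string
  value: bigint
  deadline: number
  reward: bigint
  permitSignature: string
  rewardSignature: string
}

enum Status { SUCCESS, FAILURE }

class InMemoryStorage {
  private orders = new Map<string, Order>()

  // Rejects duplicates, mirroring the primary-key constraint in sqlite.
  store(order: Order): { status: Status } {
    if (this.orders.has(order.permitSignature)) return { status: Status.FAILURE }
    this.orders.set(order.permitSignature, order)
    return { status: Status.SUCCESS }
  }

  // Drops orders that are closed or whose permit deadline has passed.
  cleanUp(timestamp: number, closedOrders: string[]): void {
    const closed = new Set(closedOrders)
    for (const [key, order] of this.orders) {
      if (closed.has(key) || order.deadline < timestamp) this.orders.delete(key)
    }
  }

  count(): number {
    return this.orders.size
  }
}
```

Swapping this in behind the same interface keeps the API route tests fast and free of database files.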
SQLite file storage would be good enough for an MVP. Here is an implementation:
import { Sequelize, DataTypes, Op } from 'sequelize'
import { Storage, Status, Filter, Pagination, Result, Page, Order as OrderData } from './storage'
import { keccak256 } from 'viem'
import { ORDER_MAX_TTL_SEC, DB_FILE } from '../config'

const sequelize = new Sequelize({
  storage: DB_FILE,
  dialect: 'sqlite'
});

const Order = sequelize.define('Order', {
  permitHash: {
    type: DataTypes.STRING,
    primaryKey: true
  },
  signer: DataTypes.STRING,
  token: DataTypes.STRING,
  value: DataTypes.INTEGER,
  deadline: DataTypes.INTEGER,
  reward: DataTypes.INTEGER,
  permitSignature: DataTypes.STRING,
  rewardSignature: DataTypes.STRING
})

const Block = sequelize.define('Block', {
  timestamp: DataTypes.INTEGER
})

class SqliteStorage extends Storage {
  synced: boolean = false

  async sync() {
    if (!this.synced) {
      await Order.sync()
      await Block.sync()
      this.synced = true
    }
  }

  async store(order: OrderData): Promise<Result> {
    await this.sync()
    const permitHash = keccak256(order.permitSignature as `0x${string}`)
    const existingOrder = await Order.findOne({
      where: {
        permitHash
      }
    })
    if (existingOrder) {
      return {
        status: Status.FAILURE
      }
    }
    try {
      await Order.create({
        ...order,
        permitHash
      })
      return {
        status: Status.SUCCESS
      }
    } catch (error) {
      console.log(error)
      return {
        status: Status.FAILURE
      }
    }
  }

  getRangeQuery(from, to) {
    return {
      ...((from && to) ? {[Op.between]: [from, to]} : {}),
      ...((from && !to) ? {[Op.gte]: from} : {}),
      ...((!from && to) ? {[Op.lte]: to} : {})
    }
  }

  async find(filter: Filter, pagination: Pagination): Promise<Page<OrderData>> {
    await this.sync()
    const result = await Order.findAndCountAll({
      where: {
        ...(filter.signers ? {signer: filter.signers} : {}),
        ...(filter.tokens ? {token: filter.tokens} : {}),
        ...(filter.value ? { value: this.getRangeQuery(filter.value.from, filter.value.to) } : {}),
        ...(filter.deadline ? { deadline: this.getRangeQuery(filter.deadline.from, filter.deadline.to) } : {}),
        ...(filter.reward ? { reward: this.getRangeQuery(filter.reward.from, filter.reward.to) } : {}),
      },
      ...pagination
    })
    return result
  }

  async getLatestBlock() {
    await this.sync()
    return (await Block.max('timestamp')) || 16805030
  }

  async cleanUp(timestamp: bigint, closedOrders: string[]) {
    await this.sync()
    await Order.destroy({
      where: {
        [Op.or]: {
          permitHash: closedOrders,
          deadline: {
            [Op.lt]: timestamp
          },
          createdAt: {
            [Op.lt]: new Date(Date.now() - ORDER_MAX_TTL_SEC * 1e3)
          }
        }
      }
    })
    await Block.create({ timestamp })
  }
}

export default new SqliteStorage()
This code is mostly self-explanatory; for anything else I refer you to the Sequelize documentation.
The only thing I should explain here is the use of permitHash as a primary key, which is done for optimization. The Gas Broker contract emits a Swap event once an order is closed, and the parameter of this event is permitHash. Since permitHash is the primary key, it is easy to remove all closed orders.
ORDER_MAX_TTL_SEC is the maximal lifetime of an order: if no one has executed the order within this timeframe, the order is likely not profitable and should be removed from the list.
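The getRangeQuery helper in the listing above deserves a closer look. Here is a standalone sketch of the same branching with plain string keys in place of Sequelize's Op symbols (rangeQuery is a hypothetical name; it also compares against undefined rather than relying on truthiness, so a legitimate lower bound of 0 still produces a condition):

```typescript
// Builds a range condition: both bounds -> between, one bound -> gte/lte.
// String keys stand in for Sequelize's Op.between / Op.gte / Op.lte symbols.
function rangeQuery(from?: number, to?: number): Record<string, unknown> {
  return {
    ...((from !== undefined && to !== undefined) ? { between: [from, to] } : {}),
    ...((from !== undefined && to === undefined) ? { gte: from } : {}),
    ...((from === undefined && to !== undefined) ? { lte: to } : {})
  }
}
```

Exactly one of the three spreads survives for any partially specified range, which is what lets find() compose filters with plain object spreads.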
Periodically clean up orders list
Without cleanup, the orders list would grow indefinitely. To find closed orders we have to browse the Gas Broker contract logs, build a list of orders completed since the last check, and remove those orders from the database.
This should be a periodic task; however, NextJS doesn’t provide out-of-the-box functionality for schedulers or background jobs. The recommended way to handle this is an external scheduler that triggers a specific endpoint of the NextJS app. Let’s implement an api/cleanup endpoint that the scheduler will call at some interval:
import type { NextApiRequest, NextApiResponse } from 'next'
import { parseAbiItem } from 'viem'
import { publicClient } from '../../services/validator'
import storage from '../../services/sqliteStorage'
import { GAS_BROKER_ADDRESS } from '../../config'

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== 'POST') {
    return res.status(404).end()
  }

  const latestCheckedBlock = await storage.getLatestBlock()
  const logs = await publicClient.getLogs({
    address: GAS_BROKER_ADDRESS,
    event: parseAbiItem('event Swap(bytes32 permitHash)'),
    fromBlock: BigInt(latestCheckedBlock),
    toBlock: 'latest'
  })
  const closedOrders = logs.map(event => event.args.permitHash)
  const block = await publicClient.getBlock()
  console.log(`Cleaning orders starting from block ${latestCheckedBlock}`)
  console.log(logs)
  await storage.cleanUp(block.timestamp, closedOrders)
  res.status(200).json({ status: 'SUCCESS' })
}
The list of logs from the Gas Broker contract tells us which orders have been executed by Gas Providers since the last check:
const logs = await publicClient.getLogs({
  address: GAS_BROKER_ADDRESS,
  event: parseAbiItem('event Swap(bytes32 permitHash)'),
  fromBlock: BigInt(latestCheckedBlock),
  toBlock: 'latest'
})
To make the contract produce these logs, let’s add a Swap event to the Gas Broker contract:
event Swap(bytes32 permitHash);
This event will be emitted at the end of the swap function.
Each time we run a cleanup, the timestamp of the latest block is stored in the database, and the next search starts from that point: the getLatestBlock() function fetches the timestamp of the block where the latest invocation of cleanUp stopped.
Once we have the list of closed orders and the current block timestamp, we can delete stale orders from the database:
async cleanUp(timestamp: bigint, closedOrders: string[]) {
  await this.sync()
  await Order.destroy({
    where: {
      [Op.or]: {
        permitHash: closedOrders,
        deadline: {
          [Op.lt]: timestamp
        },
        createdAt: {
          [Op.lt]: new Date(Date.now() - ORDER_MAX_TTL_SEC * 1e3)
        }
      }
    }
  })
  await Block.create({ timestamp })
}
All three groups of expired orders are deleted by a single request:
- permitHash: closedOrders — condition for closed orders
- [Op.lt]: timestamp — condition for orders whose permit message has expired
- [Op.lt]: new Date(Date.now() - ORDER_MAX_TTL_SEC * 1e3) — condition for orders that haven’t been picked up by Gas Providers for a long time
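These three conditions can be mirrored in a single predicate, which makes the retention policy easy to unit-test without a database (shouldDelete and the ORDER_MAX_TTL_SEC value below are assumptions for illustration):

```typescript
const ORDER_MAX_TTL_SEC = 3600 // assumed value; the real one comes from config

interface StoredOrder {
  permitHash: string
  deadline: number   // permit deadline, unix seconds
  createdAt: number  // row creation time, unix milliseconds
}

// True when the cleanup DELETE query would remove this order.
function shouldDelete(
  order: StoredOrder,
  closedHashes: Set<string>,
  blockTimestamp: number,
  nowMs: number
): boolean {
  return (
    closedHashes.has(order.permitHash) ||              // executed on-chain
    order.deadline < blockTimestamp ||                 // permit expired
    order.createdAt < nowMs - ORDER_MAX_TTL_SEC * 1e3  // unclaimed for too long
  )
}
```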
Now let’s test the cleanup function. We have a pending order in the list:
{
  "permitHash": "0x65bdd162f5e9e1f5f5daffd44cfc36ec39df6c13528096f97cb2b65e28319a26",
  "signer": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
  "token": "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
  "value": 100000000,
  "deadline": 1699135913,
  "reward": 10000000,
  "permitSignature": "0xbdc38fd4d9ab3d425a7b781d568cdf55aca88bf9a44aa6dae9c3bf9b25598e0d6d80169dc646c7620c11a2e72708d3d627672076ad7f9de005804eae1ba5c7bf1b",
  "rewardSignature": "0x6c8a6cbfecdff14c5cfa4a43830a2c94cdc298b777b05c30d39dbef0b519af15348d990fef9ce2a5321cbfbbaf14fe9c2e149cf27e7cad8836a210f708ec41321b",
  "createdAt": "2023-11-04T20:20:00.469Z",
  "updatedAt": "2023-11-04T20:20:00.469Z"
}
Let’s execute this order using cast:
cast send 0x3AeEBbEe7CE00B11cB202d6D0F38D696A3f4Ff8e --private-key [privateKeyOfGasProvider] \
"swap(address,address,uint256,uint256,uint256,uint8,bytes32,bytes32,uint8,bytes32,bytes32)" \
"0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266" "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" 100000000 1699135913 10000000 \
0x1b 0xbdc38fd4d9ab3d425a7b781d568cdf55aca88bf9a44aa6dae9c3bf9b25598e0d 0x6d80169dc646c7620c11a2e72708d3d627672076ad7f9de005804eae1ba5c7bf \
0x1b 0x6c8a6cbfecdff14c5cfa4a43830a2c94cdc298b777b05c30d39dbef0b519af15 0x348d990fef9ce2a5321cbfbbaf14fe9c2e149cf27e7cad8836a210f708ec4132 \
--value 1ether \
--rpc-url http://127.0.0.1:8545
Now let’s run a cleanUp request:
curl --location --request POST 'http://localhost:3000/api/cleanup'
In the server logs we can see that the Swap event has been fetched:
{
  address: '0x3aeebbee7ce00b11cb202d6d0f38d696a3f4ff8e',
  topics: [
    '0xea95e17d6b2b24aca4140a312447dbe4d5d4d14b1ce5c7f7d53d32d0d99fb70e'
  ],
  data: '0x65bdd162f5e9e1f5f5daffd44cfc36ec39df6c13528096f97cb2b65e28319a26',
  blockHash: '0x697d05d102c5091a3903224d79a509aa7b65c6455342e8bec219b96b1d093696',
  blockNumber: 18474181n,
  transactionHash: '0xdd41ab27de6c54d32c85292a424556fa536e6652a57f039c91907fdf3b159a14',
  transactionIndex: 0n,
  logIndex: 3n,
  transactionLogIndex: '0x3',
  removed: false,
  args: {
    permitHash: '0x65bdd162f5e9e1f5f5daffd44cfc36ec39df6c13528096f97cb2b65e28319a26'
  },
  eventName: 'Swap'
}
And the SQL query is executed:
Executing (default): DELETE FROM `Orders` WHERE (`permitHash` IN ('0x37270cfc9ffd63660722b4f9326a4771cb4722e74e364269ed399603de2798f0', '0x65bdd162f5e9e1f5f5daffd44cfc36ec39df6c13528096f97cb2b65e28319a26') OR `deadline` < 1699129389 OR `createdAt` < '2023-11-04 20:19:15.114 +00:00')
The closed order has been deleted from the database.
The backend server is now ready and can be used by Gas Providers.
To be continued
In the next part I will create a UI for Gas Providers so they can see pending orders and execute them using MetaMask.