The StarkNet Stack

Starknet Edu · Jan 23, 2023


The goal of this article (and its companion repository) is to use a layered tech stack to build a conceptual model for StarkNet. To better understand our StarkNet stack, we will also run each protocol locally.

StarkNet is a Layer 2 validity rollup which operates “on top” of Ethereum Layer 1. At a conceptual level we can think of this as a stack of protocols that separates concerns into the various layers. A simple analog is the OSI Model used to represent your internet connection:

StarkNet Stack

These models can be split in various ways, for instance OSI vs. TCP/IP as you can see above. The correct model is the one that works best for your understanding. Guilty Gyoza separates the verifiable computing stack as follows:

A simplified model for the modular blockchain stack may look something like this:


There are various hardware specs, including packaged options, that will enable you to run an Ethereum node from home. Our goal is to build the cheapest StarkNet stack possible:


Minimum:

  • CPU: 2+ cores
  • RAM: 4 GB
  • Disk: 600 GB
  • Connection Speed: 8+ Mbps

Recommended:

  • CPU: 4+ cores
  • RAM: 16 GB+
  • Disk: 2 TB
  • Connection Speed: 25+ Mbps


OS (recommended): Ubuntu LTS, with Docker and Docker Compose installed.

sudo apt install -y jq curl net-tools

Layer 1: Data Layer

We will start at the bottom of the stack: the data layer. Here our L2 leverages the L1 for proof verification and data availability.

StarkNet leverages Ethereum as its L1, so let’s walk through setting up an Ethereum full node.

As this is the data layer, I’m sure you can guess what the hardware bottleneck will be: disk storage.
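You can get a rough sense of whether a disk is up to the task with a crude sequential-write check (not a rigorous benchmark; `conv=fdatasync` forces the data to disk so the page cache doesn’t inflate the number). An SSD should report well over 100 MB/s here, while a spinning disk often will not:

```shell
# crude sequential-write check: write 256 MB and let dd report throughput
# (conv=fdatasync flushes to disk so the number is not just page cache)
dd if=/dev/zero of=./ddtest.bin bs=1M count=256 conv=fdatasync
rm -f ./ddtest.bin
```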

This is why it’s imperative to have a high-I/O SSD as opposed to an HDD. Ethereum nodes require both an execution client and a consensus client, and communication looks as follows:

Execution Clients:

  • Geth
  • Erigon
  • Besu (used here)
  • Nethermind
  • Akula

Consensus Clients:

  • Prysm
  • Lighthouse (used here)
  • Lodestar
  • Nimbus
  • Teku

Our Besu/Lighthouse node will take ~600 GB of disk space, so navigate to a partition on your machine with sufficient capacity and run:

git clone
cd starknet-stack
docker compose -f dc-l1.yaml up -d

This will begin the fairly long process of spinning up our consensus and execution clients and syncing them to the current state of the Goerli testnet. If you would like to see the logs from either process you can run:

# tail besu logs
docker container logs -f $(docker ps | grep besu | awk '{print $1}')

# tail lighthouse logs
docker container logs -f $(docker ps | grep lighthouse | awk '{print $1}')

Let’s make sure that everything that should be listening is listening:

# should see all ports in command output

# besu ports
sudo netstat -lpnut | grep -E '30303|8551|8545'

# lighthouse ports
sudo netstat -lpnut | grep -E '5054|9000'

We’ve used Docker to abstract away a lot of the nuance of running an Eth L1 node, but the important things to note are how the two processes (EL/CL) point to each other and communicate via authenticated JSON-RPC, sharing a JWT secret:

services:
  lighthouse:
    image: sigp/lighthouse:latest
    container_name: lighthouse
    volumes:
      - ./l1_consensus/data:/root/.lighthouse
      - ./secret:/root/secret
    network_mode: "host"
    command:
      - lighthouse
      - beacon
      - --network=goerli
      - --metrics
      - --checkpoint-sync-url=
      - --execution-endpoint=
      - --execution-jwt=/root/secret/jwt.hex

  besu:
    image: hyperledger/besu:latest
    container_name: besu
    volumes:
      - ./l1_execution/data:/var/lib/besu
      - ./secret:/var/lib/besu/secret
    network_mode: "host"
    command:
      - --network=goerli
      - --rpc-http-enabled=true
      - --data-path=/var/lib/besu
      - --data-storage-format=BONSAI
      - --sync-mode=X_SNAP
      - --engine-rpc-enabled=true
      - --engine-jwt-enabled=true
      - --engine-jwt-secret=/var/lib/besu/secret/jwt.hex
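Both clients must read the same 32-byte JWT secret (the `jwt.hex` referenced above). If the repository does not already ship one, you can generate it yourself; the `./secret` path below assumes the directory layout from the compose file:

```shell
# generate a 32-byte, hex-encoded JWT secret shared by Besu and Lighthouse
mkdir -p secret
openssl rand -hex 32 | tr -d '\n' > secret/jwt.hex

# sanity check: should be exactly 64 hex characters
wc -c secret/jwt.hex
```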

Layer 2: Execution Layer

StarkNet leverages the provable programming language Cairo to express smart contract execution within the blocks of the chain. The StarkNet OS and the CairoVM work in tandem to execute these smart contracts. StarkNet uses a JSON-RPC spec similar to Ethereum’s in order to interact with the execution layer. For more detail on why we need blockchains and how they work, check out our StarkNet Primer.

In order to stay current with the propagation of the StarkNet blockchain we need a client similar to the Besu client we are running for L1. The efforts to provide full nodes for the StarkNet ecosystem are:

  • Pathfinder (used here)
  • Juno
  • Papyrus

Check that your L1 has completed its sync:

# check goerli etherscan to make sure you have the latest block

curl --location --request POST 'http://localhost:8545' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "eth_blockNumber",
    "params": [],
    "id": 1
}'
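The block number comes back as a hex quantity, so a small jq/printf pipeline converts it to the decimal height you will see on Etherscan. The response string below is a made-up example for illustration:

```shell
# convert the hex block number from a JSON-RPC response to decimal
# (RESPONSE is a hand-written example, not real node output)
RESPONSE='{"jsonrpc":"2.0","id":1,"result":"0x837df2"}'
HEX=$(echo "$RESPONSE" | jq -r '.result')
printf '%d\n' "$HEX"   # → 8617458
```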

Start your L2 Execution Client and note that we are syncing StarkNet’s state from our LOCAL ETH L1 NODE!


# from starknet-stack project root
docker compose -f dc-l2.yaml up -d

To follow the sync:

docker container logs -f $(docker ps | grep pathfinder | awk '{print $1}')

StarkNet Testnet_1 currently comprises ~600,000 blocks, so this will take some time to sync fully. To check L2 sync:

# compare `current_block_num` with `highest_block_num`

curl --location --request POST 'http://localhost:9545' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "starknet_syncing",
    "params": [],
    "id": 1
}'
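Given a response shaped like the one above, jq can report how far the sync still has to go. The response string below is a hand-written example; the field names follow the comment above:

```shell
# report remaining blocks from a starknet_syncing-style response
# (RESPONSE is a made-up example, not real node output)
RESPONSE='{"jsonrpc":"2.0","id":1,"result":{"current_block_num":412000,"highest_block_num":600000}}'
CURRENT=$(echo "$RESPONSE" | jq -r '.result.current_block_num')
HIGHEST=$(echo "$RESPONSE" | jq -r '.result.highest_block_num')
echo "$((HIGHEST - CURRENT)) blocks remaining"   # → 188000 blocks remaining
```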

To check data sizes:

sudo du -sh ./* | sort -rh

Layer 3: Application Layer

We see the same need for data refinement as we did in the OSI model. On L1, packets come over the wire as a raw stream of bytes and are then processed and filtered by higher-level protocols. When designing a decentralized application, Bob needs to be cognizant of interactions with his contract on chain, but doesn’t need to be aware of everything happening on StarkNet.

This is the role of an indexer: to process and filter the information useful to an application. Information that the application MUST be opinionated about, and that the underlying layer MUST NOT be opinionated about.

Indexers provide applications flexibility, as they can be written in any programming language and use whatever data layout suits the application.

To start our toy indexer run:


Again notice that we don’t need to leave our local setup for these interactions (http://localhost:9545).
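The indexer command itself lives in the repository, but the core idea can be sketched with jq: pull a block’s events from the node, then keep only the events emitted by your contract. The event payload below is a hand-written sample (the second `from_address` is taken from the indexer output shown later in this article):

```shell
# core of a toy indexer: filter a raw event stream down to one contract
CONTRACT="0x126dd900b82c7fc95e8851f9c64d0600992e82657388a48d3c466553d4d9246"

# sample events payload, shaped like StarkNet event objects
EVENTS='[
  {"from_address":"0x806778f9b06746fffd6ca567e0cfea9b3515432d9ba39928201d18c8dc9fdf","keys":[],"data":[]},
  {"from_address":"0x126dd900b82c7fc95e8851f9c64d0600992e82657388a48d3c466553d4d9246","keys":[],"data":[]}
]'

# keep only the events our application is opinionated about
echo "$EVENTS" | jq --arg c "$CONTRACT" '[.[] | select(.from_address == $c)]'
```

In a real indexer this filter sits inside a polling loop against our local node (http://localhost:9545), writing matches to whatever store the application prefers.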

Layer 4: Transport Layer

Once critical application information has been parsed and indexed, the application will often employ some state change based on that information. This is where the final layer comes in: the application must communicate the desired state change to the Layer 2 sequencer in order to get that change into a block. This is achieved via the same full-node/RPC spec implementation (in our case via Pathfinder).

With our local StarkNet stack we could invoke a transaction locally and it would look like this:

curl --location --request POST 'http://localhost:9545' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "starknet_addInvokeTransaction",
    "params": {
        "invoke_transaction": {
            "type": "INVOKE",
            "max_fee": "0x4f388496839",
            "version": "0x0",
            "signature": [],
            "contract_address": "0x23371b227eaecd8e8920cd429d2cd0f3fee6abaacca08d3ab82a7cdd",
            "calldata": [],
            "entry_point_selector": "0x15d40a3d6ca2ac30f4031e42be28da9b056fef9bb7357ac5e85627ee876e5ad"
        }
    },
    "id": 0
}'

As this involves setting up a local wallet and signing the transaction, we will use a browser wallet and StarkScan for simplicity.

Navigate to the contract on StarkScan and Connect to Wallet.

Enter a new_value and Write the transaction:

Once the transaction is accepted on the L2 execution layer we should see the event data come through our application layer indexer!

Example Indexer Output:

Pulled Block #: 638703
Found transaction: 0x2053ae75adfb4a28bf3a01009f36c38396c904012c5fc38419f4a7f3b7d75a5
Events to Index:
[
    {
        "from_address": "0x806778f9b06746fffd6ca567e0cfea9b3515432d9ba39928201d18c8dc9fdf",
        "keys": [...],
        "data": [...]
    },
    {
        "from_address": "0x126dd900b82c7fc95e8851f9c64d0600992e82657388a48d3c466553d4d9246",
        "keys": [...],
        "data": [...]
    },
    {
        "from_address": "0x49d36570d4e46f48e99674bd3fcc84644ddd6b96f7c741b1562b82f9e004dc7",
        "keys": [...],
        "data": [...]
    }
]

Once the transaction is accepted on L1, we can query the StarkNet Core Contract from our L1 node to see the storage keys that have been updated on our data layer!
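This query is a standard `eth_getStorageAt` call against our local Besu node. The core-contract address and storage key below are placeholders; you would substitute the real Goerli core-contract address and the storage key reported for your transaction:

```shell
# read a storage slot of the StarkNet Core Contract from our local L1 node
# CORE and SLOT are placeholders: substitute the real core-contract address
# and the storage key your transaction updated
CORE="0x0000000000000000000000000000000000000000"
SLOT="0x0"
PAYLOAD="{\"jsonrpc\":\"2.0\",\"method\":\"eth_getStorageAt\",\"params\":[\"$CORE\",\"$SLOT\",\"latest\"],\"id\":1}"

curl --location --request POST 'http://localhost:8545' \
  --header 'Content-Type: application/json' \
  --data-raw "$PAYLOAD"
```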

🎉🎉 With that we have traversed the whole StarkNet Stack 🎉🎉


These conceptual models are great because they can be refactored, reformed, and nested to get a clearer understanding of how a platform operates. For instance, the OSI Model underpins our modular stack.

With concepts like Fractal Scaling we can extend our model to include Layer 3s. You can imagine the whole stack above recurring on top of our existing stack:

In the same way that L2 compresses its transaction throughput into a proof and state change written to L1, we can do the same compression at L3 and prove/write to L2 giving us more control over the protocol rules and higher compression ratios.