Build Substrate 10x faster
While Rust projects can take a long time to compile, you usually want an experience closer to web development, with a short loop between running your code, finding an error, fixing it, and seeing the result deployed.
The primary goal of this tutorial is to get your Substrate compile times down from an hour or more to under 10 minutes, maybe less. My best was around 3 minutes, achieved with a cached build and minimal changes. (The major downside I find is that the build-cache approach, while quicker, cannot be shared team-wide, only per project repository.)
Either way, we can still save a huge amount of time, so if you are a Substrate runtime or Ink! developer without incredible local compute power, let's dive in.
This tutorial is inspired by Chevdor's tutorial, with one major change: more on-demand instancing, which cuts costs, plus using the compiled binary on OSX. Otherwise, it's mostly the same.
Assumptions
- The user is working locally on OSX; the remote server is Linux-based and located in Europe.
Requirements
- Rust environment
- Google Cloud Compute CLI gcloud account set. You can set it up now if you don't have it.
- Enabled Compute Engine API; it may take a while if you enabled it just now (see the commands after this list).
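If you haven't enabled the API yet, you should be able to authenticate and switch it on straight from the CLI, roughly like this:
gcloud auth login
gcloud services enable compute.googleapis.com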
To make it easier to work with gcloud for this tutorial, it's nice to set our default project. It can be changed again anytime you want:
❯ gcloud projects list
PROJECT_ID NAME PROJECT_NUMBER
api-project-50447965076 coweb-testbed 50447965076
cargo1 cargo1 595539940012
coweb-bc478 coweb 162173137857
edgeware-cloud-infra Edgeware Cloud Infra 1079489857434
In my case it’s edgeware-cloud-infra
gcloud config set project edgeware-cloud-infra
Now we need to tune our local system. We just need to install two utilities.
We will use rsync to transport artifacts between the local and remote machines, and cargo-remote to sync the repository to the server and compile it there.
brew install rsync
As we assume you already have Rust installed, you can run this:
cargo install --git https://github.com/sgeisler/cargo-remote
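Once that finishes, a quick sanity check that the new cargo subcommand is available won't hurt:
cargo remote --help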
Instances
Using preemptible machines, we can get a roughly 70% discount, with the downside that the instance won't run longer than 24 hours and can be terminated sooner if Compute Engine needs the capacity back.
Speaking of costs, a compute-optimized instance with 16 vCPUs and 64 GB of RAM costs $0.753 per hour. With preemptibility it drops to $0.217 per hour, a difference of $0.536 per hour, an awesome discount for what we need.
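If you want to double-check the specs of that machine type before committing to it, gcloud can describe it for you (pricing itself lives on the Google Cloud pricing pages, not in this output):
gcloud compute machine-types describe c2-standard-16 --zone=europe-west4-a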
Preparing Alpha Image
We will first create an instance where we prepare the boot disk, and then convert it to an alpha image that we reuse every time we spin up an instance. Here we create a 50 GB SSD disk with 16 vCPUs:
gcloud compute instances create instanceimage --boot-disk-type=pd-ssd --boot-disk-size=50GB --machine-type=c2-standard-16 --zone=europe-west4-a --preemptible
ssh to the machine. Remember the IP address is ephemeral, so it may change every time you create and delete the instance.
ssh ybdaba@34.91.110.161
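If you'd rather not juggle SSH keys and IPs by hand, the managed gcloud wrapper should work just as well here; it's simply an alternative to the plain ssh call above:
gcloud compute ssh instanceimage --zone=europe-west4-a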
When you are logged into your instance, you'll need to install rsync, cargo-remote, and the Rust toolchain.
We can do it with a simple one-liner:
sudo apt-get install rsync && curl https://getsubstrate.io -sSf | bash -s
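Before turning this disk into an image, it's worth confirming the tools actually landed on the PATH; assuming the default cargo install location, something like this does the trick:
source ~/.cargo/env
rustc --version
rsync --version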
When we are done, we don't need this instance anymore, so let's save credits and the Earth by stopping it with gcloud:
gcloud compute instances stop instanceimage
Now we will create an image from our new instance's source disk:
gcloud compute images create substrate-for-remote --source-disk=instanceimage
Now the image should be available; you can check for yourself in Compute Engine → Images.
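You can also verify it from the CLI, for example by listing images filtered by name:
gcloud compute images list --filter="name=substrate-for-remote"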
Since we no longer need the instance the image was built from, we can now delete it:
gcloud compute instances delete instanceimage
Now we have a few zone options, depending on where you are. I've prepared this guide for the US and Europe.
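If you're unsure which zones near you offer the c2 machine family, a rough check with the zones list is one way to find out:
gcloud compute zones list --filter="region ~ europe-west4 OR region ~ us-east4"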
Creating Compiling Instance
Now we get to the essential part. We will create an instance you may want to reuse with your team, using the image we just prepared.
gcloud compute instances create instanceedg --image=substrate-for-remote --machine-type=c2-standard-16 --zone=europe-west4-a --preemptible
If you are in the US, you may want to spin it up closer to you, on your continent:
gcloud compute instances create instanceedg --machine-type=c2-standard-16 --zone=us-east4-c --image=substrate-for-remote --preemptible
You should get something like this returned:
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instanceedg europe-west4-a c2-standard-16 true 10.164.0.5 34.91.19.121 RUNNING
Once the instance is up, it will print the EXTERNAL_IP; copy that to your clipboard.
Because we are using ephemeral instances with no fixed public-facing IP, the address will be different every time.
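If you lose the output, one way to look the external IP up again is to query the instance; the format expression below is the usual trick for pulling out the NAT IP:
gcloud compute instances describe instanceedg --zone=europe-west4-a --format='get(networkInterfaces[0].accessConfigs[0].natIP)'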
When we are set, we can clone the Substrate project we want to build with cargo-remote. In our case it's Edgeware.
git clone git@github.com:hicommonwealth/edgeware-node.git edgewareremote; cd edgewareremote
This is a magical one-liner that does most of the heavy lifting for us. Don't forget to replace the IP shown with the one from your clipboard. Now it's time to make some tea; it may take around 10 minutes. Subsequent builds, however, will leverage the build cache until you update your rustup toolchain.
cargo remote -c -r ybdaba@34.91.110.161 -e .profile -- build --release
Tada! The whole thing on 16 vCPUs took us just 9 minutes, or 11 minutes including binary delivery to the local machine. The next time you build, it will reuse the build cache and the partially synced repository, and it will take around 3 minutes.
Great time savings!
My best time for the first build was around 8 minutes:
Finished release [optimized] target(s) in 8m 05s
After we are done, we can shut down the instance to avoid costs, as that is one point of this tutorial 😇
gcloud compute instances stop instanceedg
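The next time you need to build, you can simply start the same instance again (keeping in mind that the external IP will most likely change):
gcloud compute instances start instanceedg --zone=europe-west4-a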
Then you can run the Edgeware node, provided you are on the same platform as the compiling machine (Linux, x86_64):
❯ ./target/release/edgeware --dev
But if you are not on the same platform as the compiling machine, we can use Docker. In Edgeware, we have a custom script that does the job for us:
#!/bin/bash
# Back up the original .dockerignore and swap in the remote-cargo variant,
# which keeps the Docker build context small.
cp .dockerignore .dockerignore.original
ln -fs .dockerignore.remotecargo .dockerignore
# Build the image from the remote-cargo Dockerfile and run the node with --dev
docker build -t cwl/edgeware -f remotecargo.dockerfile .
docker run -it cwl/edgeware --dev
It backs up the original .dockerignore and links the remote-cargo variant in its place, avoiding sending the whole context to the Docker daemon, then builds the image and runs it with the --dev flag. This was the easier solution, as multiple .dockerignore files in the same directory are not best practice.
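For context, a .dockerignore.remotecargo of this kind usually excludes everything except the prebuilt binary. The real file lives in the Edgeware repository; a purely hypothetical version could look like this:
# Hypothetical sketch of .dockerignore.remotecargo: exclude everything...
*
# ...except the release binary produced by cargo-remote
!target/release/edgeware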
You can run it now and watch your running local Edgeware node! 🎉
./build_and_run_devnet.sh
Alternatively, we can use docker-compose.yml to run it, where the default is the --dev flag:
docker-compose up edgeware
Here is what docker-compose.yml looks like inside. You may notice we are using our previously built local image, cwl/edgeware:
version: '3.6'
services:
  edgeware:
    image: cwl/edgeware
    logging:
      driver: "json-file"
      options:
        max-size: "100M"
        max-file: "2"
    volumes:
      - ./data_edgeware:/data/chains
    ports:
      - "0.0.0.0:9933:9933"
      - "0.0.0.0:9944:9944"
      - "0.0.0.0:30333:30333"
    command: ['--dev','--ws-external','--rpc-cors','all', '--rpc-methods=Unsafe']
That's it! The major goal of this tutorial was to show you how to cut the time to build an Edgeware node from an hour down to a few minutes while saving costs. If everything went well, you now have a local node for your very own experiments.
Author
I’m Matej and I’m taking care of developer relations at Edgewa.re. For more upcoming content follow @edg_developers on Twitter.
If you want to ask questions, feel free to show up in the Edgeware builders channel on Element or the Edgeware Developers group on Telegram.
Why use Substrate?
With Substrate, the runtime is your blockchain canvas, giving you maximum freedom to create and customize your blockchain precisely for your application or business logic. Within the runtime, you can compose any state transition function while utilizing built-in database management, libp2p networking, and the fast and safe consensus protocol GRANDPA.
Learn more about Substrate in a recent Parity Technologies post: https://www.parity.io/substrate-2-0-is-here/
What is Edgeware?
A self-improving smart contract blockchain
Edgeware is a high-performance, self-upgrading WASM smart contract platform in the Polkadot ecosystem.
Participants vote, delegate, and fund each other to upgrade the network.
If you are curious about Edgeware, learn more at Edgewa.re
Follow for more updates https://twitter.com/@heyedgeware