Substrate Blockchains and Runtime Modules: An Introduction
Get started developing with Substrate and runtime modules
This piece aims to cover how to start building custom Substrate chains that can support your own Runtime modules:
- We’ll run through a Substrate chain installation and setup
- Explore Substrate chain configuration and how to browse chain state using Polkadot JS
- Dedicate time to introduce the structure of a Runtime Module, a means of adding functionality to your chain
Developing on Substrate
Substrate and the coinciding runtime modules are developed with Rust, a statically typed language that offers speed and reliability with its memory safety features.
How to tackle Rust is a subject often brushed aside in blockchain development, but it is vitally important for adoption. We won’t be analysing Rust code in this piece, but learning the language will be a prerequisite for developers interested in Substrate.
How to tackle Rust for newcomers
Rust’s learning curve is on the steep side for a programming language. That can be attributed to some of the syntax conventions it adopts, with heavy reliance on features like generics, traits, lifetimes, and macros, among other considerations such as scope and mutability.
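To give a flavour of the features just mentioned, here is a small, self-contained sketch (unrelated to Substrate’s actual APIs) showing generics, a trait bound, and an explicit lifetime:

```rust
// A trait describing how a type presents itself.
trait Describe {
    fn describe(&self) -> String;
}

// A generic function constrained by a trait bound.
fn print_description<T: Describe>(item: &T) -> String {
    item.describe()
}

// A struct borrowing a string slice, which needs an explicit lifetime.
struct Module<'a> {
    name: &'a str,
}

impl<'a> Describe for Module<'a> {
    fn describe(&self) -> String {
        format!("module: {}", self.name)
    }
}

fn main() {
    let m = Module { name: "balances" };
    assert_eq!(print_description(&m), "module: balances");
    println!("{}", print_description(&m));
}
```

If snippets like this read comfortably, Substrate’s module code, which leans heavily on these same features, will be far less intimidating.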
Other Substrate learning resources, such as the Substrate Kitties collectibles workshop, claim that Rust is quite easy to get to grips with, but this is rarely the case unless you have moderate experience with low-level languages such as C++, having been exposed to more granular APIs that deal with system-level management.
Concretely, if you are new to Rust, be patient. Take time to understand the concepts and features the language offers, and enjoy yourself along the way. Substrate development will become a more enjoyable endeavour. The Rust book is a well-written walkthrough for both newcomers to the language and experienced users looking for a Rust refresher.
Q: Will the Rust book alone get you up to speed with the language features and concepts that Substrate adopts?
A: Yes it will, but we recommend practicing with your own demos as you read through the book to solidify your understanding. This will make coding in Substrate a lot more comfortable.
With this in mind, let’s explore some practical Substrate setup and usage, before moving into runtime modules.
Installing Substrate simply requires calling one bash script hosted by Parity at getsubstrate.io. Substrate comes in two packages:
1. Fast installation
The fast installation installs a pre-built Substrate development chain, along with Substrate Scripts, a command-line utility for configuring custom Substrate chains and runtime modules.
Run the fast installation via the --fast flag with the Substrate install script; it essentially skips the installation of some utilities that are not compulsory for running a Substrate node:
curl https://getsubstrate.io -sSf | bash -s -- --fast
This will fetch all the dependencies Substrate requires, including Rust, OpenSSL, LLVM and more, and install them if they haven’t been already.
2. Full installation
A full Substrate installation installs all the above, as well as two other utilities:
- Subkey: a utility that generates or restores Substrate keys (useful for managing accounts via the command line)
- Substrate node: a pre-configured Substrate node that can connect to the Substrate test-net
Run the following to install these tools along with Substrate Scripts and the development node:
// full Substrate installation
curl https://getsubstrate.io -sSf | bash
Once the install script is finished, update your cargo environment in order to call the newly installed programs:
// update env (alternatively, reboot your system)
source ~/.cargo/env
A compiled Substrate node will now be accessible via the substrate command. To verify Substrate and Subkey installed correctly, check the version of both programs:
// verify installations
substrate --version
subkey --version
Note: The Polkadot JS app (which we cover further down) has implemented subkey in its account management UI — with the option to manage accounts in the browser, some users will not need subkey. This may contribute to why it is an optional utility.
Being Rust-compiled binaries, Substrate and the utility tools will now reside in the ~/.cargo/bin directory by default. Check for yourself what has been installed:
// list installed cargo binaries
ls ~/.cargo/bin
You’ll notice that along with substrate and subkey, we also have the substrate-node-new and substrate-module-new binaries available to us. We’ll be using these further down to generate a new custom node and module.
Note: The other newly compiled binary is substrate-ui-new, a tool for cloning the front-end React app for managing Substrate chains. This flagged an error when I attempted to run the app, so we will assume the Polkadot JS / Substrate UI app (another TypeScript and React based app designed to configure and manage Substrate and Polkadot blockchains) is the preferred method of managing a chain.
Full Substrate installation instructions, covering a range of operating systems, can be found here.
Find more information on using subkey here.
Updating Substrate Scripts
Updating Substrate Scripts (per the official docs) requires cloning the latest version and replacing the cargo binaries with the following commands:
# clone into a temporary directory
f=$(mktemp -d)
git clone https://github.com/paritytech/substrate-up $f
cp -a $f/substrate-* ~/.cargo/bin
cp -a $f/polkadot-* ~/.cargo/bin
We now have the Substrate tools we need installed and ready to use, and we can indeed run Substrate now, via the pre-built development node. This node will begin producing blocks, but will be of little use when building your own project.
Note: Substrate developers use this pre-built node, accessible via the substrate command, for development purposes only, testing their latest builds and playing with configurations. For your own Substrate projects that will have their own Runtime modules and chain configurations, we’ll be compiling a custom node. This entails cloning the Substrate source code and building our custom node. We’ll cover this further down.
In any case, we can verify the Substrate dev chain is working with this command:
substrate --dev
Your node will be running in the Terminal, and blocks will start being produced. In order to see more information about your chain, such as its state for each supporting module, we will turn to the Polkadot JS app.
For a breakdown of substrate command-line options, check out the --help output:
substrate --help
Familiarising ourselves with help output is a tried and tested way of getting to know the capabilities of command-line programs. This build contains some useful flags for testing, such as pre-configured accounts via --alice, --bob, etc. The --light flag runs your node as a light client, with light client support built right into the framework.
These and other flags make it simple to toggle chain configs, mostly used for development purposes.
Note: You can even define your own command-line options with your custom Substrate node. Just remember to update the --help output! Rust has extensive tooling for the command line, and is my personal favourite language for command-line utility development.
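As a taste of that tooling, here is a minimal, std-only sketch of flag handling (real Substrate nodes declare their options through a dedicated CLI crate rather than by hand; the has_flag helper here is purely illustrative):

```rust
use std::env;

// Check whether a flag was passed on the command line.
// Illustrative only: production CLIs use a parsing crate.
fn has_flag(args: &[String], flag: &str) -> bool {
    args.iter().any(|a| a.as_str() == flag)
}

fn main() {
    let args: Vec<String> = env::args().collect();
    if has_flag(&args, "--dev") {
        println!("running development chain");
    } else {
        println!("running default chain");
    }
}
```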
Before venturing into custom builds, let’s review how we can inspect a Substrate chain. The primary means of doing so today is via the Polkadot JS app, developed with TypeScript and React.
Using the Polkadot JS App
The Polkadot JS app acts as a basic chain explorer and provides APIs and interfaces to manage Substrate modules. As the name suggests, it also supports Polkadot chains. We have two options for using Polkadot JS:
- Use a Parity hosted app, at https://polkadot.js.org/apps
- Clone the project’s repository and run locally on your machine
Let’s clone the project and run it locally. Once installed, we can point the node endpoint to our Substrate chain, which will be another running process on your machine.
// clone and start polkadot.js app
git clone https://github.com/polkadot-js/apps.git
mv apps polkadot.js
cd polkadot.js && yarn && yarn start

// start substrate dev chain in another terminal window
substrate --dev
Once running, go to localhost:3000 in your browser. To connect the app to your local chain, navigate to Settings in the side menu of the app and switch to your Local Node endpoint, similar to the following setup:
After Save & Reload, you will notice many other side menu links now present. These links will vary depending on which features your chain supports.
Let’s take a look at something that almost all chains will support: accounts. Head over to the Accounts section of the app, where you will see a list of accounts under the “My Accounts” tab. These are pre-configured accounts with pre-configured balances. We can also send funds between accounts, delete accounts, and back up account keys. This highlights what the Polkadot JS app is for: interacting with your Substrate chain at a UI level.
Before moving on, we’ll mention a few more things the Polkadot UI can do, just to highlight some of its functionalities:
Note: I have written a dedicated article series introducing the Ink smart contract language for Substrate here.
- As a proof-of-stake consensus blockchain, the Staking section allows you to stake funds to become a validator of transactions, with support for storing those staked funds via a “stash account”, an account that can stay offline or keep those funds in cold storage. Nominated validators and staking rewards can also be viewed, and of course you can withdraw funds from a staked position
- The Democracy section is designed to handle executive votes and governance of the chain
- The entire Substrate JSON RPC (the means of contacting Substrate externally via a range of endpoints) can be tested in the Toolbox section, whereas administrative changes to the chain can be carried out in the Sudo section
You may be wondering where those pre-configured accounts from the Substrate dev chain came from, and rightfully so. These accounts, and much more, including a “blob” of compiled code for the runtime logic itself, are stored in a “Chain Specification” file, also known as the chain spec.
Chain Configuration With a Chain Spec JSON File
A chain specification is one big JSON object, generated via the substrate build-spec command. This command works by referring to your node’s imported runtime modules and looking out for exposed configurations that need to be defined. These will either be null values or have a default value provided.
In essence, runtime modules can rely on “genesis configuration”, in other words, configuration we provide when the blockchain first initialises and constructs its state. This state is initiated via the genesis block — the first produced block of the chain. The chain spec JSON file’s job is to define this initial state.
Once generated, we can open this chain specification and amend any values we deem necessary, before running the node.
Note: What if our chain writes state we no longer want? Maybe we have updated a module, or any chain config, and want to reflect the changes from the genesis block? We can purge the chain — remove the block history — effectively resetting the node.
Your node provides the purge-chain command for doing this; it is commonly used in development workflows:
# optional --dev flag to specify development chain
substrate purge-chain --dev
On top of this, the framework provides three default chain “specs” (pre-filled configurations) that define some base values depending on whether we are running the node for testing or production purposes:
- The dev spec is the furthest specification from a real-world use case, primarily configured to help you play with your chain. You’re provided with a range of accounts, and configuration is provided for all of the pre-packaged runtime modules
- The local spec is similar to dev, and is used in the Private Network Substrate tutorial hosted by Parity. It gives multiple accounts “authority”, assuming you will want to test multi-user scenarios locally
- The staging spec is more conservative, defining a limited number of accounts and leaving out module-specific configuration. This is the spec you’ll opt for when building your production chain
We can build a new chain spec based on one of these provided options with the substrate build-spec command, outputting the result to a separate file. If I want to copy the dev chain spec for my own chain, I can run the following command, outputting the spec to a new my-chainspec.json file in my home directory:
substrate build-spec --chain=dev > ~/my-chainspec.json
Open this file in your editor to see the configuration options available.
Note: You may wish to collapse the genesis.system.runtime block, which contains a huge unreadable blob of your runtime.
The id field can be modified to your own name, along with the name field, a more human-readable name for the chain spec. Some notable common options include:
- Telemetry endpoints: Provide endpoints for a Telemetry service via telemetryEndpoints. This will allow you to populate a UI with connected nodes, not dissimilar to the Polkadot Telemetry.
- Balances: A list of balances to give initial accounts, in the event your node deals with a native token.
- Staking: Configuration in the event your chain supports the feature, with options such as the initial validator accounts, and validator and stashing variables.
Ultimately, there will be various other variables depending on which modules your runtime includes. An advised approach to help you get familiar with the chain spec file is to refer to the main Substrate node chain spec.
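To ground this, the overall shape of a chain spec file looks roughly like the following abridged sketch (field values are placeholders, and the exact genesis.runtime entries depend entirely on your runtime’s modules):

```json
{
  "name": "My Chain",
  "id": "my_chain",
  "bootNodes": [],
  "telemetryEndpoints": null,
  "genesis": {
    "runtime": {
      "system": { "code": "0x..." },
      "balances": {
        "balances": [["<account_address>", 1000000]]
      }
    }
  }
}
```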
Once you’re happy with your chain spec, we then process it into a raw encoded state, again with substrate build-spec, this time using the --raw flag:
substrate build-spec --chain ~/my-chainspec.json --raw > ~/mychain.json
And finally, to run our chain we provide this chain spec to our node. From here we will assume that you are running a custom compiled Substrate node, where we replace <node_path> with the path to that node’s binary:
<node_path> --chain ~/mychain.json --validator
Note: The --validator flag is required for your chain to start producing blocks.
Next let’s take a look at initialising and compiling custom Substrate nodes.
Initialising your own Substrate Node
So far we have been using a compiled Substrate node obtained via the installation script. This is great for testing a development chain, but it limits us when introducing our own runtime modules and custom configurations. For this, we need to download the Substrate source code.
We have two means of getting started with our own custom Substrate chain:
- Downloading a readily configured node template (such as the test-net node template downloaded in the full installation from the first section, or the node the Substrate Kitties workshop provides)
- Using Substrate Scripts, which we also installed earlier, to download the official node template: a bare-bones Substrate node with a runtime template ready for hacking
Just about all Substrate projects will start from the official node template unless you are following a workshop or extending an existing project.
Generate a new node template with the following:
// substrate-node-new <node_name> <author>
substrate-node-new my-node "Ross Bulat"
This may take some time depending on your system — the latest Substrate source code will be fetched and compiled.
Once completed, the node’s runtime will be editable within the runtime/src folder via the lib.rs file. Also included is a template for a runtime module, template.rs. We will take a look at a runtime module next.
Building your custom node
Inside your node directory, compile your node into wasm with the included build.sh script, before compiling the native binary with cargo:
# build wasm
./scripts/build.sh

# build binary
cargo build --release
Your node will now be compiled in your node’s target/release directory. Where we have been using substrate to call node-specific commands, we can now refer to our newly built binary to run commands on our custom-built chain. To purge the chain and re-run it, we’d use the binary like so:
# clear chain state
./target/release/<node_name> purge-chain --dev

# run in dev mode
./target/release/<node_name> --dev
The last subject we’ll introduce here is runtime modules. Let’s explore what they are and how to include them in a Substrate node.
Introducing Substrate Runtime Modules: Plug-in Blockchain Features
What gives Substrate practicality is a generic and modular structure that allows developers to plug functionality into their runtime, thus creating a custom blockchain that fits their requirements.
Note: Another term for the Substrate runtime is the State Transition Function, or STF. This is essentially the function that executes blocks, resulting in state changes to your blockchain.
These packages of functionality are called modules, or more specifically, Runtime Modules. The range of Runtime Modules that come pre-packaged with Substrate collectively forms a catalogue of modules called the Substrate Runtime Module Library, or SRML.
These modules are extremely useful. They add functionality for a range of features we’ve come to expect from other blockchain frameworks, and they’re available to browse through on GitHub. Having these modules readily available saves developers from re-inventing the wheel, and where entirely new features are implemented, they too can be developed as Runtime Modules.
The SRML modules have been maintained as Substrate itself has developed, making them reliable. Reliability is another key advantage of Runtime Modules: maintaining them becomes more realistic as they gain adoption.
Here are a couple of modules available today:
- Assets: A module providing support for fungible assets — think ERC20 tokens.
- Balances: A module providing support for managing account balances.
- Staking: A module providing functionality for managing funds at stake by network maintainers.
You will notice that each of these modules is formatted as a Rust crate, designed to be imported into the Substrate runtime environment.
Note: Parity has created a lot of crates to get Substrate to where it is today. Check out the index of their crates library, at crates.parity.io. This documentation is actually auto-generated, using a tool called rustdoc — more documentation on that here.
Each of the SRML modules is packaged as a crate prefixed with srml_ before the name of the module; each can be found in the left menu of Parity’s crate library.
Overview of a Module Structure
Each module is defined in its own src/lib.rs file, conforming to a specific structure. Already we can see the high-level characteristics of a Substrate module:
- A module is commonly its own crate, but does not have to be
- A module can be defined in one file, module-name.rs, or more commonly lib.rs if the module is a crate. A module can also have other supporting files, often all residing in a specific directory
- A module must conform to a particular structure, relying on specific Substrate APIs
What does this last point actually mean? Well, it depends on what your module actually does. A module provides functionality to your blockchain — that much we already know — but this functionality can come in the form of a range of components:
Events: A module can define custom events to be called when certain criteria are met (perhaps a TokenCreated event when you mint a new non-fungible token). Events are wrapped inside a decl_event! macro.
Storage: A module can define data structures to persist on-chain, such as mappings, lists, and so on. We can actually store a range of data types, most of which are documented here. Storage items are defined within a decl_storage! macro.
Dispatchable functions: Public functions that can be executed at runtime via a JSON RPC call. All dispatchable functions include an origin argument, containing information about the origin of the call to the function, such as the public address of the caller, and other metadata.
If we look at the Assets module dispatchable functions, we can see that issue, transfer and destroy are defined for us. Dispatchable functions are called via accounts. We’ll get into managing accounts using a specific tool further down.
Public or private functions: Modules can provide public functions that can be called from anywhere in your runtime environment, as well as private functions that can only be called from within the module’s implementation. Neither of these are dispatchable functions, i.e. they cannot be reached externally via the JSON RPC protocol, and they do not require the origin argument.
Structs: Modules can define structs that may be required for that module. For example, a ShipmentItem struct might be defined for a chain used to track global shipments:
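A minimal sketch of such a struct, written in plain Rust with stand-in generic parameters (in a real module, Hash and Balance would be bound to the runtime’s trait types rather than declared like this, and the field names here are purely illustrative):

```rust
// Hypothetical struct for a shipment-tracking chain.
#[derive(Debug, Clone, PartialEq)]
pub struct ShipmentItem<Hash, Balance> {
    pub id: Hash,
    pub destination: String,
    pub value: Balance,
    pub delivered: bool,
}

fn main() {
    // Concrete stand-in types for the generic parameters.
    let item = ShipmentItem::<u64, u128> {
        id: 42,
        destination: String::from("Rotterdam"),
        value: 1_000,
        delivered: false,
    };
    assert_eq!(item.value, 1_000);
    println!("{:?}", item);
}
```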
Note that we can bring other types into the struct, as we have done with Balance in the above example.
Standard types, such as Hash, are defined in the runtime primitives library, but types are also commonly defined in other runtime modules, which introduces the concept of dependencies for modules.
Modules can also be dependencies
As we have already discovered, modules can be crates, and can therefore act as dependencies within Cargo.toml. This ensures there will be no missing modules that others depend on.
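As a loose illustration, declaring an SRML module as a dependency in the runtime’s Cargo.toml might look like the following (the section name and fields here follow the SRML naming convention, but are an assumption rather than a copy from a real node template):

```toml
# Hypothetical dependency declaration for an SRML module;
# default-features are disabled for the wasm build.
[dependencies.balances]
package = "srml-balances"
default-features = false
```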
Going back to the plug and play analogy, modules are defining pluggable (imported) features that can be played (executed) inside your runtime, giving your chain additional functionality. This pluggable aspect simply means declaring and importing your required modules into your Substrate runtime. We’ll visit this further down.
With a conceptual understanding of what a Substrate module actually is, let’s use the substrate-module-new utility to generate a bare-bones module template.
Initialising a new module
Along with the substrate-node-new utility, we also downloaded a substrate-module-new utility, which pulls the latest module template for us to work with.
Within your node runtime directory, prepare a new module with the following:
Let’s run this with a name of my-module. The output will prompt us to add the module to our lib.rs file:

substrate-module-new my-module

> SRML module created as ./my-module.rs and added to git.
> Ensure that you include in your ./lib.rs the line:
> mod my_module;
The resulting file, my-module.rs, will be identical to the template.rs file that was originally included in the directory. However, running substrate-module-new is the preferred way to initiate a new runtime module, in the event template.rs has been edited.
From here we can go ahead and develop the module, wrap it in its own crate, even distribute it on Github where other developers can maintain or contribute to its development. This is the inherent power of Substrate modules, and will undoubtedly aid in development efforts as module libraries are published by developers from various fields.
Read more about importing a module into your runtime on this documentation page.
Note: I will be publishing more syntax driven insights on module development in the future.
This introduction to Substrate has covered how to install the framework, along with using the included tools to aid in deploying custom nodes and modules.
We visited the Polkadot JS app to see how it works as a Substrate management utility, acting as a chain explorer and manager at the same time. The app aims to be generic and not assume anything about what your chain supports — the UI of the app will update depending on your chain spec and what modules you have defined within your runtime.
On the subject of chain specifications, we covered how a specification JSON file is generated via substrate build-spec, the contents of which will vary depending on the modules your chain’s runtime will execute. There are also three pre-configured chain specs that fill in values for either a development or production node. A chain specification can be edited before being compiled into a raw state and used with your runtime.
We also explored runtime modules themselves, noting that they are commonly one file bundled into a cargo crate. Runtime modules need to adhere to Substrate APIs that define various components, such as events, storage, and functions, all of which become available to your runtime upon importing the module.