Terraforming Load Balanced Multi-Region Hyperledger Besu Nodes on Azure
Hyperledger Besu is an enterprise-ready Ethereum client developed by PegaSys. It can be used both for private networks and for public networks such as
mainnet. Its main advantages over traditional Ethereum clients (such as Go Ethereum and Parity) are:
- Support for multiple consensus algorithms: PoW (Proof-of-Work), which allows it to operate as a regular Ethereum client, and PoA (Proof-of-Authority) algorithms such as IBFT (Istanbul Byzantine Fault Tolerant) and Clique for more advanced scenarios.
- Private transactions between parties on the network.
- Node and account permissioning on the network.
- Modular client architecture, improving and simplifying development and upgradability in future versions or forks of the codebase.
As a provider of a blockchain-based solution, to ensure high availability and redundancy you would probably want to deploy multiple nodes in different regions and distribute traffic between those nodes depending on load and failures.
Such deployments can be automated with a tool such as Terraform, which enables a concept called Infrastructure-as-Code (IaC): we can describe an entire deployment using scripts that define the components of our infrastructure and how those components relate to one another.
As we will be using Terraform to deploy to Azure, we will need two tools: the Terraform CLI and the Azure CLI.
Once we have both tools, we can start working on our scripts.
A repository with a complete example of deploying three Hyperledger Besu nodes in three different Azure regions, with an Azure Traffic Manager that routes traffic to those nodes, can be found at cladular/hyperledger-besu-azure-terraform on GitHub.
To keep our deployment scripts organized and as simple as possible, we will break the scripts into composable modules. Each module needs its own folder containing a
main.tf file and, if needed, additional files such as variables.tf and outputs.tf.
- We will start by defining a module for creating resource groups, which accepts a name and a location and returns the name of the created group (this allows us to do some internal manipulation on the name if we want to):
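The module might look like the following sketch (variable and output names are assumptions; the repository linked above contains the actual code):

```hcl
# modules/resource-group/main.tf (illustrative)
variable "name" {}
variable "location" {}

resource "azurerm_resource_group" "main" {
  # Any internal name manipulation (e.g. a prefix) could happen here
  name     = "rg-${var.name}"
  location = var.location
}

# The (possibly manipulated) name is returned to the caller
output "name" {
  value = azurerm_resource_group.main.name
}
```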
- Next, we define a module for creating traffic manager profiles, which accepts a name and a resource group name, and returns the generated name and the fully qualified domain name (FQDN):
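A sketch of such a module, with illustrative values for the routing method and monitoring settings (Besu exposes a /liveness endpoint on its RPC port, which is convenient for health checks):

```hcl
# modules/traffic-manager-profile/main.tf (illustrative)
variable "name" {}
variable "resource_group_name" {}

resource "azurerm_traffic_manager_profile" "main" {
  name                   = var.name
  resource_group_name    = var.resource_group_name
  traffic_routing_method = "Performance"

  dns_config {
    relative_name = var.name
    ttl           = 30
  }

  # Probe Besu's liveness endpoint on the JSON-RPC port
  monitor_config {
    protocol = "http"
    port     = 8545
    path     = "/liveness"
  }
}

output "name" {
  value = azurerm_traffic_manager_profile.main.name
}

output "fqdn" {
  value = azurerm_traffic_manager_profile.main.fqdn
}
```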
- Now that we have that, we can create the Azure Traffic Manager’s “entry point”. We need a module for creating traffic manager endpoints which will be used by Traffic Manager to communicate with our nodes. It will accept a name, a resource group name (where the Traffic Manager profile was placed), a profile name (the name of the Traffic Manager profile) and a target (the address of the node this endpoint will point to):
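A minimal sketch of the endpoint module (names assumed; the target is an external FQDN, hence the externalEndpoints type):

```hcl
# modules/traffic-manager-endpoint/main.tf (illustrative)
variable "name" {}
variable "resource_group_name" {}
variable "profile_name" {}
variable "target" {}

resource "azurerm_traffic_manager_endpoint" "main" {
  name                = var.name
  resource_group_name = var.resource_group_name
  profile_name        = var.profile_name
  type                = "externalEndpoints"
  # The FQDN of the node this endpoint routes traffic to
  target              = var.target
}
```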
- We then define a module creating an Azure Container Instances resource that starts a Hyperledger Besu node. It will accept a name, a location, a resource group name (in which the resource will be created) and the external host name that will be used to access the node (the FQDN of the traffic manager profile). The module will return the FQDN of the created resource, so it can be later passed when creating a traffic manager endpoint:
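A sketch of the container module, assuming the public hyperledger/besu image and illustrative CPU/memory sizes:

```hcl
# modules/besu-node/main.tf (illustrative)
variable "name" {}
variable "location" {}
variable "resource_group_name" {}
variable "external_host_name" {}

resource "azurerm_container_group" "main" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  ip_address_type     = "Public"
  # Gives the container group a resolvable FQDN in its region
  dns_name_label      = var.name
  os_type             = "Linux"

  container {
    name   = "besu-node"
    image  = "hyperledger/besu:latest"
    cpu    = "1"
    memory = "2"

    ports {
      port     = 8545
      protocol = "TCP"
    }

    commands = [
      "besu",
      "--rpc-http-enabled",
      "--rpc-http-cors-origins=*",
      "--host-whitelist=${var.external_host_name}",
    ]
  }
}

# Passed later to the traffic manager endpoint module as its target
output "fqdn" {
  value = azurerm_container_group.main.fqdn
}
```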
Note that we will be running the Hyperledger Besu node with
--rpc-http-enabled (to allow wallets to connect to the node),
--rpc-http-cors-origins=* (some wallets pass an
Origin: null header, which results in an error returned from the node unless all origins are allowed) and
--host-whitelist=<EXTERNAL HOST NAME> (to allow the nodes to respond to requests made with the host name of the Traffic Manager).
- Our final module will be a composition of three modules we defined in previous steps, packing a single-location deployment into one reusable script. It will accept a location, a profile name (Traffic Manager), a profile resource group (the Traffic Manager's resource group), an external host name and a deployment name (a unique name for our entire deployment, more on that later):
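The composition might be sketched as follows (module source paths and names are assumptions based on the structure described above):

```hcl
# modules/node-deployment/main.tf (illustrative)
variable "location" {}
variable "profile_name" {}
variable "profile_resource_group" {}
variable "external_host_name" {}
variable "deployment_name" {}

# A resource group per location
module "resource_group" {
  source   = "../resource-group"
  name     = "${var.deployment_name}-${var.location}"
  location = var.location
}

# The Besu node running in that resource group
module "besu_node" {
  source              = "../besu-node"
  name                = "${var.deployment_name}-${var.location}"
  location            = var.location
  resource_group_name = module.resource_group.name
  external_host_name  = var.external_host_name
}

# The endpoint pointing Traffic Manager at this node
module "endpoint" {
  source              = "../traffic-manager-endpoint"
  name                = "${var.deployment_name}-${var.location}"
  resource_group_name = var.profile_resource_group
  profile_name        = var.profile_name
  target              = module.besu_node.fqdn
}
```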
The Main Script
Now that all our modules are ready, we will create our main script (in a dedicated folder called “example”, in our case). It will define:
- The Azure provider, pinned to a specific version
- A resource group for the Traffic Manager
- A Traffic Manager
- Three single-node deployments, one per region
It will also output the FQDN of the Traffic Manager, so we can copy it and use it to test our environment.
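Putting those pieces together, the main script might look like this (provider version, module paths and region names are illustrative):

```hcl
# example/main.tf (illustrative)
provider "azurerm" {
  version = "~> 2.0"
  features {}
}

locals {
  # Used in FQDNs, so it must not collide with existing names
  deployment_name = "example"
}

# Resource group holding the Traffic Manager
module "resource_group" {
  source   = "../modules/resource-group"
  name     = local.deployment_name
  location = "westeurope"
}

# The Traffic Manager profile itself
module "traffic_manager" {
  source              = "../modules/traffic-manager-profile"
  name                = local.deployment_name
  resource_group_name = module.resource_group.name
}

# One single-node deployment per region
module "deployment_westeurope" {
  source                 = "../modules/node-deployment"
  location               = "westeurope"
  profile_name           = module.traffic_manager.name
  profile_resource_group = module.resource_group.name
  external_host_name     = module.traffic_manager.fqdn
  deployment_name        = local.deployment_name
}

# ...two more deployment blocks, e.g. for "eastus" and "southeastasia"...

# Copy this from the terraform output to test the environment
output "fqdn" {
  value = module.traffic_manager.fqdn
}
```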
Note the definition of the local deployment_name with the value example. Because this value is used as part of the FQDN of some of the resources, it must be a value that will not cause errors related to already-existing FQDNs.
Run The Deployment
Running our scripts only requires executing the following three commands:
- Log in to Azure:
az login.
- Initialize Terraform's working directory:
terraform init <PATH TO MAIN SCRIPT FOLDER>.
- Apply our scripts to our Azure subscription:
terraform apply <PATH TO MAIN SCRIPT FOLDER> (you will be asked to type
yes to actually execute the operation).
Once the deployment completes, you should see the FQDN of the Traffic Manager at the end of the output text. We can use that FQDN to construct a URL that can be accessed by any Ethereum-compatible wallet software to ensure that everything works.
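As an illustration (the FQDN below is hypothetical), the URL is simply the FQDN with Besu's default JSON-RPC port, 8545; a quick liveness probe against the live deployment could be an eth_blockNumber call:

```shell
# Hypothetical FQDN copied from the terraform output
FQDN="example.trafficmanager.net"
# Besu's JSON-RPC endpoint listens on port 8545 by default
URL="http://${FQDN}:8545"
echo "$URL"
# Against the live deployment, the node can be probed with:
# curl -s -X POST "$URL" -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

The same URL is what you would paste into a wallet's custom-RPC settings.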
Once we are done with our example, we can run
terraform destroy <PATH TO MAIN SCRIPT FOLDER> to remove all the resources that were created by the scripts.
The above example is a very basic one. Hyperledger Besu can be deployed in various scenarios, some of which increase the complexity of the deployment (such as a permissioned-node configuration). Additionally, a production-grade deployment would likely also include a firewall, private virtual networks for the nodes, more than one node in each location and other parts that make the deployment more complex.