LinkPool Development Update — Our Infrastructure

Jonny Huxtable
Published in LinkPool
Feb 10, 2018 · 6 min read

It’s been over a month since our last development update, and since then we’ve been busy building the infrastructure that will power LinkPool for years to come.

Most of the work to build our infrastructure has been completed. As a service, we’re ready to run our network of ChainLink nodes, and we’re ready for those nodes to receive staking through our contract.

What we’re most proud of is that our network is ready to scale to whatever size is needed. We can scale the network up and down as demand increases or decreases, and with our disaster recovery solution in place we can reliably achieve as close to 100% uptime as possible.

The majority of this development update will focus on how we’ve built this infrastructure and why we’ve made the choices we have.

Regarding Fees

We’ve seen a lot of comments regarding our makers fees off the back of the previous article that Mat Beale wrote here around our economics. Since the purpose of this article is to describe the infrastructure in detail, it should hopefully give some more context as to why the fee has been set to the percentage mentioned in our previous article.

In addition, we’ve seen a lot of comparisons to typical mining pools. LinkPool will not be set up like any typical mining pool, because we run and manage all the nodes in our network. As an end-user, you won’t be contributing any hardware to our node pool. You’ll be able to simply browse to our website and stake your tokens; nothing else is needed.

Since our makers fees are the only source of income for the platform, they need to cover our overheads while also generating income to fund the future expansion of LinkPool.

Our Network

As I wrote back in December in our first article, we are using AWS to host our infrastructure. If this is the first time you’re reading about AWS, it’s a cloud platform created by Amazon that offers extremely reliable and scalable cloud infrastructure. Many of the services you know and love run on AWS; you just won’t realise it!

Using AWS allows us to set up our infrastructure to be fully automated, fully self-managed and completely seamless, without setting any hard limits on our capability to expand. We’ve leveraged all the great functionality within it to create LinkPool, and I’m excited to explain our solution in great detail!

We’re proud to leverage Amazon Web Services

Chosen Solution

We spent a lot of time testing different designs within AWS. Since AWS provides a lot of functionality, there are many ways to achieve the same end goal, each with its own nuances. The service we finalised and agreed on using is AWS ECS.

What is AWS ECS?
AWS ECS is a container service that allows you to run your container-based services in a production environment. It’s a highly scalable, high-performance container orchestration service. In simpler terms, it’s the service that allows us to scale up as demand requires and keeps us as close to 100% uptime as possible.
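
To make that a little more concrete, here’s a minimal sketch of how a containerised node could be described to ECS and run as a managed service, using Python and boto3. The cluster, image and service names are purely illustrative and aren’t our real configuration.

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Describe the container that makes up one node as an ECS task definition.
ecs.register_task_definition(
    family="chainlink-node",  # hypothetical task family name
    containerDefinitions=[{
        "name": "chainlink",
        "image": "example/chainlink-node:latest",  # placeholder image
        "memory": 512,
        "essential": True,
        "portMappings": [{"containerPort": 6688}],
    }],
)

# Run it as a long-lived service; ECS keeps 'desiredCount' healthy copies running.
ecs.create_service(
    cluster="linkpool",             # hypothetical cluster name
    serviceName="linkpool-node-1",  # hypothetical service name
    taskDefinition="chainlink-node",
    desiredCount=2,                 # two nodes per LinkPool instance, as described below
)
```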

Architecture

Here’s a diagram which shows the set-up of our nodes within AWS:

LinkPool High-Level Solution Design

The above diagram shows how a single LinkPool instance will be constructed within AWS. Each instance consists of two full ChainLink nodes, with each node able to process requests simultaneously. This is what provides our disaster recovery solution, and it will give us as close to 100% uptime as possible.

Our solution achieves this through the AWS-managed container service (ECS). ECS instantly detects if either of the two nodes in the cluster is unhealthy. When that happens, AWS disconnects the unhealthy node and then creates a brand-new copy of it within seconds.

Even if a node is detected as down, the pool wouldn’t be impacted and wouldn’t lose any LINK tokens in penalties, because the healthy node facilitates all the requests while the other is in an unhealthy state. 100% uptime achieved (or as close to it as possible)!
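
To illustrate that behaviour, a small script like the one below (boto3 again, same hypothetical names) can watch a service and show ECS holding the running task count at the desired count; when a task becomes unhealthy, ECS records the replacement in the service’s event log.

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Look up the service (hypothetical cluster/service names, as before).
resp = ecs.describe_services(cluster="linkpool", services=["linkpool-node-1"])
service = resp["services"][0]

# ECS continually works to keep runningCount equal to desiredCount.
print("desired:", service["desiredCount"], "running:", service["runningCount"])

# Recovery actions (tasks stopped, replacements started) appear as service events.
for event in service["events"][:5]:
    print(event["createdAt"], event["message"])
```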

Keep in mind that this has been built to the Ruby node specification and the services which run inside each instance will change upon the Go release.

Node Upgrades

One of the main aspects of the LinkPool service that determined which approach we took is being able to roll out upgrades to the nodes without taking them offline.

This is important for a variety of reasons:

  • Being able to add external adapters and new data sources to all the nodes at once.
  • Upgrading the node software to the latest version across all the nodes simultaneously.

Our solution handles all this effortlessly and without any downtime. It achieves this by employing something called ‘rolling restarts’. When we upgrade the version of the node, the following will happen:

  1. AWS creates two new nodes with the new node version
  2. Waits until the new nodes are available, and checks if they’re healthy
  3. Sets the two new nodes as the primary nodes
  4. Disconnects the two nodes running the older version

With the above, we won’t see any staked LINK tokens lost as penalties, as all the requests assigned to our nodes will still be processed while the upgrade is ongoing. The whole process also only takes around 2–3 minutes!
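
In ECS terms, a rolling restart like this comes down to registering a new revision of the task definition and pointing the service at it. Here’s a hedged sketch of what that could look like with boto3 (hypothetical names again, with a new image tag standing in for the node upgrade); the deployment settings tell ECS it may run up to double capacity during the switch but must never drop below 100% healthy.

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Register a new revision of the task definition with the upgraded node image.
new_task = ecs.register_task_definition(
    family="chainlink-node",
    containerDefinitions=[{
        "name": "chainlink",
        "image": "example/chainlink-node:upgraded",  # placeholder for the new version
        "memory": 512,
        "essential": True,
        "portMappings": [{"containerPort": 6688}],
    }],
)

# Point the service at the new revision. ECS starts the new tasks alongside the
# old ones, waits until they're healthy, then drains the old tasks.
ecs.update_service(
    cluster="linkpool",
    service="linkpool-node-1",
    taskDefinition=new_task["taskDefinition"]["taskDefinitionArn"],
    deploymentConfiguration={"minimumHealthyPercent": 100, "maximumPercent": 200},
)
```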

Node Scaling

One of the most important aspects of LinkPool is being able to scale our network to meet the demand of the people who want to stake. We need to ensure we can ramp up capacity and create extra nodes once there are no nodes left available to be staked on.

With the approach we’ve taken, this is just one click of a button (or one API call) away. If you refer to the diagram above, a LinkPool Node consists of Node A and Node B. If demand requires another node, we will call an API which will spin up a fresh copy of a LinkPool Node with a brand-new wallet to be staked on. This node will add itself to the contract automatically, and then appear on our dApp to be staked on in a matter of seconds.
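
As a rough illustration of what that API call could boil down to (boto3, hypothetical names; generating the wallet and the automatic contract registration happen elsewhere and aren’t shown), adding capacity is essentially creating another service from the same task definition:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

def add_linkpool_node(index: int) -> None:
    """Spin up another LinkPool Node (a fresh pair of ChainLink nodes).

    Creating the new wallet and registering it with the staking contract
    would happen separately and aren't shown here.
    """
    ecs.create_service(
        cluster="linkpool",                    # hypothetical cluster name
        serviceName=f"linkpool-node-{index}",  # hypothetical naming scheme
        taskDefinition="chainlink-node",
        desiredCount=2,                        # Node A and Node B, as in the diagram
    )

# e.g. demand calls for a third LinkPool Node:
add_linkpool_node(3)
```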

One question I’ve been asked before is “What if LinkPool gets too big?”. Personally, I can’t ever see that happening, but we have planned for the scenario. Even with the registered interest so far (totalling over 11mil+ LINK), if we assume a 100k limit per node, that’s 110 nodes. That quantity of nodes will be a small subset of what will be deployed to the ChainLink network upon main-net release. In Sergey’s last update, he mentioned that 19,000 people want to run a node.

That said, if we ever do get so big that the integrity of the network would suffer, we will block any auto-scaling of our network and only re-enable it once sufficient node capacity has been added to the wider ChainLink network.

Summary

I hope this article has helped in understanding how LinkPool will be run in production and how we’re striving to build a network of nodes which will be a key part of the ChainLink network for years to come. I also hope that the breakdown of our infrastructure gives more understanding of why our makers fees are set as they are.

As always, if you have any questions at all, feel free to speak to either me or Mat on Slack, Telegram or Reddit. If you’ve not checked out our website or Twitter yet, please do!
