LinkPool Progress — Contract Upgrade, DEX and our Production Infrastructure

Jonny Huxtable
Published in LinkPool
Oct 26, 2018 · 9 min read

Important: If you hold any LP tokens, it is important to read and understand the contract upgrade section of this article.

It has been a while since I wrote a more generalised development update on LinkPool, mostly because of how busy the team and I have been, especially in the run-up to our Ropsten launch. This update will detail our contract upgrade in preparation for the DEX launch, how the DEX will work, and the infrastructure design that will be used to serve Chainlink requests on go-live!

Contract Upgrade

In preparation of the DEX launch, we’re upgrading the PoolOwners contract that is currently live on main-net.

We will be taking a snapshot of the current contract data for the upgrade on 29/10/18 at 12pm UTC; any transfers of tokens after that date and time will NOT be reflected in the new contract.

The sole reason for the upgrade is gas costs. With the new version, we've achieved gas savings of up to 60% in all areas, the main focus being our token claiming cycles that distribute LINK tokens to all the holders.

When we launch the DEX, we expect the number of unique addresses holding the LP token to increase drastically, and an increase in unique holders means a linear increase in the gas cost of claiming tokens. By optimising the gas costs now, prior to launch, we future-proof the contract so that linear increase isn't as steep. If we didn't, each distribution cycle would have a far longer wait period, as the larger gas costs would mean waiting until more LINK had accrued to make a distribution worthwhile.

If we performed an upgrade post-DEX launch, we would have to take the DEX offline and hope no-one used the contracts directly. I fully intend this upgrade to be the last, as I personally can't see any more room for improvement in terms of gas optimisation with the new PoolOwners version. If any Solidity developer is reading this and does see room for improvement, get in touch.

To detail the gas improvements:

  • Pre-upgrade cost of batch claiming: ~1.9 million gas for 43 wallets, resulting in ~44.1 thousand gas per address.
  • Post-upgrade cost of batch claiming: ~761 thousand gas for 43 wallets, resulting in ~17.6 thousand gas per address.

That’s a gas saving just shy of 60%!
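For the arithmetic: 761,000 ÷ 1,900,000 ≈ 0.40, so the new process uses roughly 40% of the gas it used to per batch, a saving of about 59.9%.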

For developers, here’s the code snippet of the new claiming process:

New Claiming Process in PoolOwners
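To sketch the shape of the approach (this is illustrative only; the contract name, packing layout and precision base below are assumptions, not the deployed code):

```solidity
pragma solidity ^0.4.24;

// Minimal token interface for the LINK transfer call.
interface ERC677 {
    function transfer(address to, uint256 value) external returns (bool);
}

// Illustrative sketch of the batch claiming optimisations,
// not the deployed PoolOwners contract.
contract BatchClaimSketch {
    ERC677 public linkToken;

    // Each holder packed into a single 256-bit slot:
    //   upper 96 bits: ownership share (fixed-point out of 1e18, assumed)
    //   lower 160 bits: the holder's address
    uint256[] internal packedOwners;

    uint256 public distributionAmount; // LINK in the current cycle
    uint256 public claimedIndex;       // batch progress through packedOwners

    function batchClaim(uint256 count) external {
        uint256 i = claimedIndex; // read storage once, work in memory
        uint256 end = i + count;
        if (end > packedOwners.length) end = packedOwners.length;

        for (; i < end; i++) {
            uint256 packed = packedOwners[i];  // a single SLOAD per holder
            address holder = address(packed);  // unpack the lower 160 bits
            uint256 share = packed >> 160;     // unpack the upper 96 bits
            linkToken.transfer(holder, (distributionAmount * share) / 1e18);
        }

        claimedIndex = i; // one SSTORE on completion of the whole batch
    }
}
```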

The main gas optimisations came from data-packing each holder into a single 256-bit unsigned integer. In addition, we drastically cut the number of SSTORE opcodes executed during the process by only writing to storage variables on the completion of a batch claim, rather than for each claimant.

This does have the trade-off that individuals won't be able to claim their tokens on their own through the dashboard anymore; the only way of claiming is the "Claim for all" option as seen on the current dashboard. We see that as an acceptable trade-off considering the savings.

DEX

We recently tweeted a picture of our upcoming decentralised exchange:

Our working DEX, running locally.

The main reason we shifted priorities to work on a DEX was the number of attempted token transfers without a trustless mechanism. Since all these trades relied on trust between the two parties involved, we quickly moved to work on this to avoid anyone getting scammed.

For the reasoning as to why we can't conform to the ERC20 standard so the token can be traded on existing DEXs, view the README of the PoolOwners repository.

The DEX will only offer support for our LP owners token with an LP/ETH pair. There is potential in the future to expand this into a fully fledged DEX that can support ERC20 tokens, but we don't see that as a priority right now.

The DEX uses a wallet-to-wallet trading mechanism. What is meant by that is you don't have to deposit your ETH/LP tokens to raise an order; rather, when an order is raised, the amount is locked up in your wallet. When that order is filled, the LP tokens/ETH are sent directly to your wallet address, with no extra manual withdrawal steps.
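As a minimal sketch of what that model can look like in Solidity (the names and the lock mechanism on the token are assumptions for illustration; this is not our DEX contract, and it omits the LINK fee handling described below):

```solidity
pragma solidity ^0.4.24;

// Assumed hooks on the owners token that let the DEX lock tokens
// in-place in the seller's wallet, and move them on a fill.
interface ILPToken {
    function lock(address owner, uint256 amount) external;
    function unlockAndTransfer(address from, address to, uint256 amount) external;
}

contract DexSketch {
    struct Order {
        address maker;
        uint256 lpAmount; // LP tokens for sale (whole tokens, for simplicity)
        uint256 ethPrice; // wei per LP token
    }

    ILPToken public lpToken;
    mapping(uint256 => Order) internal orders;
    uint256 public nextOrderId;

    // Raise a sell order: tokens stay in the maker's wallet, but are
    // locked so they can't be moved until the order fills.
    function createSellOrder(uint256 lpAmount, uint256 ethPrice) external returns (uint256 id) {
        lpToken.lock(msg.sender, lpAmount);
        id = nextOrderId++;
        orders[id] = Order(msg.sender, lpAmount, ethPrice);
    }

    // Fill an order: ETH goes straight to the maker, LP tokens go
    // straight to the taker. No deposits, no manual withdrawals.
    // (A real contract would use SafeMath and take the LINK fees here.)
    function fillOrder(uint256 id) external payable {
        Order memory o = orders[id];
        require(msg.value == o.lpAmount * o.ethPrice);
        delete orders[id];
        o.maker.transfer(msg.value);
        lpToken.unlockAndTransfer(o.maker, msg.sender, o.lpAmount);
    }
}
```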

The only caveat is that fees do need to be deposited in the contract prior to trading. The fees will be taken in LINK, and what is earned will be sent to the PoolOwners contract to be distributed to token holders, just as any fees/revenue from staking/NaaS will be.

The structure of the fees is as follows:

  • Makers fee: 50% LINK
  • Takers fee: 50% LINK

This results in each order paying 1:1 in LINK based on the ETH value of the order. For example, if you raise a sell order for 2 LP tokens at 1 ETH per token, then 2 LINK will be taken through fees: 1 LINK fronted by the maker of the order, 1 LINK fronted by the taker.

At the current LINK and ETH prices at the time of writing, this results in the USD percentage value of the maker/taker fees being as follows:

  • Makers & Takers fee: 0.21%
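To show where that number comes from: the combined maker and taker fee is 1 LINK per 1 ETH of order value (0.5 LINK from each side), so the percentage fee is simply the LINK/ETH price ratio:

fee % = price(LINK) / price(ETH) × 100

The quoted 0.21% just reflects LINK trading at roughly 0.21% of ETH's price at the time of writing.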

If there are any large price fluctuations that cause that percentage to change substantially, whether up or down, it will be updated. The maximum the contract will allow is 50% maker and taker fees in LINK.

We are using geolocation in the DEX to identify any US-based traffic. If you are identified, a disclaimer will be shown asking you to confirm that you're not a US citizen. The same rules apply as with our crowdsale: we do not condone or support any US citizens holding the token/share.

To conclude on the DEX with a final point, there are no admin functions within the DEX contract that allow us to remove or block orders of any kind. There's only the ability to change the fees, something we want to move to our DAO so that holders can vote on what they should be in the future.

Metamask Update

As of 4/11, there will be a breaking change in Metamask that stops websites from accessing your Ethereum wallet address by default. Our app has been updated to support this change, but users need to be on version v4.14.0 as a minimum requirement, or the app will no longer be compatible. More information can be found here.
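For anyone maintaining their own dapp, the change makes account access opt-in. A sketch of the new flow (not our exact app code) looks like this:

```javascript
// After the update, sites must request access instead of reading
// web3.eth.accounts directly.
if (window.ethereum) {
  window.ethereum.enable()              // prompts the user in Metamask
    .then(function (accounts) {
      console.log('Wallet exposed:', accounts[0]);
    })
    .catch(function () {
      console.log('User denied account access');
    });
}
```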

Production Infrastructure

In our early development updates, we wrote a lot about how our infrastructure would be built. Since that has now been developed in its entirety, I'd like to take this opportunity to describe and detail the design, boasting its flexibility, security and ease of operation.

Terraform — We are huge supporters of the technology.

Use of Terraform

I've frequently stated that we're using Terraform as the main bootstrapping technology for our infrastructure. Personally, I love Terraform: it's a fantastic stack that is maturing every month and allows very deterministic, descriptive code that defines infrastructure and supports modularity.

The platform we've built is all Terraformed. This includes the networking, every server, load balancers and backups… you name it, we've built it in TF code.

To show a snippet of our nodes module that we use to bootstrap our Chainlink servers:

Terraform Module for our Main Node Cluster
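In outline, the module call looks something like the following (a sketch only: the source path, variable names and values here are assumptions, not our exact internal code):

```hcl
module "main_nodes" {
  source = "git::ssh://git@gitlab.example.internal/infrastructure/chainlink-nodes.git"

  node_count        = 1
  vpc_id            = "${module.mainnet_vpc.vpc_id}"
  hosted_zone_id    = "${aws_route53_zone.internal.zone_id}"
  alert_sns_topics  = ["${aws_sns_topic.pagerduty.arn}"]
  eth_chain_id      = 1
  eth_ws_uri        = "wss://${module.parity.lb_fqdn}:8546"
  nodes_per_service = 1
  node_version      = "0.2.0"
}
```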

To describe the above, the main_nodes module specifies its source as our internal GitLab repository. All the variables then specified are parameters to that module, telling it exactly how it should be set up. To give a list summarising those parameters:

  • Node Count: The total number of nodes to be run for that cluster. If we change that to "2" and execute the code, it will automatically provision a new node with the exact same configuration as the last.
  • VPC: The network that the nodes reside on. This gives us the flexibility to run on completely different networks based on the Ethereum chain ID. For example, our Ropsten nodes can’t communicate with our main-net nodes.
  • Hosted Zone: The Route53 HZ to place all the DNS entries in, giving us the option to either have purely internal DNS for the nodes or expose them publicly (something we'd do for NaaS).
  • SNS Topics: The SNS topics in AWS-land are the receivers of any alerts that may be triggered. For example, if we hit any 5XX errors on the nodes or the CPU usage on the nodes is too high, we will be called out via PagerDuty. PagerDuty records and tracks any incidents, giving us the ability to define notification rules and escalation processes.
  • Ethereum Chain/WS URI: The Ethereum clients that our nodes will point to and the chain ID that they’re on. The parameter for this is an output from our Parity module that returns the load-balancer FQDN to be used to construct the full URI.
  • Nodes per Service: Each one of our nodes runs in its own ECS service. We can then increase the fail-over capability of those nodes by increasing that number. This is currently set to "1" as our main nodes don't undertake any jobs and are just monitored, although our Ropsten nodes are set to "2", giving us one master and one slave node. If we increased that to "3", we'd automatically get one master and two slave nodes.
  • Node Version: The container version of the Chainlink node. We're currently using our own custom-built containers hosted internally, due to the need for a custom entry point that retrieves passwords from secret managers and decrypts KMS binary blobs. If we update this version, it will automatically trigger a rolling upgrade across all nodes, with absolutely no downtime (only the time it takes for BoltDB to grab the DB lock, which is pretty instantaneous, <500ms or so).

From the above code snippet, you'll notice that all these parameters are variables pointing to outputs from other resources or modules. In practice, this means any changes that happen elsewhere in our infrastructure cascade to every resource that depends on them, always keeping everything completely aligned and controlled.

The exact same module is used for our Ropsten cluster of nodes, with only parameter changes. This gives us absolute certainty that the clusters, even though on different networks, are exactly the same. Perfect for testing and creating new environments.

The above snippet is only a tiny sample of the descriptive code we've built to create and manage our infrastructure. We have well over 10,000 lines of Terraform code managing everything, including our internal systems.

Operation Deliverables

It's fine to define the way we've built our infrastructure, but what about the benefits it gives us in day-to-day operation? To summarise, we boast the following:

  • High-Availability of all our services: Nodes, ETH clients, external adaptors, GitLab, Jenkins. All these services are load-balanced and configured to never be down.
  • Templated Parity Instances: We can go from a clean server to it being fully synced on main-net and live in the cluster in 20 minutes, enabled by our automatic bi-daily chaindata templating and userdata scripts.
  • Backups: Our nodes and Parity instances are all backed up and can be restored within minutes if any disastrous event happens.
  • Deterministic Upgrade Processes: The biggest cause of issues when managing servers is human error. Since ours are all described by code with tested processes, upgrading versions is as simple as changing one variable.
  • Network Segregation: Only the services that are meant to talk to each other can do so. We don't, and never will, have any 0.0.0.0/0 entries in our security groups, and the only way to gain access to each service is via our administrative VPC, which is only accessible via an IP-restricted VPN (see the sketch after this list).
  • Role-Based Access Control: Strict RBAC is enforced, with internal team members only given access to AWS via the CLI/APIs. Team members can only read/update what they're meant to, via code-defined policies.
  • No Plaintext Passwords: All secrets for the node APIs and wallets are stored either in encrypted KMS blobs or in Secrets Manager. There are no plaintext passwords to be seen in any scripting or code in our internal repositories, not even on the servers themselves.
  • DDoS Protection: All our public-facing services are protected by web application firewalls that restrict large volumes of requests from single sources.
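To make the network segregation point concrete, here's a sketch of the security group pattern (resource names and the CIDR range are assumptions, not our real configuration; 6688 is the Chainlink node's default port):

```hcl
resource "aws_security_group" "chainlink_nodes" {
  name   = "chainlink-nodes"
  vpc_id = "${var.vpc_id}"

  # Node API reachable only from the internal load balancer's
  # security group, never from 0.0.0.0/0.
  ingress {
    from_port       = 6688
    to_port         = 6688
    protocol        = "tcp"
    security_groups = ["${aws_security_group.internal_lb.id}"]
  }

  # Administrative SSH only from the VPN subnet in the admin VPC.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/24"] # assumed admin VPN range
  }
}
```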

The above points, and there are many more, are an example of the effort we've gone to in providing enterprise-grade Chainlink instances that will offer and support very high-collateral jobs in the data market. We see it as a first of its kind for the project, paving the way and setting an example for anyone who comes after us. Chainlink needs people to support it by running nodes as if they were a gold-standard service in any company, and we fully believe we have achieved that.

Conclusion

Thanks for taking the time to read this article. We hope we've shown our intention and dedication to being pillars of the Chainlink network, providing data to contract creators (along with many others!) for the futuristic and exciting data market it enables.

As always, for any questions you can reach out to us via the following channels:
