Aeternity: Bitcoin-NG the way it was meant to be

Michal Zajda explains how the fusion between æternity and the Bitcoin-NG protocol delivers a UX and technical leap in everyday blockchain.

This post was written by Michal Zajda, Blockchain Architect for æternity.

æternity is a completely new platform-oriented blockchain. The æternity team designed the protocol and implemented a reference full node. æternity also builds a whole ecosystem of æpps. Among the features that we envisioned as a means to make it a next-generation blockchain are protocol-level oracles, state channels, a powerful and secure VM, a naming service, and a fast consensus protocol.

This post describes the journey we took to implement the system around blockchain consensus to achieve our goal.

The system is based on Nakamoto consensus and the Bitcoin-NG protocol, which delivers orders-of-magnitude better transaction confirmation latency.

æternity: the beginning

æternity is designed and written from scratch. The current code is based on a version of the prototype from the late summer of 2017. The reference full node is implemented in Erlang/OTP. The æternity team comes from the world of distributed programming, experienced in building financial systems, massive chat applications, online markets, and a whole range of frameworks popular in the Erlang community. This technology is born to handle and serve applications created for distributed systems.


What’s on board

The plan was to take a leap and offer a smooth experience to both users and developers of the blockchain by including the most essential layer-2 solutions in the base protocol. The range of native transactions, available through standalone REST calls and contract calls, covers among others the following functionalities:

  • anyone can register an oracle, propose a request-response format and pricing, and provide data to on-chain actors, including contracts
  • any hash-based address can be registered under a human-readable name and used to compose transactions
  • to help scalability and privacy, we introduce complete off-chain state channels. The implementation includes a powerful finite state machine to handle the complex lifecycle of a state channel. Transactions defined by the protocol let two parties open a channel, transact over it, and dispute or settle it, with the assurance that edge conditions like broken networks or malicious activity by one party are handled by the off-chain state machine
  • authorization needs to be universal. To achieve that, we implemented Generalized Accounts: a way of defining custom logic that controls an account, such as an alternative authorization algorithm or a limit on wallet spending. It already proved useful during the last and final token-migration hard fork: we could move all the forgotten or frozen ERC20 tokens from the initial Ethereum contract to the æternity blockchain, where they can wait for their rightful owners. This is possible thanks to implementing the Ethereum authorization scheme with Generalized Accounts.
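
The Generalized Accounts idea above can be sketched in a few lines. This is an illustrative model only, not æternity's actual API: the names `Account`, `plain_auth`, and `make_limited_auth`, and the stubbed signature check, are all invented for this example.

```python
# Illustrative sketch of Generalized Accounts: instead of a fixed signature
# check, each account carries its own authorization function.
# All names and the stubbed signature scheme are hypothetical.

def plain_auth(tx, signature, pubkey):
    # A plain account: one fixed signature scheme (stubbed here).
    return signature == "signed:" + tx["payload"] + ":" + pubkey

def make_limited_auth(max_amount):
    # A generalized account: arbitrary custom logic, e.g. a spend cap.
    def auth(tx, signature, pubkey):
        return plain_auth(tx, signature, pubkey) and tx["amount"] <= max_amount
    return auth

class Account:
    def __init__(self, pubkey, auth_fun=plain_auth):
        self.pubkey = pubkey
        self.auth_fun = auth_fun  # custom logic replaces the fixed check

    def authorize(self, tx, signature):
        return self.auth_fun(tx, signature, self.pubkey)

alice = Account("pk_alice", make_limited_auth(100))
tx_ok = {"payload": "spend", "amount": 50}
tx_bad = {"payload": "spend", "amount": 500}
sig = "signed:spend:pk_alice"
print(alice.authorize(tx_ok, sig))   # True: within the cap
print(alice.authorize(tx_bad, sig))  # False: exceeds the cap
```

The same mechanism can express the Ethereum authorization scheme mentioned above: the custom function simply validates an Ethereum-style signature instead of a native one.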

All of that is backed by the FATE VM and accessible through contracts written in the Sophia language. Both are built around a blockchain-optimized design that provides more functionality at a lower cost of operation. Sophia and the VM learn from the challenges encountered by early blockchain platforms and languages, and provide more safety and security to users. FATE is a high-level VM: it operates on blockchain primitives such as transactions, which is the source of its low cost and high performance.

More documentation and examples are available in our documentation aggregator.

Low level

From the very beginning, it was important for us to present basic blockchain functionalities. The early full node code was a naive implementation of the Bitcoin protocol. The Erlang technology is component-oriented and lets us easily optimize subsystems one by one, so we took this path. This way of evolving the p2p network and the node gave us a chance to reason about changes and performance. Eventually, we arrived at a complete feature set protected by the protocol.

Currently, any new feature is tested on 50 nodes on the public testnet. With announced releases, it is also deployed to well-known hosts of the mainnet network, which has grown to over 16,000 peers since the mainnet release. Each release is tested by a multi-level test framework: unit tests, integration tests, system tests simulating the network with Docker containers, and a very powerful QuickCheck framework that generates random inputs for the APIs of a number of components.

Proof of Work

One of the key design goals for æternity is decentralization. We found Proof-of-Work the most suitable method of guarding consensus in an open, public, and permissionless blockchain. We use the Cuckoo Cycle algorithm to implement PoW. It is a novel, memory-intensive PoW algorithm with very cheap work evidence, holding only 42 integers. Nevertheless, the last two years were very dynamic on the Proof-of-Work scene: ASICs achieved flexibility and life cycles beyond predictions, while GPUs were equipped with more complex operations and, at the same time, an increased level of parallelism. We plan to revisit Proof-of-Work design to crystallize the best path to support a decentralized blockchain; this will be discussed in a separate post.
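
What makes Cuckoo Cycle evidence cheap to verify is that the proof is just a small set of edges that must form one cycle in a graph. The toy sketch below checks only that graph condition; real verification also re-derives each edge from a siphash over the block header and nonce, which is omitted here, and real proofs carry 42 edges rather than 4.

```python
# Simplified illustration of Cuckoo Cycle evidence checking: the proof
# is a small edge set that must form a single cycle. The siphash edge
# derivation of the real algorithm is deliberately omitted.
from collections import Counter

def forms_single_cycle(edges):
    # Every vertex on a cycle has degree exactly 2...
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    if any(d != 2 for d in degree.values()):
        return False
    # ...and all edges must be connected into one component.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = set(), [edges[0][0]]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node])
    return len(seen) == len(degree)

# A toy 4-edge cycle; a real proof would carry 42 edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
broken = [(0, 1), (1, 2), (2, 3), (4, 5)]
print(forms_single_cycle(cycle))   # True
print(forms_single_cycle(broken))  # False
```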

Consensus: Bitcoin-NG meets Aeternity-NG

Consensus algorithms are often misunderstood in the blockchain community. At the same time, consensus and the method used to protect it are not only a technical challenge, but also the heart of the product we offer to the end user. It determines whether transactions are trustless, and whether changes to the protocol are controlled by particular groups. Finally, in the case of æternity, all requirements revolve around users’ independence, decentralization, and freedom to transact.

The last couple of years were very fruitful in Nakamoto-family consensus designs and adaptations of Byzantine Agreement algorithms. From a single list of blocks, through DAGs, lattices, hierarchies of trust, and PBFTs, there was a broad range of solutions to choose from. The variants multiply when we add Proof-of-Work, Proof-of-Stake, Space, Time, etc. By cross-checking against our goals and balancing them with the maturity of each solution, it became clear that we needed the Nakamoto way of dealing with an untrusted environment, backed by Proof-of-Work.

So, how do we deal with 10-minute confirmation latency in a world of mobile-centric, real-time experience?

Cornell University researchers and the Bitcoin-NG paper offered some hope in this regard. [Bitcoin-NG: A Scalable Blockchain Protocol, Eyal, Gencer, Sirer, Renesse, 2016].

Bitcoin-NG is a novel way of supporting classic Nakamoto consensus. It is based on the election of a temporary leader by Proof-of-Work. The leader then publishes blocks containing transactions. We can treat those transactions as confirmed with 1 block — all of that within a matter of seconds!

Two phases of operations are reflected in the chain structure. We use key blocks to elect leaders and microblocks to hold transactions.
(Check out the GitHub repository.)
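
The two-phase chain structure can be sketched roughly as follows. This is an illustrative model, not the node's actual data types: field names like `pow_evidence` and the `valid_microblock` helper are invented for the example.

```python
# Hedged sketch of the Bitcoin-NG chain structure: key blocks (found by
# PoW) elect a leader; only that leader may then sign microblocks
# carrying transactions. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class KeyBlock:
    height: int
    miner: str                 # the elected leader for this generation
    prev_hash: str             # previous block (key or micro)
    pow_evidence: Optional[list] = None  # e.g. the 42-integer Cuckoo proof

@dataclass
class MicroBlock:
    prev_hash: str
    leader: str                # must match the current key block's miner
    txs: List[str] = field(default_factory=list)

def valid_microblock(mb: MicroBlock, current_key: KeyBlock) -> bool:
    # Only the currently elected leader may emit microblocks.
    return mb.leader == current_key.miner

kb = KeyBlock(height=1, miner="miner_a", prev_hash="genesis")
mb = MicroBlock(prev_hash="kb1", leader="miner_a", txs=["tx1", "tx2"])
rogue = MicroBlock(prev_hash="kb1", leader="miner_b", txs=["tx3"])
print(valid_microblock(mb, kb))     # True
print(valid_microblock(rogue, kb))  # False
```

Because transactions ride in leader-signed microblocks between PoW events, they can be confirmed within seconds rather than waiting for the next key block.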

The original paper describes the new chain structure and the crypto-economic incentives. It also covers problems like censorship, mining power, and fairness — all of which are essential for a blockchain platform. What we found most challenging was reasoning about and acting on forks and microforks. It required a couple of iterations on the low-level implementation of blocks and on communication with transaction-handling components like the mempool or block-candidate generators. We also adjusted the Proof-of-Fraud mechanism. After all of it was rigorously tested, we waited six months for the mainnet release to bring real-world data and a bag of surprises…

Academia hits reality

Around midnight on the 27th of November 2018, the Roma release hit the live network. The genesis block held the set of ERC20 tokens that were migrated to native æternity tokens. A number of interesting events happened when real traffic and real users started to use the protocol. And what was the protocol?

We tuned Proof-of-Work to mine key blocks every 3 minutes, and we let the leader emit microblocks no faster than every 3 seconds. Each microblock can hold up to 6,000,000 gas, which, given that the initial implementation of the VM is based on an improved EVM, is comparable to the 8,000,000-gas Ethereum blocks confirmed every ~15 seconds. (The FATE VM gives us even more leverage, as it delivers more for the same amount of gas.)
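
A quick back-of-envelope calculation from those figures shows why the comparison holds; the numbers below are just the parameters stated above.

```python
# Gas throughput implied by the block parameters above (illustrative).
ae_gas_per_microblock = 6_000_000
ae_microblock_interval_s = 3
eth_gas_per_block = 8_000_000
eth_block_interval_s = 15

ae_gas_per_s = ae_gas_per_microblock / ae_microblock_interval_s
eth_gas_per_s = eth_gas_per_block / eth_block_interval_s

# aeternity: 2,000,000 gas/s; Ethereum: roughly 533,000 gas/s
print(ae_gas_per_s, eth_gas_per_s)
```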

This means that with a network diameter in the range of 5–10 seconds, each key block should result in a 3-microblock fork. This is expected and healthy behavior.

Our sync and gossip protocol had two modes: normal and light. The light one gossiped only transaction hashes, making microblock propagation even lighter and faster. Assuming a high probability that an unconfirmed transaction has already been seen by peers, the validation failure rate is low.
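
The light mode boils down to a known-hash check before any full-body round trip. The sketch below is an illustrative model of that idea only; the `Peer` class and `fetch` callback are invented names, not the node's gossip API.

```python
# Hedged sketch of hash-only ("light") gossip: peers announce transaction
# hashes, and a peer fetches the full body only for hashes it has never
# seen. All names here are illustrative.
import hashlib

def tx_hash(tx: bytes) -> str:
    return hashlib.sha256(tx).hexdigest()

class Peer:
    def __init__(self):
        self.mempool = {}  # hash -> full transaction
        self.fetches = 0   # full-body round trips we actually needed

    def receive_full(self, tx: bytes):
        self.mempool[tx_hash(tx)] = tx

    def receive_hash(self, h: str, fetch):
        # Request the full body only if the hash is unknown locally.
        if h not in self.mempool:
            self.fetches += 1
            self.mempool[h] = fetch(h)

origin = {tx_hash(raw): raw for raw in (b"tx-a", b"tx-b")}

peer = Peer()
peer.receive_full(b"tx-a")  # already seen via normal gossip
for h in origin:
    peer.receive_hash(h, lambda h: origin[h])
print(peer.fetches)  # 1: only tx-b needed a full round trip
```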

In the original paper, the theoretical limit of network throughput was computed based on microblock size and capacity, and estimated at 100 tx/s. Our protocol’s theoretical limit was slightly higher, as we allowed 300–400 spend transactions in one microblock, emitted every 3 seconds. It is worth mentioning that we let users operate with both standalone transactions and contract calls; underneath both there is a common gas denominator.
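
The implied ceiling follows directly from those two numbers:

```python
# Theoretical spend-transaction throughput from the figures above.
txs_per_microblock = (300, 400)
microblock_interval_s = 3
low, high = (n / microblock_interval_s for n in txs_per_microblock)
print(low, high)  # 100.0 to ~133 tx/s
```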

The above number gives us the main constraint for network operations. The initial implementation allowed less than 50 tx/s without any backlog. Further speed-ups came from optimizing the implementation of components like the mempool and gossip.

Naturally, the behavior of the network was also shaped by crypto-economic incentives. We designed an inflation curve with a very high reward over the initial 24 months (20% in the first year, 10% in the second, dropping to 5% in the third year, and 3% in the fourth).
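
Compounding those yearly rates on a normalized starting supply gives a feel for the curve. This is a simplification for illustration: actual issuance happens per block, not in yearly steps.

```python
# Illustrative compounding of the stated inflation schedule on a
# normalized starting supply of 1.0 (real issuance is per-block).
rates = [0.20, 0.10, 0.05, 0.03]  # years 1-4
supply = 1.0
for r in rates:
    supply *= 1 + r
print(round(supply, 4))  # 1.4276: ~43% total supply growth in 4 years
```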

We initially assumed steady network and mining-power growth. What happened was quite the opposite! Within days, computation power grew so high that mining on personal computers and small rigs became unfeasible (the optimal cards were the 1080Ti and 2080Ti). Users were very dissatisfied with the lack of opportunity to mine blocks. We also got reports about unconfirmed transactions and forks.

Problem 1
We discovered an unexpectedly high number of forks. Upon inspection, we found that the network had many members, but the connection graph was very sparse, which affected the broadcast of key blocks. We had to improve (simplify) the documentation and encourage users to use public IPs.

Problem 2
Fewer forks, but long microforks. A microfork is a special state of the chain caused by the next key block miner following old microblocks emitted by the current leader. When it is short, it is fine. Unfortunately, we observed microforks tens of blocks long. We got mixed signals upon closer inspection, but decided it was still a broken network. We invested in UPnP support and the network improved.

Soon after the mainnet launch, the community and the core team implemented support for multi-GPU rigs. That gave difficulty yet another boost. Users regained control over meaningful computational power to mine blocks. Of course, the target adjusted, and this led to the formation of mining pools.

Problem 3
Mining pools introduced a whole new dynamic. We discovered that they are based on customized stratum protocols and closed-source mining clients. The problem we observed was, again, growth in the length of microforks. It didn’t impact confirmation time: even if a transaction is left in a fork, it is quickly reconfirmed by a new leader. But it is surely unhealthy behavior that, under certain conditions, can endanger the stability of peer nodes.

Dark corners of stratum
We discovered that pools refresh the key block candidate roughly every minute, which can pessimistically cause microforks 20 blocks long. One minute is also a substantial part of the 3-minute key block interval. Since key block generation time varies a lot, this pool behavior can kick our transaction to the next generation pretty often.
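
The 20-block worst case follows from the timings already stated: a stale candidate ignores every microblock emitted since the last refresh.

```python
# Worst-case microfork length from a stale key block candidate:
# every microblock emitted since the refresh gets orphaned.
refresh_interval_s = 60      # pools refresh roughly every minute
microblock_interval_s = 3    # leader emits at most one per 3 seconds
worst_case_microfork = refresh_interval_s // microblock_interval_s
print(worst_case_microfork)  # 20
```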

Our strategy here is to educate pools and diversify the pool landscape. This is a work in progress: currently, mining is dominated by three main pools, and two of them can hold 95% of the computational power.

We have another take on the problem: a work-in-progress reference implementation of a stratum server that refreshes work in a Bitcoin-NG-friendly manner. Code speaks better than words, so maybe it will not only be used, but also inspire existing pools to adjust their configuration. It may also be an interesting base for a decentralization framework that builds pools out of full nodes instead of mining libraries: Bitcoin-NG gives the power to generate microblocks to regular users, and pools provide liquidity.

Having our bumpy road in mind, let’s look at the numbers.

Chain data offline analysis
As of the beginning of April 2019, we have mined around 60,000 key blocks. Below is an analysis of about 100,000 transactions.

How many generations did it take to confirm transactions? If the number is higher than 0, it means that in the majority of cases our transaction ended up in a microfork and was rewritten to the next generation. If the number is higher than 1, an additional delay was in play.

It is interesting how the 1-generation delay correlates with pool behavior: a 1-minute refresh time is 33% of a generation’s length (set to 3 minutes). More than half of the transactions were confirmed right away. The highest confirmation delays occur where network conditions impact the delivery of a transaction to the leader; some transactions may wait for a missing predecessor.
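
The delay metric itself is straightforward to compute offline. The sketch below shows the shape of that computation on made-up observations; the real analysis ran over the actual chain data.

```python
# Hedged sketch of the confirmation-delay metric: for each transaction,
# the number of generations between first inclusion attempt and final
# confirmation. The observation pairs below are fabricated for
# illustration, not real chain data.
from collections import Counter

# (first_seen_generation, confirmed_generation) pairs
observations = [(100, 100), (100, 101), (102, 102), (103, 105), (106, 106)]
delays = Counter(conf - seen for seen, conf in observations)
total = sum(delays.values())
for delay in sorted(delays):
    print(delay, f"{100 * delays[delay] / total:.0f}%")
```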

Equivalent in percentile:

Another take on this experiment is to analyze the average length of microforks. This is much harder to conduct, as we need a monitoring “probe” in a given generation that got microforked. This is why the number of analyzed generations may differ.

The above information correlates with the previous measurements. We showed that roughly 33% of transactions are rewritten to the next generation (and have a delay of 1 in our metric). Here, we show that 29% of microblocks are rewritten to the next generation. Given the nature of p2p networks and the monitoring method, 33% and 29% are close enough to confirm our observations.

A detailed view of the microfork length breakdown (please keep in mind that we currently see almost 7 microblocks per generation):

A generation holding fewer microblocks than the maximum is caused by the lack of a protocol constraint on the minimum number of broadcast blocks. A silent leader in our system is the equivalent of a miner of empty blocks in Bitcoin or Ethereum.

On the protocol side, the reason behind the microfork issue is the huge disproportion between the coinbase reward and transaction fees. In very young blockchains, fees are very low, as there is no market driving them up. For the first 3 months, the minimal fee was also configured very close to zero.
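
The disproportion is easy to quantify. In the original Bitcoin-NG design, a new leader keeps 40% of the fees in the microblocks it builds on (the remaining 60% goes to the next leader); the figures below are illustrative units, not real chain values.

```python
# Why near-zero fees weaken the incentive to build on the latest
# microblock: the fee share is negligible next to the coinbase.
# Numbers are illustrative; the 40% split is from the Bitcoin-NG paper.
coinbase_reward = 100.0        # illustrative units
fees_in_microblocks = 0.001    # near-zero early-network fees
leader_fee_share = 0.4         # Bitcoin-NG share for building on them

extra_for_following = leader_fee_share * fees_in_microblocks
print(extra_for_following / coinbase_reward)  # a ~4e-06 fraction of coinbase
```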

To sum up, the fee configuration resulted in very low-reward chains of microblocks, which weakened the crypto-economic incentive to follow the latest microblock. Over time, this will self-improve.

Even more numbers

Let’s browse through some interesting stats!

  • Max difficulty peaked at nearly 5,000,000. For background: one Cuckoo Cycle solution takes 0.3 s on an Nvidia 1080Ti card
  • 4.7M — transactions confirmed so far
  • 115 tx/s — highest transaction rate recorded
  • Over half a million names registered in the test namespace. A new and final namespace is coming with the Lima hard fork, along with name auctions!
  • 2–3 tx — average queue of transactions in the mempool (waiting to be confirmed or garbage collected)

To conclude, we are very happy with the stability of the network. We will invest significant effort in improving the microforking issue. It is worth mentioning that the NG protocol, thanks to its simple yet powerful design, significantly decreased the technical risk of implementing the solution. We also provide a built-in chain-monitoring mode that can be enabled via configuration.

While we keep working on minimizing the friction of securing on-chain transactions, it is worth noting that none of these issues affect the experience in customer-facing æpps. The complexity of leader election and microblock re-signing is completely hidden from the end user and doesn’t affect the real-time experience.


Our near future is determined by features being developed in the main repository:

  • Zero Knowledge Proofs support
  • Enhancements to Generalized Accounts to delegate payments
  • Support of any kind of data in Naming Service
  • More accessible and powerful State Channels!

A number of upcoming improvements are described on our website, blog, and forum.

The main theme of our work will be optimizing the platform to unlock the real power of contracts!

We will also dedicate time to researching ways of sharding traffic and state in order to lower the effort needed to validate transactions.

For those who are interested in learning more, I discussed this at length during the æternity Universe conference in September. You can watch my presentation here.

Michal Zajda


