Master Workshop Recap: Layer I Scaling Solutions

Research Institute · May 9, 2019

Issues surrounding security, increasing centralisation and, perhaps most pertinently, scalability are often depicted as the bane of blockchain development. A more hyperbolic representation is a vortex of contending solutions responding to a seemingly never-ending cycle of limitations and deficiencies.

The unfortunate truth of Nakamoto consensus is that the breakthrough that made distributed trustless consensus possible is, ironically, what hinders its wider deployment and ability to scale. Although some would argue that is by design.

While blockchains have long been at the forefront of Distributed Ledger Technologies (DLTs), to build a truly decentralised ledger that can scale, it may be time to “forego the ‘blocks’ and ‘chain’ entirely”. Judging by a number of the proposals presented at the Master Workshop: Layer I Solutions hosted by RI, there may be a real argument for this.

Meanwhile, Master Workshop: Layer II explored a whole variety of other possible solutions. What follows is a recap of the Layer I proposals.

Directed Acyclic Graphs

Many view Directed Acyclic Graphs (DAGs) as simply a coexisting technology with different use cases to blockchains; others, however, view this data structure as set to render the linear chain of blocks we have become accustomed to irrelevant.

DAGs are not a new concept but have long been a mainstay of computer science and graph theory. Essentially, they are directed acyclic graphs — graphs meaning a network of connected nodes in this context — that use topological ordering. These nodes (also referred to as vertices) are connected by edges that only allow information (i.e. transactions) to flow in one direction, meaning that no node can connect back to a previous node. Acyclicity refers to the fact that information doesn’t move through the graph in cycles, so the same node cannot be encountered twice when following the edges.

In a very general sense, blockchains share this logic. However, whereas a blockchain lists and groups transactions into a single chain of blocks, in a DAG’s tree-like structure transactions are linked directly to one another, and multiple branches can exist simultaneously. In theory, this allows for far higher throughput than on a blockchain.
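To make the structural difference concrete, here is a minimal Python sketch of a transaction DAG. The field names and API are purely illustrative and not taken from any particular project; the point is simply that transactions reference earlier transactions directly, with no blocks in between, and that parallel branches can coexist and later be merged.

```python
# Minimal sketch of a transaction DAG (hypothetical structure, not any
# specific project's format): each transaction approves one or more
# earlier transactions, so edges only ever point "backwards" in time.
class TxDAG:
    def __init__(self):
        self.parents = {}  # tx_id -> tuple of approved (earlier) tx_ids

    def add_tx(self, tx_id, approves=()):
        # Acyclicity holds by construction: a transaction may only
        # reference transactions that already exist in the graph.
        assert all(p in self.parents for p in approves), "unknown parent"
        self.parents[tx_id] = tuple(approves)

    def ancestors(self, tx_id):
        # Every transaction reachable by following edges backwards;
        # these are the transactions tx_id directly or indirectly confirms.
        seen, stack = set(), [tx_id]
        while stack:
            for p in self.parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

dag = TxDAG()
dag.add_tx("genesis")
dag.add_tx("tx1", approves=("genesis",))
dag.add_tx("tx2", approves=("genesis",))   # parallel branch, no block needed
dag.add_tx("tx3", approves=("tx1", "tx2")) # merges both branches
print(dag.ancestors("tx3"))  # {'genesis', 'tx1', 'tx2'}
```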

A New Proof of Work

In what would become the paper behind Graphchain — Blockchain-Free Cryptocurrencies: A Framework for Truly Decentralised Fast Transactions — Christopher Carr, Xavier Boyen and Thomas Haines offer a DAG-based cryptocurrency framework as a solution to scalability. As outlined by Carr, Graphchain aims to “establish an alternative way of creating a distributed cryptocurrency that avoids bottlenecks and centralisation issues”. In this transaction-based decentralised ledger, “transactions are both transactional and structural” which negates the need for blocks or hash functions. However, it interestingly invokes a consensus mechanism based on an “implicitly collaborative” proof of work model.

For the sake of reaching consensus and avoiding double spending, the classic proof of work model is resource intensive by design. Unfortunately, for miners “the rewards are few and far between, and the only fair and secure method of distributing them is, in essence, a lottery”. This has led to the dominance of mining pools, which increase centralisation and make the network “more brittle to a variety of attacks, not only the well-known 51% attack, but also the ‘selfish miner’ or 33% attack, which in some special cases can become a 25% attack”. In response, Graphchain offers a fully decentralised proof of work that doesn’t require block consolidation but instead quantifies the computational effort directed at solving a puzzle (incidentally, not dissimilar to what mining pools do). The purpose of this is to ensure that all participants, whatever their speed and resources, are rewarded for their individual effort towards securing the network, thanks to the parallelism between branches.

This incentivisation system purportedly supports global consensus, the timely verification of transactions and convergence by encouraging the affirmation of the most recent transactions. The DAG structure means that all transactions are eventually verified — directly or indirectly — by all new transactions, making the ledger immutable.
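A toy illustration of that cumulative-verification idea, assuming a simple model in which each transaction carries its own small proof-of-work weight. This is a sketch of the general principle, not the Graphchain paper’s exact algorithm:

```python
# Toy illustration (not the Graphchain paper's exact algorithm): each
# transaction carries its own small proof-of-work "weight", and a
# transaction's confirmation level is the total weight of everything
# that directly or indirectly references it.
parents = {
    "genesis": (),
    "tx1": ("genesis",),
    "tx2": ("genesis",),
    "tx3": ("tx1", "tx2"),
}
own_weight = {"genesis": 0, "tx1": 3, "tx2": 1, "tx3": 2}  # hypothetical effort

def ancestors(tx):
    seen, stack = set(), [tx]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Every new transaction's effort counts towards all of its ancestors,
# so even a low-powered participant's work is never wasted.
confirmation = {tx: own_weight[tx] for tx in parents}
for tx in parents:
    for anc in ancestors(tx):
        confirmation[anc] += own_weight[tx]
print(confirmation)  # genesis accumulates the whole graph's weight: 6
```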

Moreover, the strong emphasis on predictable incentives potentially solves many of the drawbacks of blockchain protocols, including centralisation, security and latency. Ultimately, this redesign of the base layer means that it is naturally self-regulating and offers a completely decentralised verification process.

Blockchain Killer?

Hashgraph, the ascribed “blockchain-killer”, is probably one of the most talked-about DAG-based network structures for cryptocurrencies. The consensus mechanism developed to run on Hedera, a decentralised network and platform, is based on a reimagining of existing protocols to work at scale — namely, voting and gossip protocols. Hashgraph promises a new way of arriving at a distributed consensus without the seemingly inevitable tradeoff between maximising security and performance. In comparison to blockchain, CEO Mance Harmon goes so far as to describe the protocol as equivalent to the “difference between a calculator and a computer in terms of performance”.

Most protocols use some form of gossip which, in a very general sense, is the process whereby your computer [A] randomly tells another computer [B] about an event (which, in this context, means a transaction or a piece of information) you have created. B then responds by telling you about any events it has heard about, before going on to tell another computer [C] about your event and all the others it knows of. C then tells B about all the events it has heard about, and so on. Hashgraph’s adaptation of this protocol — “gossip about gossip” — adds further information to every gossip event, allowing the network to learn what the rest of the network knows without every node having to talk to every other node.

Each time a computer tells another computer about an event it heard about, it also includes the time it heard about it, who it heard it from, the time that computer heard about it and who they heard it from, and so on. The result is that not only does everyone know what everyone else knows, they also know exactly when they knew it — all in just a fraction of a second. Additionally, to ensure that the information received actually came from the claimed source, each event is timestamped and includes the hashes of the events below it, from which it came.
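A hedged sketch of what such an event record might look like. The field names here are illustrative; the real Hashgraph event format differs in detail:

```python
# Sketch of a "gossip about gossip" event record (illustrative fields,
# not the actual Hashgraph wire format).
import hashlib, json, time

def make_event(creator, payload, self_parent_hash, other_parent_hash):
    # Each event embeds the hashes of the two events "below" it: the
    # creator's own previous event and the latest event received from
    # the gossip partner. Those two hashes are what let every node
    # reconstruct who told what to whom, and when.
    event = {
        "creator": creator,
        "payload": payload,            # e.g. transactions
        "timestamp": time.time(),      # when the creator made it
        "self_parent": self_parent_hash,
        "other_parent": other_parent_hash,
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

# When A syncs with B, A records a new event pointing at both histories:
a0 = make_event("A", ["tx_a"], None, None)
b0 = make_event("B", ["tx_b"], None, None)
a1 = make_event("A", [], a0["hash"], b0["hash"])  # "A heard from B"
print(a1["other_parent"] == b0["hash"])  # True: the link is verifiable
```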

The second component of Hashgraph that makes it so compelling is its voting protocol. In his talk at the Workshop, Hashgraph Lead Developer Advocate Greg Scullard emphasised that, as it stands, voting-based consensus is the “gold standard”; however, it scales poorly. With their new iteration — virtual voting — Hashgraph claims to offer the same advantages as a voting-based consensus but without the disadvantages. Virtual voting can be understood as “an algorithm that calculates, in a Byzantine manner, the timestamp of transactions from two-thirds of the network or more”. Because everyone knows what everyone else knows, each node can mathematically calculate with certainty how every other node would vote. This means that consensus is achieved almost instantly.
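The following is a highly simplified sketch of the virtual-voting intuition; the real algorithm, with its witnesses, rounds and famous-witness elections, is considerably more involved. The point it illustrates is that, given the shared event DAG, any node can recompute locally what another node can “see”, so no ballots ever need to cross the network:

```python
# Highly simplified virtual-voting intuition (not the full algorithm):
# votes are derived from the shared DAG rather than transmitted.
MEMBERS = ["A", "B", "C", "D"]
SUPERMAJORITY = 2 * len(MEMBERS) // 3 + 1  # more than 2/3 of members

# event -> (creator, parent events)
events = {
    "a0": ("A", ()), "b0": ("B", ()), "c0": ("C", ()), "d0": ("D", ()),
    "a1": ("A", ("a0", "b0")),
    "c1": ("C", ("c0", "a1")),
    "d1": ("D", ("d0", "c1")),
}

def ancestors(e):
    seen, stack = {e}, [e]
    while stack:
        for p in events[stack.pop()][1]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def visible_members(e):
    # Which members' events does e (directly or indirectly) see?
    return {events[a][0] for a in ancestors(e)}

# Any node holding the DAG reaches the same answer, so no ballots are
# ever transmitted: the "vote" implied by d1 is a pure local computation.
print(visible_members("d1"))                        # {'A', 'B', 'C', 'D'}
print(len(visible_members("d1")) >= SUPERMAJORITY)  # True
```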

So instead of recording transactions in a block and then adding that block to the blockchain every ten minutes, Hashgraph events are added to the network the instant they are created. They therefore don’t carry ten minutes of accumulated information and, as a result, are smaller, require less bandwidth, are easier to transmit and require less power. Ultimately, this network is claimed to be significantly more efficient than a blockchain and extremely fast — processing 10,000 cryptocurrency transactions per second per shard and reaching consensus in 3–7 seconds.

“BlockDAGs”

Sarah Azouvi, UCL PhD researcher, posed the question “can you have a blockchain without the proof of work?”. The answer, of course, is yes: many blockchains operate on proof of stake. However, as Azouvi pointed out, while blocks are created faster, there are still many disadvantages that pose a threat to the security of the system.

These include:

  • “nothing at stake”, where, in the event of a fork, there is no incentive to reach consensus on a single chain;
  • stake grinding, where adversaries use their computational power to increase the probability of becoming the leader by biasing the randomness in their favour;
  • and long-range attacks, where malicious actors attempt to rewrite the blockchain’s history.

In her paper “Betting on Blockchain Consensus with Fantomette”, co-authored with Patrick McCorry and Sarah Meiklejohn, Azouvi proposes Fantomette, a new “blockDAG”-based consensus protocol that focuses on the issue of leader election and, much like Graphchain, treats incentivisation as a matter of security.

Leader election is inherent to consensus protocols; however, due to their design and arguable purpose, such protocols are notoriously hard to scale. The Fantomette paper makes the point that the underlying leader election protocol of classical consensus is one of its biggest obstacles to scaling and proposes its Caucus construction as a possible solution. Designed for open blockchains, Caucus satisfies a traditional understanding of security while also ensuring that leaders are revealed only when they take action. This protects against the denial-of-service attacks to which leaders are vulnerable when their eligibility is revealed ahead of time.
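As a rough intuition, private leader election can work along the following lines. This is a toy hash-based scheme, not the actual Caucus construction, which uses verifiable cryptographic primitives; eligibility depends on a secret only the participant holds, so there is no advance target for a denial-of-service attack:

```python
# Toy, hash-based private leader election in the spirit of (but much
# simpler than) Caucus: each participant checks eligibility locally
# and only reveals it by acting.
import hashlib

THRESHOLD = 2**256 // 4  # hypothetical: roughly a 1-in-4 chance per round

def is_leader(secret_key: bytes, round_number: int) -> bool:
    # Only the holder of secret_key can compute this, so eligibility
    # stays hidden until the leader chooses to publish a block (along
    # with a proof that the computation was honest, elided here).
    digest = hashlib.sha256(secret_key + round_number.to_bytes(8, "big"))
    return int.from_bytes(digest.digest(), "big") < THRESHOLD

for rnd in range(5):
    if is_leader(b"alice-secret", rnd):
        print(f"round {rnd}: Alice may act as leader")
```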

Incentivisation — described in the paper as a “first-class concern” — is at the core of Fantomette’s security. Most non-proof-of-work proposals that address the question of incentivisation rely on a very basic game-theoretic analysis; they neither take into account the diversity of players nor create a system that can necessarily tolerate their presence. Fantomette instead uses a BAR model, meaning that it considers Byzantine, altruistic and rational players.

Fantomette puts forward an incentivisation scheme that does without resource-intensive proof of work. This, of course, removes the “implicit investment” made by miners in the form of hardware and security. To compensate, the protocol adds explicit punishments, while acknowledging that these are difficult to enforce in a blockchain with a straightforward fork-choice rule where the longest chain wins. To remedy this, Fantomette suggests a more complex fork-choice rule that requires participants to place a security deposit. The deposit makes punishment possible by giving players something to lose — an incentive for rational actors.

This more complex fork-choice rule is where the notion of a blockDAG comes into play. As the name suggests, a blockDAG is a directed acyclic graph of blocks: each block still refers to a single parent block, as in a blockchain, but it also references the other recent blocks it is aware of — “leaf blocks”. This latter property allows players to prove that they are following the rules and keeps the network well connected. Ultimately, the Fantomette protocol operates by asking participants to bet on the block with the strongest score, while also referencing the leaves they are aware of. A block becomes valid if it has the highest score.
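A toy sketch of a blockDAG fork choice under an assumed scoring rule. The score function here, the size of the sub-DAG a block references, is a placeholder of my own; Fantomette’s actual rule is more refined:

```python
# Toy blockDAG fork-choice sketch (Fantomette's actual scoring rule is
# more refined): each block names one parent plus the other "leaf"
# blocks it knows about, and here a block's score is simply the size
# of the sub-DAG it references, rewarding well-connected blocks.
blocks = {
    "genesis": {"parent": None, "leaves": ()},
    "b1": {"parent": "genesis", "leaves": ()},
    "b2": {"parent": "genesis", "leaves": ()},
    "b3": {"parent": "b1", "leaves": ("b2",)},  # acknowledges b2 as well
}

def past(block_id):
    # All blocks referenced directly or indirectly via parent/leaf edges.
    seen, stack = set(), [block_id]
    while stack:
        b = blocks[stack.pop()]
        refs = ([b["parent"]] if b["parent"] else []) + list(b["leaves"])
        for r in refs:
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

score = {b: len(past(b)) for b in blocks}
print(score)  # b3 scores highest (3): it references the most of the DAG
print(max(score, key=score.get))  # 'b3': the block participants bet on
```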

Ultimately, Fantomette bypasses the latency issues of proof of work and potentially overcomes the aforementioned threats to proof of stake.

Final Thoughts

It is evident that the desire for scalability has directed attention to the limitations of PoW specifically. The data structure of a DAG means that it can operate at a larger scale than a blockchain as, in theory, there is no limit on transaction throughput. DAGs are therefore seen as a solution to the inherent scalability problems of blockchain. They are more energy efficient than PoW while maintaining tokenisation, full decentralisation and the security that comes with it (i.e. no 51% threat).

On this, Patrick McCorry agrees that DAGs are exciting — noting that, unlike Bitcoin and Ethereum, they encourage forks on the network — but makes the point that “while it can strictly increase the network’s throughput, there is also significant overhead as a good portion of blocks may not be included in the final chain”. Ultimately, while it is tempting to view DAGs as the solution to end all solutions or the “blockchain killer” that promises faster transactions, we need to ask: what are we trading to achieve this?
