Betting On Blockchain Consensus

This is a transcript of Sarah Azouvi’s presentation at Master Workshop: Layer 1 solutions, which took place at Primalbase AMS.

What are the differences between Bitcoin and traditional consensus protocols? Bitcoin is completely open. Participants can join and leave the protocol and we don’t even need to know who they are; it’s completely decentralised and open. This is opposed to traditional consensus protocols, where you need to have a fixed number of participants and you need to know who they are in advance. That’s a very big difference.

Another difference is that in Bitcoin there is only one message that is broadcast per round: the proof of work. Every miner who solves the proof of work broadcasts their block to the rest of the network, and the rest of the network accepts it and builds on it. For those of you who are familiar with traditional consensus protocols, everyone needs to send a message to everyone, so there is a lot of message complexity that we don’t have with Bitcoin. The main reason we can have a completely open and decentralised set of participants with only one message broadcast per round is due, mostly, to the proof of work and especially the incentive structure that is incorporated within it. That’s one of the novelties of proof of work.

This has a cost, and the cost is really high energy consumption. I’m sure you’ve heard all of the estimates; I think the last one was that proof of work was consuming as much as a million planes flying constantly. Some people say that it consumes more energy than Iceland or Ecuador. There are a lot of estimates, but everyone agrees that it is very high. It’s also very slow: with proof of work, there is one message every 10 minutes. Ultimately, the headline is that it’s not very scalable, at only 7 transactions per second as it is.

Beyond Proof of Work

Could we have the same guarantees as Bitcoin, but without these problems? Could you have a blockchain without proof of work? This is something that is studied within the research community. One of the main protocols being studied is called proof of stake. The idea is that, if you think of proof of work as a mechanism where the person who creates the block is elected with respect to their computational power, then proof of stake is the same except you get the right to create the block based on your amount of stake in the system. So, if you have 100 Bitcoin, you will be more likely to be ‘elected’ leader and win the right to create the block than someone who has 10 Bitcoin.
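To make that weighting concrete, here is a minimal sketch in Python of a stake-weighted lottery, where the chance of winning is proportional to stake. All names and values here are my own illustration; this is not the protocol’s actual election (which, as described later in the talk, uses a verifiable random function).

```python
import random

# Minimal sketch: elect a leader with probability proportional to stake.
# This mirrors how proof of work weights election by computational power.
def elect_leader(stakes: dict, rng: random.Random) -> str:
    """Pick one participant; chance of winning is proportional to stake."""
    total = sum(stakes.values())
    draw = rng.uniform(0, total)
    cumulative = 0.0
    for participant, stake in sorted(stakes.items()):
        cumulative += stake
        if draw <= cumulative:
            return participant
    return participant  # floating-point edge case: fall back to the last one

# With 100 coins versus 10, the first participant should win roughly
# ten times as often over many rounds.
stakes = {"with_100_btc": 100, "with_10_btc": 10}
rng = random.Random(42)
wins = {name: 0 for name in stakes}
for _ in range(1000):
    wins[elect_leader(stakes, rng)] += 1
```

Over the 1000 simulated rounds, the participant holding ten times the stake wins the large majority of elections.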

This way, blocks can be created faster, because we don’t need to have this 10-minute proof of work. You can create them instantly. But, there are some problems. One of them is called nothing at stake. This is the idea that if you can create a block instantly, without consuming any energy or any money, then imagine there is a fork — every miner in the network could just keep creating blocks on both forks because they don’t lose money by creating blocks. Then, miners are not really incentivised to reach consensus on one chain. They are just going to try to create blocks on every chain possible because they don’t know which chain is going to win.

Another problem is called grinding. Again, imagine you can create as many blocks as you want, without spending any money or consuming anything. Then you can try to grind, for example over the order of transactions in a block, to find a version that maybe gives you some advantage in the future.

Lastly, there are long-range attacks. The idea is that some attacker could bribe old participants in the protocol, for example by buying their secret keys or something, and then rewrite the entire history of the blockchain. Because it doesn’t take time to create blocks, they could actually do that quite easily. Those are the three big problems associated with removing proof of work.

There are some solutions that have been proposed: for example, Algorand, Ouroboros and Snow White. Many of them, Algorand for example, are based on PBFT. That is, they are based on more traditional consensus protocols where you exchange a lot of messages, so there is a big overhead of messages.

However, none of these protocols really look at the incentives and the economic arguments behind their solutions. Our idea is to leverage this economic component that I talked about in order to move away from a PBFT-style, traditional protocol and have something that is more scalable.

The Importance of Incentives

The main idea is to focus on incentives. In blockchains, incentives matter. Why do they matter? Because there have been a lot of failures around them. For example, if you take Bitcoin, a lot of attacks on its incentives have been found. Maybe you’ve heard about selfish mining; there are countless bribery attacks. All the attacks on incentives would not fit on one slide. But researchers don’t really know how to incorporate incentives into their designs, because it’s something that is quite new. We want to change that. We want to create a model that really takes incentives into account, and produce protocols from it.

In our model, we want to consider rational players. If you’re familiar with game theory, you will be aware of rational players: they are players that act in a way to maximise their expected utility. They are not usually considered in the security community, as this is very game-theoretic, but we wanted to incorporate them.

We also want to have byzantine, or malicious, players. Those, on the other hand, are considered in the security community. You can think of them as just bad people. We don’t really know why they do what they do; you just think of the worst person that is going to try to attack your protocol, even if it means losing money. Unlike a rational player, they don’t care about their incentives.

And, lastly, we also want to be able to consider coalitions. We’ve talked previously about mining pools, and we know that it’s a big problem because it goes against decentralisation. If we have more centralisation there are a lot of problems, like 51% attacks and so on. So we want our model to incorporate all these types of adversary.

Fortunately, there has been research within this area. What we are going to use is called the BAR model, which was presented by Jean-Philippe Martin and his co-authors. BAR stands for Byzantine, Altruistic and Rational. This means we are going to consider three types of player, and I’ve already explained who they are. Altruistic just means honest, a player that will follow the protocol, and rational means a player who just wants to maximise their profits.

In addition to the BAR model, we are going to consider a property called ‘robustness’. This was introduced by Ittai Abraham and his co-authors, and there are two main properties behind robustness. One is called ‘resilience’. The idea of resilience is to have a system that is resilient to coalitions: if players form a coalition, they are not going to be incentivised to deviate from the protocol. So their best interest, even if they form a coalition, is to follow the protocol. That’s pretty much an equilibrium that holds even with coalitions.

The other property is ‘immunity’. Immunity means that if you are in a protocol with a malicious player (one that doesn’t care about losing money), then even if they are present they are not going to be able to harm the honest players. It means that for everyone who follows the protocol, their utility will not be decreased by malicious players.

In addition to these more game-theoretic properties, we also want to have the traditional blockchain security properties. These have been studied within the community. There is chain growth; as the name suggests, it means that the chain grows, which is what we want from a blockchain. We want chain quality; the idea here is that an adversary cannot contribute more blocks than they are supposed to. So, if we are in proof of work, you should create blocks in proportion to your computational power and not more. And, lastly, we have common prefix: we want every participant to agree on their view of the blockchain.


Fantômette

There are two components to our Fantômette protocol. The first one is a leader election — remember that we said, with proof of work, you can kind of see it as a leader election process where the first person to solve the proof of work gets the right to create a block. We want to have a leader election to replace that. Instead of proof of work, we’re going to have some crypto involved and we are going to be able to elect someone.

We are going to use a publicly verifiable proof of eligibility — the idea is that you want to have a proof that says ‘look guys, I’m the leader’ that everyone can verify. This has to be publicly verifiable. The idea is that one block will elect at least one leader. Again, this is like in proof of work. You can think there can be forks, so there is at least one person that is going to be elected leader after a block. In addition to this leader election, we are going to present some incentive scheme to explain how we reach consensus.

I’ll start by talking about the leadership election process. What properties do we want for a leader election? First, we want fairness. I’ve talked about chain quality — an adversary should not contribute more blocks than they are supposed to. Fairness is the same; you want the leader elected in a fair way, you don’t want one person to be elected every time. Ideally, you want everyone elected, for example, with respect to their stake.

We want it to be unpredictable, meaning that no one can say, for example, that X is going to be the leader for the next block. This is because, if someone is able to say ‘I know who the next leader is going to be’, then they can mount an attack on X so that X is not able to broadcast their block. In addition to that, we also want private unpredictability. This means that I cannot even predict for myself whether I am going to be a leader in the future. For example, I am only going to be able to know whether I am the leader for the next block when I receive that block.

Lastly, we want liveness. Again remember that I talked about chain growth, liveness is a similar idea. We don’t want someone to be able to prevent us from finding a leader. We want a leader to be elected every round. In order to do that, we are going to be using a random beacon. A random beacon is a pseudo-randomly generated number, and the idea is that this number is going to be associated with each block.

Now, I’m going to go into the details, so it’s slightly more technical. Our protocol is inspired by the one proposed by Silvio Micali, called Algorand. Very briefly, the idea is that we are going to initialise a random beacon, then we are going to use a verifiable random function. For those of you who have never heard of it, the idea is that you generate a key pair, a public key and a secret key, and then using your secret key you are able to generate a random number, but in a way that is verifiable. This means that I am going to generate a number and everyone, using my public key, can verify that I generated this number correctly and that I didn’t cheat in order to advantage myself.

Then, we are going to use a verifiable random function in order to create a new random beacon. Then, we check if this number is less than our target, and if this is the case then we are elected leader. That was a bit quick, so if you didn’t understand anything on this slide then don’t worry — you can still understand the rest of the protocol. I think the main idea to retain is that every block is just going to elect at least one new leader. Also, unlike Algorand, in order to achieve liveness we use a verifiable delay function.
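A hedged sketch of that ‘output below a target’ eligibility check follows. A real implementation uses a verifiable random function, whose output is produced with the secret key and checked by anyone holding the public key; here a plain SHA-256 hash stands in for the VRF purely to illustrate the threshold test, so nothing below is actually verifiable or part of Fantômette itself.

```python
import hashlib

MAX_OUTPUT = 2 ** 256  # range of a SHA-256 digest read as an integer

def vrf_stand_in(secret_seed: bytes, beacon: bytes) -> int:
    """Stand-in for a VRF: derive a pseudo-random number from a seed and
    the current random beacon. (A real VRF output is publicly verifiable;
    a bare hash is not.)"""
    digest = hashlib.sha256(secret_seed + beacon).digest()
    return int.from_bytes(digest, "big")

def is_elected(secret_seed: bytes, beacon: bytes, target: int) -> bool:
    """Eligibility check: you are a leader if your number is below the target."""
    return vrf_stand_in(secret_seed, beacon) < target

# A target of MAX_OUTPUT // 4 elects roughly a quarter of seeds per block,
# and each new block's beacon re-runs the lottery for the next block.
beacon = b"beacon-of-previous-block"
target = MAX_OUTPUT // 4
leaders = [i for i in range(20)
           if is_elected(i.to_bytes(2, "big"), beacon, target)]
```

Because the beacon of the next block is needed as input, nobody can run this check ahead of time, which is the private unpredictability property described earlier.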

Now, I will explain the broader Fantômette protocol. Fantômette uses a blockDAG (which I believe you have already talked about), so I’ll quickly recap what it is. In our protocol, a block is betting on its parent block. The way we use the word ‘bets’ is just to say that a block is created on top of another block, so this is similar to a blockchain. However, because we use a blockDAG, a block also references other blocks.

Let me illustrate this [above]. Here, you see that, for example, block C was elected leader on top of block A, so block C ‘bets’ on block A. We also say that block C is going to say ‘look, I’m creating a block, I’m extending this chain, however I’m also aware that there is block B. I’m not extending this chain but I am aware of it.’ That’s the idea of blockDAG. Maybe the difference between our blockDAG and another one is that these two links are not the same. There is still the notion of a chain, even though we have a DAG structure.

In our protocol, the idea is that the better connection a block will have, the better its score is going to be. For example, we see that here [below], C has a lot of connections, so it’s going to have a higher score than block D, which has only one. Now, we want to break the tie using the random beacon that I mentioned previously. Just imagine that within each block, there is a number associated. So, people who see both A and B, they’re going to say ‘these look the same, how am I going to choose between them?’ So, you just choose the one that has the smaller number. For example, here we have 10 and 12, so we say that block A wins. That’s a way to tell a player how to choose the main chain.
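The scoring-plus-tie-break rule can be sketched like this. The data layout (a flat dict, with bets and references merged into one `links` list) and the numbers are my own illustration of the idea, not the paper’s definitions.

```python
# Sketch of the chain-selection rule: score a block by how many edges
# appear in the subDAG it points to (bets and references together),
# and break ties by the smaller random beacon included in each block.

def subdag_edges(name: str, blocks: dict) -> int:
    """Count all edges reachable from `name` through bets and references."""
    seen, edges, stack = set(), 0, [name]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        parents = blocks[current]["links"]
        edges += len(parents)
        stack.extend(parents)
    return edges

def best_block(candidates: list, blocks: dict) -> str:
    """Prefer the better-connected block; on a tie, the smaller beacon wins."""
    return min(candidates,
               key=lambda b: (-subdag_edges(b, blocks), blocks[b]["beacon"]))

# Toy DAG echoing the talk: C bets on A and also references B;
# D only bets on B.
blocks = {
    "A": {"links": [], "beacon": 10},
    "B": {"links": [], "beacon": 12},
    "C": {"links": ["A", "B"], "beacon": 7},
    "D": {"links": ["B"], "beacon": 3},
}
```

Here C out-scores D on connectivity, and A beats B on the tie-break because its beacon (10) is smaller than B’s (12).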

Now, the main idea behind the protocol is that a block can only reference blocks with a smaller score. Here, C is choosing block A (the winner), and then it is referencing block B (the loser). However, D (which is betting on the loser), is not allowed to reference block A. The idea behind that is that we want to catch cheaters. If block D was referencing block A, then it would be the same as if block D was saying ‘look, I’m aware that block A is the winner, but I don’t care, I’m going to choose the loser because I just prefer B — I have incentives to choose it.’ We don’t want to have that. That’s why this block will not be valid. The only way that block D can be valid is if it doesn’t reference A.

The idea behind that is that the main chain is going to grow faster. Because this is the main idea behind the protocol, I’m going to illustrate it one more time just to make sure it’s clear. So here, it’s clear that this chain [A → C] is the winning chain and this one [B → D] is the losing one. If someone wants to extend the losing chain, the only way they can do so is by pretending not to be aware of the winning chain, like block E here [above]. Whereas, if someone wants to extend the main chain, they can absolutely reference the losing chain, because they are following the rule. That’s the idea — if someone wants to cheat, they won’t be able to, because their chain is going to have a lower score due to this ‘connectivity’.
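The validity rule in this example (you may not reference a block that scores higher than the one you bet on) can be sketched as a simple check. Scores are supplied directly here for illustration; in the protocol they come from the DAG structure itself.

```python
# Sketch of the reference rule: a block is valid only if every block it
# references has a smaller score than the block it bets on.

def is_valid_block(bet_on: str, references: list, scores: dict) -> bool:
    """Referencing a higher-scoring block than your bet exposes a cheat."""
    return all(scores[ref] < scores[bet_on] for ref in references)

scores = {"A": 2, "B": 1}  # A is the winning block, B the losing one

# C bets on the winner A and may reference the loser B.
# D bets on the loser B, so referencing the winner A makes D invalid:
# it would prove D knew about A and chose to extend the loser anyway.
```

The only valid way to extend the losing chain is to omit the reference to A entirely, i.e. to pretend not to have seen the winning chain.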

Now, in terms of incentives, we are going to reward connectivity. As I said, a block is going to have a higher score if it is better connected to the rest of the chain, and the participant is also going to get a higher financial reward. Blocks that are not well connected, which means someone trying to cheat and create an alternative chain, will get a financial punishment.

Let’s go back to our security properties. Remember I talked about robustness — I said that we wanted the protocol to have the property that the coalition is incentivised to follow the protocol, and also that malicious players cannot harm honest players. The reason we have that is because we incentivise people to reference other blocks and blocks are more likely to win if they follow the protocol. Because blocks want to have as many connections as possible, you are also incentivised to publish your block as fast as possible — you want other players to be aware of your block so that they can reference it. That’s how we achieve robustness.

I also spoke about the security property associated with blockchains. We had chain growth and common prefix, the idea was that the chain is growing and also people have the same view of the chain. I put these two properties into one that I call ‘convergence’. The main reason we have this property is to allow the score of the main chain to grow faster, so even if an adversary wanted to create an alternate chain, the score of that alternate chain would be smaller than the main chain. Finally, chain quality — this idea of fairness. This is achieved with the fair leader election which I mentioned previously.

You’ll remember I told you about these attacks called ‘long-range attacks’. This is the idea that an adversary could, at almost no cost, rewrite the entire history of the blockchain. In order to thwart this attack, we add decentralised checkpointing. I won’t go into too much detail, but those of you who are familiar with traditional consensus protocols will recognise the idea; we use a similar one, incorporated within the DAG.

In addition to that, in our paper we also have some simulations. We simulated the protocol to see how it reacts to coalitions of players trying everything to maximise their expected utility, and how it reacts to a coalition of players that tries to financially harm the other players. We plotted their payoffs and we observed that a coalition is not really incentivised to deviate from the protocol, because the gain from doing so was very small. Similarly, we saw that if there was a player trying very hard to harm the honest players, the effect was really limited: they could reduce the honest players’ utility only by a small fraction.

We also plotted the longest fork that a malicious player could create. What we need to keep in mind is that, because blocks are created much faster, it’s OK to tolerate a longer fork; it’s not like having to wait 10 minutes per block for a confirmation. Similarly, we quantified the chain quality and checked whether players got the right proportion of blocks. This, again, was fine in our simulations.

To conclude, we use a blockDAG in order to enforce accountability: we know what the players are doing and we forbid them from cheating. We also incentivise the rational players to follow the protocol using an incentive scheme that rewards connectivity. We leverage incentives to have a blockchain-style consensus protocol, unlike other proof-of-stake protocols that work more like a PBFT protocol. Blocks are created faster, so this is also more scalable.

You can find the paper online if you want to read it.


Audience Q&A

Is the model for one-third bad guys?

One third, yeah.

Could you give me a bit of an understanding of why you couldn’t just take Algorand, say, and add some sort of incentive onto that? What is your system doing, I suppose, that Algorand couldn’t do by adding incentives?

What we do is actually very different, because Algorand has a PBFT style, so, you know, there is a big overhead. What we do here is nothing like that. We just have people who broadcast their blocks, and the chain grows as long as people are broadcasting their blocks. We just don’t have the PBFT style that Algorand has; that’s basically it.

On one slide, you showed that you have this blockDAG, right. You have the scoring mechanism, and you said that you use randomness to break the tie. So, my question is, if you have this DAG and the last two blocks basically have the same score, that’s where you use the randomness to break it?

Absolutely, yeah.

So why, as a participant, could I not just decide locally which one I want to extend? Because the quality of both are the same, right?


Well, my second question is: how do you implement the public randomness beacon?

So, the idea is that we’re going to bootstrap from proof of work to proof of stake and, in order to initialise the random beacon, we’re just going to do a normal coin-tossing protocol, which is kind of like commit-and-reveal using secret sharing. You could use any coin-tossing protocol. It has a lot of overhead, I agree, but we only do it once. So, that was the second question; now back to the first one.

Let’s imagine that here you have A and B that have the same score. What you’re saying is that this participant, D, can just choose that B is going to be the winner, right? So they can definitely do that, but the thing is that if they do that, they are not allowed to reference A. Otherwise, someone is going to receive block D and see that it is choosing B, even though D was aware of A, which has a smaller random beacon. So the cheating is obvious. Does that make sense? The random beacon is included in the block.

Where does the connectivity come from?

That’s a good question. When I say connectivity, it’s really in terms of the blockDAG. What I mean is basically, here, if you take the subDAG induced by C, how many links does it have? 1, 2, 3, 4. That’s what I mean by connectivity; it’s not in terms of the network layer or anything, it’s really in terms of the structure of the blockDAG.

I was wondering if you could speak more about the checkpointing. What kind of communication happens during it? Also if some keys before the checkpoint get compromised in their proof of stake voting, how would that go on? If I had some keys from before the checkpoint, their stake doesn’t matter anymore?

Ok, so because we are based on proof of stake, we kind of know who the participants are, because they are the people who have a stake in the system. This is something we have that we don’t have in proof of work, we have a list of who the people participating in the protocol are. You can join if you want, but you have to deposit some stake. So that’s actually an important concept.

Now, we are going to say that, for example, these two blocks, X1 and X2, are candidate blocks. Then, people are going to create blocks either on top of X1 or on top of X2, right? Whenever, for example, two-thirds of the participants have created a block on either of these, we say that X1 and X2 are justified. And then we do the same: whenever two-thirds or more of the participants have bet on either Y1 or Y2, we say that X1 and X2 are finalised.

The idea is that, once these blocks [Z1 and Z2] exist, then due to the properties of the DAG, there has to be some cross reference across the chain. After these [Z] blocks, an honest player would not accept a competing chain. That’s why I said it was kind of similar to traditional byzantine protocols, because you can think of these as prepare and commit.
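A rough sketch of the two-step counting behind this checkpointing, assuming a fixed, known set of stakeholders (which proof of stake provides). The ‘two-thirds or more’ threshold follows the description above; the participant names and flat data layout are illustrative assumptions.

```python
from fractions import Fraction

def supermajority(builders: set, participants: set) -> bool:
    """True when at least two-thirds of the participants have built on top."""
    share = Fraction(len(builders & participants), len(participants))
    return share >= Fraction(2, 3)

participants = {"p1", "p2", "p3"}
built_on_candidates = {"p1", "p2", "p3"}  # who built on X1 or X2
built_on_next_round = {"p1", "p2"}        # who built on Y1 or Y2

# X1 and X2 are justified after one round of two-thirds support,
# and finalised after a second round of support on top of that.
justified = supermajority(built_on_candidates, participants)
finalised = justified and supermajority(built_on_next_round, participants)
```

The two rounds play a role comparable to prepare and commit in traditional byzantine protocols, as noted above.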

So, would it be correct to say that essentially older blocks are accumulating their votes and at some point they get proper finality, and all the other blocks that are still accumulating their two-thirds have economic finality?

Yeah, that’s a very good summary.

Just to continue this question, imagine I have some stake in the genesis block. At some point, I sold all my coins but I have these private keys for my coins, and I continue to build an alternative history and to put checkpoints on this history. When a new participant joins, they can see two subDAGs or two chains of the same length, and all histories are correct and all checkpoints are correct. How do you handle it?

You cannot actually have decentralised checkpointing unless you have more than two-thirds of the keys, right? Whenever an honest participant is starting to create a new block after this, then they will not accept the competing chain. So, if you just broadcast your competing chain, they are not going to accept it. If you are not an honest participant, then you can continue pointing to this alternative chain, but two-thirds of the players are just going to ignore it completely.

I was wondering if you looked at Ethereum’s proof of stake proposal? This completely reminded me of Ethereum’s proposal and if you could outline the differences?

I’ve been looking at it, but I think it’s a bit hard to follow it. The version that they have online, for example on GitHub, it says ‘draft’ and there are some to-dos and things like that. I’m following it. I mean, to be fair, this idea of slashing, of punishing people, is inspired by Casper. But then I think it’s a bit hard to evaluate because they don’t really have like, proof, and they don’t have a full protocol, they have to-dos in there. I’m aware of it, and definitely some of it was inspired by Ethereum.
