ICE-FROST: Identifiable Cheating Entity FROST Signature Protocol
Cryptography is at the heart of any blockchain project. Many of their useful properties (immutability, security, public verifiability…) come from cryptographic protocols, which themselves rely on common and well-studied mathematical problems.
Toposware is no exception: it uses its very own digital signature scheme to verify the authenticity of certificates, the data structures that certify cross-subnet messages. While this scheme is based on an existing construction, we modified it to better fit our needs by adding features the original version lacked.
In this blog post, we’ll not only explain how ICE-FROST, our static Schnorr-based threshold signature, works, but also the whole process behind our design choices.
The Schnorr algorithm
Digital signatures are everywhere, and if you’re not familiar with them, the Wikipedia page provides a good overview of the subject. To sum up the key properties, a digital signature must:
- Prove that someone possessing a (secret) signing key signed a message;
- Guarantee that the message was not tampered with between the signature creation and its verification;
- Be verifiable by anybody possessing the (usually public) verifying key.
There are countless different schemes, but blockchains mostly use three of them: ECDSA, Schnorr and BLS. All of them have their advantages and drawbacks, but they can all efficiently provide the three properties above.
In the Topos ecosystem, each subnet regularly publishes a certificate containing all cross-subnet transactions. To ensure the certificates are authentic, they are signed by the subnet’s validators. We decided to use Schnorr signatures for their shorter length, nice aggregation properties and the lack of pairing-based cryptography. As always, Wikipedia is a good source to learn more about it.
This scheme is simple to implement and reasonably secure under classic security assumptions. Still, we can do better.
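To make this concrete, here is a toy Python sketch of plain single-signer Schnorr. The group parameters (a small safe-prime subgroup standing in for an elliptic-curve group) and the message are illustrative only, not the values the real implementation uses:

```python
# Toy Schnorr signature: an order-Q subgroup of Z_P*, where G generates
# the subgroup. Real deployments use an elliptic-curve group instead.
import hashlib, random

P, Q, G = 2039, 1019, 4            # illustrative toy parameters

def H(*parts):
    """Toy hash into the scalar field (Fiat-Shamir challenge)."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

sk = random.randrange(1, Q)        # secret signing key
vk = pow(G, sk, P)                 # public verifying key

def sign(sk, vk, msg):
    r = random.randrange(1, Q)     # fresh secret nonce
    R = pow(G, r, P)               # public nonce commitment
    c = H(R, vk, msg)              # challenge binds nonce, key and message
    return R, (r + c * sk) % Q

def verify(vk, msg, sig):
    R, z = sig
    return pow(G, z, P) == R * pow(vk, H(R, vk, msg), P) % P

sig = sign(sk, vk, "hello Topos")
assert verify(vk, "hello Topos", sig)
```

The whole signature is just the pair (R, z), which is where the "shorter length" advantage comes from, and z being a simple linear expression in the secret key is what makes the aggregation tricks below possible.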
One major issue with the standard Schnorr algorithm is that you only have one signer, with a single secret signing key. This is fine for a normal transaction, but not for a certificate: it is supposed to originate from an entire subnet, with potentially hundreds (or thousands!) of participants. Since blockchains are all about decentralization, we wouldn’t want a single entity to be in charge of the signing of such an important piece of data.
That’s where threshold signatures come into play: in a (t, n)-threshold signature scheme, there are n different signers, each of them having their own signing key. To sign a message, you need the cooperation of at least t of them.

In our case, we want our subnet to nominate n potential signers, and require that at least t of them cooperate to sign a message. You can think of n - t as leeway to account for offline nodes, or malicious actors who would want to disrupt the protocol.
How do you do it? First, you need to split a private signing key into n shares while making sure that:
- t - 1 or fewer shares cannot produce a valid Schnorr signature;
- t or more shares can produce a valid Schnorr signature.

To do that, we use Shamir’s Secret Sharing, which is based on polynomial interpolation.
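For intuition, here is a minimal sketch of Shamir’s Secret Sharing over a toy prime field. The field (a Mersenne prime) and the secret are placeholders; a real implementation works over the scalar field of the curve the signature scheme uses:

```python
# Shamir's Secret Sharing: hide the secret as the constant term of a random
# degree t-1 polynomial; each share is one evaluation of that polynomial.
import random

P = 2**61 - 1  # toy prime field, stands in for the curve's scalar field

def split(secret, t, n):
    """Split `secret` into n shares such that any t of them recover it."""
    # f(x) = secret + a1*x + ... + a_{t-1}*x^{t-1}  (mod P)
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(a * pow(x, k, P) for k, a in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
assert reconstruct(shares[:2]) != 123456789   # 2 shares are not enough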
Second, we need a way to distribute these shares among the n participants securely. The naive approach would be to have a dealer generate a key, split it, and send the shares to the participants. But we don’t want a single entity to have that much power, and even if we did, how could a participant be sure that they received a correct share and not random data?

Fortunately, Verifiable Secret Sharing solves both problems. Basically, each participant creates a part of the signing key and privately sends subshares of it to all the other participants. Each participant can verify that the subshares they received are correct, and sum them to obtain their share. The group signing key, which is the sum of all the created parts, is never reconstructed. The group verification key, on the other hand, can be computed by anyone.
Finally, we must make sure that shares can actually be used to sign a message. And they can: because of the previously mentioned aggregation properties, each participant with a key share can produce a signature share. Then, with at least t signature shares, you can produce a valid signature with regard to the group verifying key.
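The mechanism behind this is Lagrange interpolation "in the exponent": each share enters the sum weighted by its Lagrange coefficient, and the same relation holds for the corresponding public values. A toy sketch with hypothetical parameters:

```python
# Why t shares are enough: the Lagrange-weighted sum of any t key shares
# equals the group secret, both as scalars and in the exponent.
import random

P, Q, G = 2039, 1019, 4            # toy group: G generates an order-Q subgroup of Z_P*

def lagrange_at_zero(i, signer_set):
    """Coefficient lambda_i such that sum(lambda_i * f(i)) = f(0) mod Q."""
    num, den = 1, 1
    for j in signer_set:
        if j != i:
            num = num * (-j) % Q
            den = den * (i - j) % Q
    return num * pow(den, -1, Q) % Q

# Shamir shares of a group secret key `s` (degree t-1 = 2 polynomial).
s = 777
coeffs = [s, random.randrange(Q), random.randrange(Q)]
share = {i: sum(a * i**k for k, a in enumerate(coeffs)) % Q for i in range(1, 6)}

signers = [1, 3, 5]                # any t = 3 of the n = 5 participants
# The weighted key shares recombine into the group secret...
assert sum(lagrange_at_zero(i, signers) * share[i] for i in signers) % Q == s
# ...and the weighted *public* shares recombine into the group verifying key.
group_vk = pow(G, s, P)
combined = 1
for i in signers:
    combined = combined * pow(pow(G, share[i], P), lagrange_at_zero(i, signers), P) % P
assert combined == group_vk
```

Because a Schnorr signature share is linear in the key share, the same weighted sum turns t signature shares into one signature under the group verifying key.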
So we now have a threshold Schnorr signature. Are we done yet? Not exactly.
The FROST Protocol
The basic construction from the previous section has several security flaws, especially during the signing phase. In particular, an adversary can combine signature shares from concurrent signing sessions to forge a signature on a message that was never intended (the Drijvers attack).
There are different ways to mitigate it, but they all require extra rounds of communication between the participants, which means a longer running time and a greater sensitivity to network perturbations. A good compromise is achieved by Flexible Round-Optimized Schnorr Threshold Signatures, or FROST.
In this construction, Komlo and Goldberg introduce random nonces that each participant (secretly) produces, and corresponding commitments that each participant broadcasts. Now a signature can be safely generated in two rounds of communication, or just one if a preprocessing step has been completed beforehand.
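To make the round structure concrete, here is a heavily compressed sketch of the two-round signing flow. This is not the actual implementation: key generation is collapsed into a trusted dealer for brevity, and the group, hash, and message are toy stand-ins:

```python
# FROST-style signing sketch: each signer commits to two nonces in round 1;
# round 2 binds them together with a hash-derived "binding factor", so shares
# from different sessions cannot be mixed and matched.
import hashlib, random

P, Q, G = 2039, 1019, 4            # toy group: G generates an order-Q subgroup of Z_P*

def H(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def lagrange(i, S):
    num = den = 1
    for j in S:
        if j != i:
            num = num * -j % Q
            den = den * (i - j) % Q
    return num * pow(den, -1, Q) % Q

# Key generation, simplified to a dealer: Shamir shares of a group secret s.
s = 777
share = {i: (s + 42 * i + 99 * i * i) % Q for i in range(1, 6)}
Y = pow(G, s, P)                                     # group verifying key

S, msg = [1, 3, 5], "certificate"
# Round 1: each signer i samples nonces (d, e) and broadcasts (G^d, G^e).
nonces = {i: (random.randrange(1, Q), random.randrange(1, Q)) for i in S}
comms  = {i: (pow(G, d, P), pow(G, e, P)) for i, (d, e) in nonces.items()}

# Round 2: binding factors tie every commitment to this message and session.
rho = {i: H(i, msg, sorted(comms.items())) for i in S}
R = 1
for i in S:
    D, E = comms[i]
    R = R * D * pow(E, rho[i], P) % P
c = H(R, Y, msg)                                     # the Schnorr challenge
# Each signer produces its signature share z_i; anyone can sum them up.
z = sum(nonces[i][0] + nonces[i][1] * rho[i] + lagrange(i, S) * share[i] * c
        for i in S) % Q
assert pow(G, z, P) == R * pow(Y, c, P) % P          # (R, z) verifies under Y
```

With preprocessing, round 1 can be run ahead of time for many future signatures, which is how FROST gets down to a single online round.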
The downside is that all signers must be honest. Even if just one of them submits an incorrect signing share, then the entire signing phase must be rerun. Still, a bad share can at least be detected, and the malicious signer can be excluded from the reruns.
Our own protocol is heavily based on FROST, but we added a few extras.
In the key generation phase of the FROST protocol, when a participant (secretly) receives a bad subshare, the protocol aborts. This is because there is no way for the others to tell if the complaining participant really received a bad share or if they are lying and maliciously accusing the sender. We solve this matter by introducing cheating identifiability, which greatly improves the robustness of the scheme.
Each pair of participants agrees on a key to encrypt and decrypt their mutual subshares. The subshares are then encrypted and broadcast for everyone to see. In case of complaint, the receiver of a bad share can reveal the shared key and prove that this key is indeed the one the pair agreed on. This lets us always determine who is at fault between the complainer and the accused.
Finally, we added static keys. In FROST, if you want to change the n participants, you need to generate a new group signing key, which means a new group verification key. This means that other subnets must always update their group verification keys after each change of participants, which is cumbersome and error-prone.
Instead, we modified the scheme so that it’s possible for a group of n participants to reshare the key to new participants: the old participants send subshares to the new participants, which add up to fresh shares of the same old secret key. Since the group verification key doesn’t change either, a subnet can use the same key during its entire lifetime.
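The resharing step above can be sketched as follows, over a toy field and with illustrative parameters (in the real protocol every sub-deal is verifiable, exactly as during key generation):

```python
# Static resharing sketch: t old participants re-deal their Lagrange-weighted
# shares to a brand-new set, and the new shares reconstruct the *same* secret.
import random

Q = 1019                                             # toy scalar field

def lagrange(i, S):
    num = den = 1
    for j in S:
        if j != i:
            num = num * -j % Q
            den = den * (i - j) % Q
    return num * pow(den, -1, Q) % Q

def eval_poly(cs, x):
    return sum(a * pow(x, k, Q) for k, a in enumerate(cs)) % Q

# Old shares of a secret s (threshold t = 3 out of n = 5).
s = 777
old_cs = [s, random.randrange(Q), random.randrange(Q)]
old_share = {i: eval_poly(old_cs, i) for i in range(1, 6)}

# Resharing: old participants {1, 3, 4} each hide lambda_i * share_i inside a
# fresh degree-2 polynomial and send evaluations to the new participants.
S_old, n_new, t_new = [1, 3, 4], 5, 3
new_share = {j: 0 for j in range(1, n_new + 1)}
for i in S_old:
    cs = [lagrange(i, S_old) * old_share[i] % Q]
    cs += [random.randrange(Q) for _ in range(t_new - 1)]
    for j in new_share:
        new_share[j] = (new_share[j] + eval_poly(cs, j)) % Q

# Any t_new of the new shares reconstruct the same old secret...
S_new = [2, 4, 5]
assert sum(lagrange(j, S_new) * new_share[j] for j in S_new) % Q == s
```

Because the secret at x = 0 never changes, neither does the group verification key other subnets hold, even though every individual share is brand new.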
And voilà! Our final protocol is called Static ICE-FROST (ICE stands for Identifiable Cheating Entity) and we implemented it in Rust.
Link to the paper: https://eprint.iacr.org/2021/1658.pdf
Link to the GitHub repository: https://github.com/Toposware/frost