Ledger Key Recovery: Understanding the Principles of MPC Wallets

Numen Cyber Labs · May 26, 2023

Introduction

On the night of May 16th, Ledger, the world’s largest cold wallet company, made a significant announcement. Charles Guillemet, the company’s chief technology officer, unveiled the latest addition to their product lineup, “Ledger Recover.” This service allows users to back up their personal wallet private key after completing a dual identity verification process using a passport and ID card.

Once obtained, the private key is split into three fragments, each entrusted to a separate independent company for safekeeping. If a user loses their private key, Ledger Recover lets them apply to any two of these storage companies to restore the private key, thus regaining access to their virtual wallet.

Originally conceived to enhance asset security and simplify operations, the new service has instead met with strong user dissatisfaction. Many users argue that the private key should never leave the confines of the secure chip, and they raise concerns about potential collusion and privacy breaches involving the three storage companies.

So, what exactly is the underlying technology behind Ledger’s new service? How secure is this technology? And how can user concerns be addressed? This article will provide a detailed explanation of the technical principles behind this new service.

1. Introduction to MPC

“Not your keys, not your assets.” This well-known maxim resonates strongly within the cryptocurrency industry, emphasizing the criticality of safeguarding private keys to protect one’s digital assets. However, existing decentralized cryptocurrency wallets often require users to either memorize lengthy mnemonic phrases or securely store private key files. This process is not only inconvenient but also prone to asset loss due to misplaced keys or forgotten phrases.

Moreover, hackers employ various tactics to steal private keys and mnemonics, amplifying the risk of asset loss. Consequently, the entry barrier for users seeking to enter the Web3 ecosystem becomes remarkably high, hindering potential growth.

So, how can this predicament be resolved? In recent times, several prominent platforms and wallet developers have introduced MPC wallet solutions, touting them as more secure alternatives to traditional wallets with lower operational barriers. Ledger’s key recovery service is one such solution.

But what lies at the core of this approach? Enter MPC, or Multi-Party Computation, a remarkable piece of cryptographic machinery. MPC refers to the collaborative computation of an agreed function by multiple participants in a trustless multi-user network, with no single trusted entity involved. The participants obtain the computation result, yet none of them can deduce another party’s original data from the messages exchanged during the computation.

In 1982, Academician Andrew Yao presented the renowned “Yao’s Millionaires’ Problem”: Two millionaires, Alice and Bob, meet on the street and desire to determine who is wealthier without revealing their exact wealth to each other. How can they achieve this without involving a third party? Let’s assume Alice has a private number “a,” and Bob has a private number “b.” Their objective is to establish whether the inequality “a ≤ b” holds true or not. Importantly, apart from this determination, no further information about “a” or “b” should be revealed. Academician Yao addressed this problem using oblivious transfer and garbled circuits, paving the way for privacy-preserving computations in MPC.

Beyond this simplified problem, MPC has evolved to encompass various challenges, including aggregation, union, mathematical calculations, and multi-signatures, all of which have been correspondingly addressed in its development. One solution to the multi-signature problem is the Threshold Signature scheme (TSS), which forms the basis for contemporary MPC wallets.

2. TSS Principles

The Threshold Signature Scheme (TSS) encompasses a class of signature schemes designed to enhance security and mitigate the risks associated with a single point of failure. At its core, the TSS approach involves dividing the private key K into multiple parts and requiring a defined threshold number of these parts to collaborate in order to generate a signature, effectively achieving the same outcome as signing with the complete private key.

The fundamental principle of threshold signatures is rooted in threshold cryptography, a branch of cryptography focused on secret splitting: a secret is divided into multiple parts, and at least a threshold number of parts is required to reconstruct it. In a threshold signature scheme denoted {t, n}, the private key is partitioned into n parts, and a valid signature requires the participation of at least t of them. Importantly, no matter how the key fragments are combined during signing, only the signature is produced; the full private key is never reconstructed, and no individual fragment is disclosed to the other participants.

Within this process, two primary challenges must be addressed. The first is generating the private key fragments without reliance on a trusted third party, commonly known as the Distributed Key Generation (DKG) process. The DKG process employs diverse secret sharing algorithms and secret parameters, leading to different variations of threshold signature algorithms, each with its own distinctive characteristics. The second is producing a signature from those fragments without ever reconstructing the full private key, which is the subject of the TSS signing process described in Section 2.2.

2.1 DKG Process

The DKG process is essentially a secret-sharing process without a centralized distributor (Dealer). Before delving into the DKG process, let’s first understand what secret sharing is.

2.1.1 Secret Sharing

Secret sharing is a crucial technique in the realm of information security and data confidentiality, playing a vital role in securely storing, transmitting, and utilizing sensitive information and secret data. The concept of secret sharing was initially introduced by Adi Shamir, the distinguished recipient of the 2002 Turing Award, in 1979. Shamir devised secret sharing algorithms to address the challenge of splitting a secret value into n parts and enabling its reconstruction when t (where t ≤ n) members provide their respective shares.

Secret sharing involves two fundamental algorithms: the secret share distribution algorithm and the secret recovery algorithm. In the secret share distribution algorithm, the distributor, also known as the Dealer, divides the secret into multiple shares or pieces, distributing them among a group of participants. Each participant receives a secret share that relates to the original secret. The secret recovery algorithm ensures that only specific subsets of participants, referred to as qualified subsets, can successfully recover the secret. Other subsets, however, are unable to retrieve the secret or gain any meaningful information about it.

The core principle underlying Shamir’s Secret Sharing (SSS) algorithm lies in the unique determination of polynomials by points. Specifically, two points determine a linear polynomial (a straight line), three points determine a quadratic polynomial (a parabola), and so on. Therefore, t points can uniquely determine a polynomial of degree (t-1). By constructing a (t-1)-degree polynomial, where each of the n members corresponds to a point on the polynomial, the recovery of the (t-1)-degree polynomial only requires t points. As a result, any t members possessing the necessary points can recover the same (t-1)-degree polynomial. By encoding the secret value into the polynomial, specifically using the y-value when x is 0 as the secret value (which corresponds to the constant term of the polynomial), the original secret value can be successfully recovered.

The following worked example illustrates the underlying principle of a {3,4} secret sharing scheme. Since t = 3, we construct a polynomial of degree t - 1 = 2:

f(x) = s + a₁x + a₂x²

In this context, “s” represents the secret value, while “a₁” and “a₂” are random numbers. Let’s denote the values of “f(x)” as partial secrets “sᵢ” for “x” taking on the values of 1, 2, 3, and 4. These partial secrets are then distributed to four members through the assistance of a trusted third party, often referred to as the Dealer. Each member is assigned a specific point on the polynomial, with x-values corresponding to 1, 2, 3, and 4.

To reconstruct the secret value “s” from three partial secrets, the following steps are involved: the three members contribute their respective partial secrets, forming a set of points such as (1, s₁), (2, s₂), and (3, s₃). These points are used to reconstruct a unique degree-2 polynomial, denoted “h(x)”. The reconstruction typically uses the Lagrange interpolation method; however, the details of that process will not be explored here.

It is important to note that the reconstructed polynomial “h(x)” is equivalent to the original polynomial “f(x),” thereby implying that “s = h(0) = f(0).”
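
As a concrete illustration, here is a minimal Python sketch of the {3,4} scheme just described, over a small prime field. The prime and the secret are toy values chosen for readability; real implementations work modulo a large prime with constant-time arithmetic.

```python
import secrets

P = 2087                                   # small field prime (toy value)

def make_shares(secret, t=3, n=4):
    """Sample a random degree-(t-1) polynomial f with f(0) = secret,
    and hand participant i the point (i, f(i))."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers f(0), the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(1234)
assert recover(shares[:3]) == recover(shares[1:]) == 1234   # any 3 of 4 suffice
```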

2.1.2 Verifiable Secret Sharing (VSS)

In the SSS (Shamir’s Secret Sharing) scheme, it is assumed that the Dealer is trusted. However, if the Dealer behaves maliciously by distributing incorrect values as partial secrets to the participants, it can have severe consequences, resulting in the inability to recover the correct secret value.

In 1987, Feldman introduced a practical scheme for Non-interactive Verifiable Secret Sharing (VSS) in his paper. The purpose of this scheme is to detect any malicious behaviour on the part of the Dealer. It is worth mentioning that the concept of VSS was initially proposed by Benny Chor, Shafi Goldwasser, Silvio Micali, and others in their 1985 paper titled “Verifiable Secret Sharing and Achieving Simultaneity in the Presence of Faults.”

Feldman’s paper presents a scheme that relies on polynomial coefficient commitment. In addition to distributing the secret fragments sᵢ = f(i), the Dealer broadcasts a commitment to each polynomial coefficient aⱼ:

cⱼ = g^(aⱼ), for j = 0, 1, …, t-1

Each participant, denoted as Pᵢ, verifies the received partial secret sᵢ by leveraging the properties of discrete logarithm operations. The verification is successful if the equation

g^(sᵢ) = c₀ * c₁^i * c₂^(i²) * … * cₜ₋₁^(i^(t-1))

holds true, where c₀, c₁, c₂, …, cₜ₋₁ are the broadcast commitments and “i” is the participant’s index. If the equation does not hold, the protocol is terminated.
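
Under this reconstruction, the check is mechanical. The sketch below uses a tiny Schnorr group (p = 2q + 1, with g of order q); the parameters are toy assumptions chosen for readability, not secure values.

```python
import secrets

# Tiny Schnorr group for illustration: p = 2q + 1, g = 4 has order q. NOT secure.
p, q, g = 2039, 1019, 4

t = 3
coeffs = [secrets.randbelow(q) for _ in range(t)]    # a_0 is the secret
commitments = [pow(g, a, p) for a in coeffs]         # Dealer broadcasts c_j = g^(a_j)

def share(i):
    """Partial secret s_i = f(i) = a_0 + a_1*i + a_2*i^2 (mod q)."""
    return sum(a * i**j for j, a in enumerate(coeffs)) % q

def verify(i, s_i):
    """Check g^(s_i) == c_0 * c_1^i * c_2^(i^2) * ... (mod p)."""
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = rhs * pow(c, i**j, p) % p
    return pow(g, s_i, p) == rhs

assert all(verify(i, share(i)) for i in (1, 2, 3, 4))
assert not verify(2, share(2) + 1)   # a corrupted share is detected
```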

In 1991, Pedersen proposed Pedersen’s Verifiable Secret Sharing (VSS) scheme in his paper titled “Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing.” Unlike Feldman’s scheme, Pedersen’s scheme utilizes Pedersen commitments of the form C = mG + nH. In this context, “m” is derived from the coefficients of the key-constructing polynomial f(x), while “n” is derived from the coefficients of another polynomial h(x) randomly generated by the Dealer. The set of commitments {Cⱼ, 0 ≤ j < t} serves as publicly accessible evidence, demonstrating that the Dealer has committed to a single legitimate key polynomial.

In terms of security, Pedersen’s VSS is unconditionally (information-theoretically) hiding, while Feldman’s scheme, because the commitment c₀ = g^(a₀) to the secret itself is public, is only computationally secure, relying on the hardness of the discrete logarithm problem. However, Feldman’s scheme tends to be more efficient in practical implementations than Pedersen’s.

2.1.3 Pedersen DKG

A true Distributed Key Generation (DKG) eliminates the central authority present in Verifiable Secret Sharing (VSS) schemes, enabling secrets to be generated in a distributed, collaborative manner and avoiding a single point of compromise. The principle is simple: each of the n participants selects its own secret value and runs its own VSS protocol, in parallel with the others. Each participant then collects the secret shares sent by the other participants and sums them to assemble its own share of the true private key. The sum of the secrets chosen by the legitimate participants forms the final private key, and the correctness of each share is verified using the broadcast commitments.

Taking the example of a {2,3} threshold signature, the polynomial used is f(x) = s + a₁x, which should be familiar to most. Let’s assume that each participant generates a similar polynomial as follows:

A: f(x) = 2x + 5

B: f(x) = 9x + 7

C: f(x) = x + 3

In reality, these three polynomials can be summed up to form the final secret key polynomial f(x) = 12x + 15, where 15 is the ultimate aggregated key. If a Dealer is involved, this polynomial would be the key-sharing polynomial generated by the Dealer.

However, participants A, B, and C only know that such a polynomial exists; none of them knows its specific form. This is precisely the point of the scheme: the sharing process is carried out without anyone ever learning the aggregated key polynomial.

In simple terms, each participant acts as a Dealer to distribute secret key shares to other participants. The nodes then assemble the received shares to form new secret key shares, resulting in a decentralized scheme without a central Dealer.
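
The whole flow can be reproduced in a few lines. The sketch below reuses the three example polynomials from above, over the plain integers to keep the arithmetic visible (a real implementation works in a finite field, and each f_j(x_i) would travel over a private channel rather than being read from a shared dictionary):

```python
from fractions import Fraction

# The three example polynomials from the text, as (constant, slope) pairs.
polys = {"A": (5, 2), "B": (7, 9), "C": (3, 1)}
xs = {"A": 1, "B": 2, "C": 3}        # each participant's evaluation point

# Each participant i receives f_j(x_i) from every participant j and sums
# the received values into its own share of the aggregated key polynomial.
share = {i: sum(c0 + c1 * xs[i] for c0, c1 in polys.values()) for i in xs}
# share[i] now equals f(x_i) for f(x) = 12x + 15, although no participant
# ever sees that aggregated polynomial itself.

def recover(i, j):
    """Lagrange interpolation at x = 0 from any two shares."""
    xi, xj = Fraction(xs[i]), Fraction(xs[j])
    return share[i] * xj / (xj - xi) + share[j] * xi / (xi - xj)

assert recover("A", "B") == recover("B", "C") == 15   # the aggregated key
```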

2.2 TSS Process

According to the current popular signature algorithms, mainstream Threshold Signature Scheme (TSS) algorithms can be mainly categorized into two types: those based on ECDSA and those based on Schnorr.

Taking ECDSA as an example, let’s first look at a standard (single-party) signing process. Assume the message to be signed is “m” and its hash is “H(m)”. Let “G” be the base point of the curve and “q” the order of the group it generates.

  • Generate a private key “x” and compute the public key “Q = x * G” (Key Generation process).
  • Randomly select “k” from Z_q (a random one-time blinding key).
  • Calculate “R = k * G” (generate a blinding public key).
  • Calculate “r = r_x mod q”, where “r_x” is the x-coordinate of “R”.
  • Calculate “s = k^(-1) * (H(m) + r * x) mod q”. The resulting values of (r, s) are the ECDSA signature data.

The verification process only requires checking that

r = R'_x mod q

holds, where

R' = s^(-1) * H(m) * G + s^(-1) * r * Q

and “R'_x” is the x-coordinate of the computed point “R'”.
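
Putting the steps above together, here is a self-contained toy implementation of single-party ECDSA over secp256k1. It is a study sketch, not a production implementation: a real system must use a vetted library with deterministic nonces (RFC 6979) and constant-time arithmetic.

```python
import hashlib, secrets

# secp256k1 domain parameters
P = 2**256 - 2**32 - 977          # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order q
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    """Add two points on y^2 = x^3 + 7 (None is the point at infinity)."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if A == B:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mul(k, A):
    """Double-and-add scalar multiplication k * A."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, A)
        A = point_add(A, A)
        k >>= 1
    return R

def Hm(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

def sign(x, m):
    k = secrets.randbelow(N - 1) + 1          # one-time blinding key
    r = scalar_mul(k, G)[0] % N               # x-coordinate of R = k * G
    s = pow(k, -1, N) * (Hm(m) + r * x) % N
    return r, s

def verify(Q, m, sig):
    r, s = sig
    u1, u2 = Hm(m) * pow(s, -1, N) % N, r * pow(s, -1, N) % N
    Rp = point_add(scalar_mul(u1, G), scalar_mul(u2, Q))  # R' = u1*G + u2*Q
    return Rp is not None and Rp[0] % N == r

x = secrets.randbelow(N - 1) + 1              # private key
Q = scalar_mul(x, G)                          # public key Q = x * G
assert verify(Q, b"hello MPC", sign(x, b"hello MPC"))
```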

In contrast, a Schnorr signature is the pair (R, s), where “R = k * G” is the blinding public key and “s = k + Hash(Q, R, m) * x mod q”, with “x” the private key, “Q” the public key, and “m” the message to be signed. The verification process checks whether “s * G = R + Hash(Q, R, m) * Q”.

In the signing phase of TSS, each participant independently produces a share of the signature from its own secret key share, and these shares are then combined into the final threshold signature. Each signing operation also involves a DKG-style process to generate a shared secret value, typically the random nonce “k”. The implementation details depend on the underlying digital signature algorithm. In the Schnorr algorithm, the signature “s” is linear in the secret values, so shares of the secret values can simply be added. ECDSA, by contrast, involves non-linear operations on the two secret values “k” and “x”, so a different formula must be defined to combine and distribute their shares in a distributed manner.
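
To see this linearity concretely, the sketch below splits both the private key and the nonce additively between two parties and shows that their independently produced partial signatures add up to a valid Schnorr signature. For brevity it works in a toy multiplicative Schnorr group (an assumption made for readability); on an elliptic curve the algebra is identical, with point addition in place of multiplication.

```python
import hashlib, secrets

# Tiny Schnorr group: p = 2q + 1, g = 4 has order q. Toy values, NOT secure.
p, q, g = 2039, 1019, 4

def challenge(Qp, R, m):
    """e = Hash(Q, R, m) mod q"""
    return int.from_bytes(hashlib.sha256(f"{Qp}|{R}|{m}".encode()).digest(), "big") % q

# Each party holds additive shares of the private key x and the nonce k.
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
k1, k2 = secrets.randbelow(q), secrets.randbelow(q)

# Public values combine multiplicatively (g^x1 * g^x2 = g^(x1 + x2));
# the full x and k are never materialized anywhere.
Qp = pow(g, x1, p) * pow(g, x2, p) % p   # joint public key
R  = pow(g, k1, p) * pow(g, k2, p) % p   # joint nonce commitment

m = "hello threshold Schnorr"
e = challenge(Qp, R, m)

# Each party signs using only its own shares ...
s1 = (k1 + e * x1) % q
s2 = (k2 + e * x2) % q
# ... and the partial signatures simply add up to a valid signature.
s = (s1 + s2) % q

# Standard Schnorr verification: g^s == R * Q^e (mod p)
assert pow(g, s, p) == R * pow(Qp, e, p) % p
```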

In 2001, MacKenzie and Reiter proposed a Two-party ECDSA signature scheme in their paper titled “Two-Party Generation of DSA Signatures”. This paper laid the foundation for subsequent TSS schemes based on ECDSA, with further improvements in usability and security.

2.2.1 Calculating Public Q and R

The approach proposed by MacKenzie and Reiter is as follows: Participant P₁ randomly generates x₁ and k₁ (similar to standard ECDSA). Here, x₁ needs to be kept secret, while k₁ is a one-time-use value. Similarly, Participant P₂ randomly generates x₂ and k₂. The secret values x and k are computed as follows: k = k₁ * k₂ and x = x₁ * x₂. Since P₁ cannot disclose x₁ and k₁, and P₂ cannot disclose x₂ and k₂, the total secret values x and k cannot be obtained directly. Instead, a Diffie-Hellman key exchange process is utilized to derive the overall public key Q and the overall blinding public key R directly.
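
A minimal sketch of that derivation, again in a toy multiplicative group for brevity (illustrative parameters; the same exchange works with elliptic-curve points, e.g. R = k₁ * (k₂ * G)):

```python
import secrets

# Toy multiplicative group: p = 2q + 1, g = 4 has order q. NOT secure values.
p, q, g = 2039, 1019, 4

x1, k1 = secrets.randbelow(q - 1) + 1, secrets.randbelow(q - 1) + 1  # P1's secrets
x2, k2 = secrets.randbelow(q - 1) + 1, secrets.randbelow(q - 1) + 1  # P2's secrets

# P1 publishes g^x1 and g^k1; P2 raises them to its own exponents
# (and symmetrically for P2's values), so neither secret is revealed.
Q = pow(pow(g, x1, p), x2, p)   # joint public key, equals g^(x1 * x2)
R = pow(pow(g, k1, p), k2, p)   # joint blinding public key, equals g^(k1 * k2)

assert Q == pow(g, x1 * x2, p) and R == pow(g, k1 * k2, p)
```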

2.2.2 Computing Public s

Up until now, the concepts discussed have been relatively straightforward. However, the challenge arises when it comes to generating the public s in a threshold signature scheme based on ECDSA:

How can we compute s without P₁ revealing x₁ and k₁, and P₂ revealing x₂ and k₂? The paper employs the Paillier homomorphic encryption scheme, known as the Paillier Cryptosystem.

First, s is decomposed into partial shares:

s = k^(-1) * (H(m) + r * x) = k₂^(-1) * H(m) * k₁^(-1) + k₂^(-1) * r * x₂ * (k₁^(-1) * x₁)

Next, note that only P₁ knows the factors k₁^(-1) and k₁^(-1) * x₁. This allows us to utilize Paillier homomorphic encryption: P₁ encrypts these two values separately and sends the ciphertexts to P₂, who performs the remaining computation directly on the encrypted values. Finally, P₁’s Paillier private key is used to decrypt the final result.

The specific process is as follows: P₁ sends E_pk(k₁^(-1)) and E_pk(k₁^(-1) * x₁) to P₂. P₂, who knows k₂^(-1) * H(m) and k₂^(-1) * r * x₂, homomorphically computes E_pk(s) = E_pk(k₁^(-1))^(k₂^(-1) * H(m)) * E_pk(k₁^(-1) * x₁)^(k₂^(-1) * r * x₂) and returns the result to P₁, who decrypts it to obtain s.
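
The MtA step rests on two homomorphic properties of Paillier encryption: multiplying two ciphertexts adds the underlying plaintexts, and raising a ciphertext to a constant multiplies the plaintext by that constant. The toy implementation below demonstrates both; the hard-coded primes are illustrative assumptions, far too small for real use.

```python
import secrets

# Toy Paillier parameters: tiny primes chosen for readability, NOT secure.
p_, q_ = 1789, 2003
n, n2 = p_ * q_, (p_ * q_) ** 2
g = n + 1                                # standard choice of generator
phi = (p_ - 1) * (q_ - 1)
mu = pow(phi, -1, n)                     # precomputed decryption constant

def enc(m):
    r = secrets.randbelow(n - 2) + 2     # random blinding; coprime to n w.h.p.
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(ct):
    return (pow(ct, phi, n2) - 1) // n * mu % n   # L(ct^phi mod n^2) * mu mod n

a, b, k = 123, 456, 7
assert dec(enc(a) * enc(b) % n2) == a + b        # Enc(a) * Enc(b) -> a + b
assert dec(pow(enc(a), k, n2)) == (a * k) % n    # Enc(a)^k       -> a * k
```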

2.2.3 Preventing Rogue Key Attacks

So far, the protocol described assumes that the participating parties are honest. In real-world scenarios, however, P₁ could maliciously construct E_pk(k₁^(-1)) and E_pk(k₁^(-1) * x₁) in order to extract information about P₂’s x₂ and k₂. This is known as a Rogue Key Attack.

To prevent such attacks, P₁ needs to prove, without revealing k₁ and x₁, that the encrypted data provided to P₂ does indeed include k₁^(-1) and x₁. Additionally, P₁ needs to demonstrate their knowledge of the private key corresponding to the Paillier public key.

Both of these proofs can be accomplished using zero-knowledge proofs. Typically, the Schnorr Proof protocol is employed for this purpose. For detailed information on zero-knowledge proofs, you can refer to previous articles on the subject:

https://www.numencyber.com/introduction-to-zero-knowledge-proofs-part-1/

https://www.numencyber.com/introduction-to-zero-knowledge-proof-part-2/

https://www.numencyber.com/introduction-to-zero-knowledge-proof-part-3/

Zero-knowledge proofs are crucial in this context: without them, a malicious participant can cause serious security issues, up to and including the complete disclosure of private keys.

Furthermore, the process of transforming multiplication into addition using the Paillier algorithm, as described in the previous section, is known as the Multiplicative-to-Additive (MtA) process. In this process, P₁ needs to prove that the ciphertexts E_pk(k₁^(-1)) and E_pk(k₁^(-1) * x₁) are indeed valid encryptions under its published Paillier public key and were not constructed through deceptive operations.

This requirement arises from the fact that the Paillier encryption algorithm requires the plaintext to be smaller than the Paillier modulus (the decryption algorithm involves a modulo operation, which limits plaintexts to the Paillier operation space, also known as the order). If an attacker feeds deliberately large numbers into the protocol, it can lead to the complete disclosure of private keys. Relevant attacks are described in the paper “Alpha-Rays: Key Extraction Attacks on Threshold ECDSA Implementations”.

To address this situation, Range Proofs from zero-knowledge proofs are introduced to demonstrate that the encrypted plaintext is within the range of the encryption space.

In blockchain applications, the values of k and x to be encrypted typically lie within the secp256k1 group order, which is 256 bits long, whereas the Paillier modulus is typically 2048 bits. As a result, some TSS libraries used in blockchain omit the Range Proof component.

It is important to note that certain new TSS algorithms do not utilize Paillier homomorphic encryption for the MtA process, allowing for the omission of the Range Proof. For instance, the CCLST algorithm employs the CL Scheme homomorphic encryption algorithm, which does not encounter the issue of plaintext exceeding the order, thereby eliminating the need for Range Proofs.

3. Introduction to Mainstream TSS Algorithms

The MacKenzie-Reiter algorithm, owing to the complexity and computational cost of its zero-knowledge proof process during the DKG phase, never reached practical engineering standards. In recent years, however, with advances in zero-knowledge proofs, new TSS algorithms have been proposed and widely applied in multi-signature wallet solutions within the blockchain domain. Below, we introduce the more widely used TSS algorithms, categorized according to the underlying signature algorithm.

3.1 TSS based on ECDSA

3.1.1 Lindell17

In 2017, Yehuda Lindell (Chief Cryptographer at Coinbase) presented a faster two-party ECDSA scheme in the paper “Fast Secure Two-party ECDSA Signing.” This scheme introduces a random value ρ during the construction of s, which enhances security while reducing the cost of the zero-knowledge proof process.

A complete open-source implementation of the Lindell 17 two-party scheme is available at: https://github.com/ZenGo-X/gotham-city/blob/master/white-paper/white-paper.pdf. The library “blockchain-crypto-mpc” implements an additive-sharing variant of the Lindell 17 scheme, which facilitates key resharing, and it also includes the Range Proof component in the MtA process.

Recently, OKX has also open-sourced their TSS library for their own wallet, which can be accessed at https://github.com/okx/threshold-lib. Based on specific business requirements, they utilize Feldman’s VSS to construct a {2,n} threshold signature algorithm based on the Lindell 17 scheme. However, it should be noted that the implementation of Range Proof was not found in this code repository.

3.1.2 GG18

Due to the practicality of the Lindell 17 scheme, many {t, n} signature schemes have been proposed based on it. In 2018, Rosario Gennaro (Professor at the City College of New York and Cryptographer at Protocol Labs) and Steven Goldfeder (CEO of Arbitrum) introduced a general {t, n} signature scheme in their paper “Fast Multiparty Threshold ECDSA with Fast Trustless Setup,” known as the GG18 scheme. The GG18 scheme provides a unified expression for both DSA and ECDSA and has become the first influential and practical ECDSA {t, n} threshold multi-signature protocol. Many subsequent threshold multi-signature protocols can be viewed as optimized versions from different perspectives of GG18.

Compared to two-party ECDSA signatures, the GG18 scheme involves four rounds in the DKG phase, including three rounds of key distribution and one round of public key verification. Each participant shares their secret value using the {t, n} Feldman-VSS method with all other participants. Then, based on Pedersen’s DKG, Schnorr proofs are used to ensure the consistency of public-private key pairs among the participants.

In the signature phase, the GG18 scheme requires a total of nine rounds, with two MtA processes between each pair of participants.

Some implementations of GG18, such as those by Binance and Safeheron, include Range Proofs, while others, like the implementation by ZenGo, do not incorporate Range Proofs.

3.1.3 GG20

Following GG18, Rosario Gennaro and Steven Goldfeder published a paper in 2020 titled “One Round Threshold ECDSA with Identifiable Abort,” known as GG20. It improves upon GG18 in the following aspects:

  1. Reduced number of signature rounds: The signature process in GG18 requires 9 rounds, while GG20 simplifies it to an offline pre-processing stage and a concise online signature stage, requiring only 7 rounds. Furthermore, the first 6 rounds of offline pre-processing are independent of the message to be signed, allowing them to be performed in advance.
  2. Identification of malicious participants: In GG18, if there are malicious participants, the protocol terminates, but it does not identify the specific malicious participants. In GG20, a commitment form with Identifiable Abort is introduced, enabling the identification of responsibility in case of failure. This helps prevent continuous malicious behaviour from causing a DDoS-like scenario in the protocol.

Libraries that have implemented the GG20 protocol include those from ZenGo, Binance, and Safeheron. It is worth noting that ZenGo’s code repository does not include Range Proof implementations for GG18 and GG20, although its Lindell 17 repository does implement them. The other two repositories provide Range Proof implementations for the MtA process.

3.2 Schnorr-based TSS

Compared to ECDSA-based threshold signature schemes, the development of Schnorr-based threshold signature schemes has been relatively recent. The most advanced scheme in this category is FROST (Flexible Round-Optimized Schnorr Threshold Signatures), introduced by Chelsea Komlo and Ian Goldberg in their 2020 paper.

The DKG (Distributed Key Generation) process in FROST is similar to that of ECDSA-based protocols. However, the signature process is simpler and consists of only two rounds. The first round, which doesn’t involve the message being signed, can be performed in advance. By examining the standard Schnorr signature, it becomes evident that the final signature (R, s) is linearly additive with respect to the secret values x_i and k_i.

As a result, the final signature can be obtained by adding the signatures of the t participating parties. This eliminates the need for homomorphic encryption and the Multiplicative-to-Additive (MtA) transformation, thus reducing the computational overhead associated with homomorphic operations and Range Proofs.

The FROST signature scheme offers several advantages. It is considered the simplest and most efficient solution for threshold signatures, without significant security concerns. However, one drawback is that each node must maintain a Merkle Tree, and the required data storage increases with the size of the threshold {t, n}.

FROST signatures have seen limited adoption so far: they are currently applicable mainly to Bitcoin after the Taproot upgrade (which introduced Schnorr signatures) and to a few non-mainstream blockchain projects, and they are not yet compatible with the more widely used ECDSA signature scenarios.

Zcash has developed an open-source code repository for FROST, which can be found at: https://github.com/ZcashFoundation/frost. This algorithm allows for the detection of malicious participants.

4. Recommendations for Ledger Key Recovery

Based on the understanding of the aforementioned TSS algorithms, it can be inferred from Ledger’s official description that Ledger Key Recovery likely utilizes a {2,3} secret sharing scheme. The user acts as a Dealer and distributes private key fragments to three companies through secret sharing.

To preserve the experience of signing with a single private key, users continue to use ordinary single-key signing in their day-to-day operations. Only when the private key is lost do they request two of the three companies to recover it.

However, there are security risks associated with this approach:

  • If two out of the three companies collude, they can restore the user’s private key without authorization.
  • There is a risk of interception when the user distributes the private key fragments as the Dealer.
  • The algorithm used is not open source, making it difficult to assess potential security vulnerabilities.
  • There is a risk of KYC information leakage.

Given the current situation, we recommend that users exercise caution when using this feature. For a hardware wallet provider, it is advisable to minimize the transmission of sensitive information over the network during user interactions.

Regarding MPC solutions, it would be beneficial to employ audited open-source solutions that guide users to perform local backups of private key fragments or allow users to choose trusted service providers for private key fragment backups.

5. Final Thoughts

In this article, we have discussed the core technology principle of MPC wallets, known as Threshold Signature Scheme (TSS), and its potential benefits in terms of lowering user barriers and enhancing security.

While TSS can be a valuable tool, it is important to highlight that its implementation should be approached with caution. Issues such as the absence of zero-knowledge proofs or improper storage of private key fragments can still pose risks, leading to private key leakage and potential asset loss.

To ensure the utmost security, it is advisable for users to select MPC wallets that prioritize transparency and have undergone rigorous security audits. Wallet projects should also prioritize the security of TSS code and choose the most suitable TSS algorithm for their specific use cases.

Additionally, Numen Cyber has established itself as a disaster recovery shard storage provider for the Cobo MPC custody solution. As a trusted third party, we assist users in securely storing their disaster recovery private key fragments generated on the Cobo platform. Our experience in the B2B custody field has resulted in numerous successful cases.

We are also actively seeking partnerships with C2C MPC wallet providers, and aim to contribute to a secure and efficient wallet ecosystem through our expertise in security and disaster recovery capabilities.


Numen Cyber Labs

Numen Cyber Technology is a cybersecurity vendor and solution provider based in Singapore. We dedicate ourselves to Web3 security and threat detection & response.