Multi-Party Computation powers Enterprise-grade Wallet Design
MPC encodes complex, customizable business logic at the application layer
Multi-Party Computation is hardly a household name today, but as sophisticated encryption and robust information-sharing platforms become widespread, MPC could well power a quantum leap for the IT industry. It facilitates game-changing modular capabilities needed for sophisticated identity wallets, multi-party signature and approval schemes, usable key management, and resilient governance structures. While encrypting data “at rest” and “in transit” has been an orthodox security measure for decades, data has traditionally been decrypted in memory for computations and transformations. MPC’s emerging real-world utility lies in situations where these operations can be run on encrypted inputs rather than decrypted ones, so that the source information is never exposed to the computation’s immediate context, nor to the various system users and participants in a multi-party computation, pushing the envelope of encryption-based privacy.
At Spherity, we are using MPC key management for our Cloud-edge SaaS Identity Wallet solution.
In this brief overview, we’ll explain how it works and then walk through the three main use cases we see for it in the kinds of cloud identity wallets we build for our current and future clients.
The Basics of Multi-Party Computation
[Note: if your memory of cryptography fundamentals is a little rusty or cloudy, it might help to review our layperson’s introduction first.]
Multi-Party Computation is a family of cryptographic techniques that, as the name implies, involves multiple parties with their own private keys and encrypted data. It evolved out of the decades-old and evocatively named “mental poker” family of cryptographic puzzles, which involve establishing trusted processes over long distances without having to trust intermediaries (sound familiar?).
The original application of these methods and structures involved querying or transforming data from multiple parties without exposing individual data points or datasets. Imagine, for example, a magical black box that discreetly adds up the ages of everyone in a crowded room without anyone learning any of their neighbors’ exact ages. The more people are in the room, the more each person’s privacy is preserved. This example might sound a little frivolous, but in the context of an identity system shared by competing supply chain actors, or a database of sensitive medical information which no one should be able to see even in “debug mode” or “god mode”, the utility is much easier to see.
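To make the crowded-room example concrete, here is a minimal sketch of additive secret sharing, one of the simplest building blocks behind MPC: each participant splits their age into random shares that sum to the true value modulo a public prime, hands the shares around, and only share totals are ever published. The three-party setup and variable names are purely illustrative, not taken from any particular MPC library.

```python
import secrets

PRIME = 2**61 - 1  # public modulus, large enough to hold any realistic sum

def share(value, n_parties):
    """Split `value` into n random additive shares that sum to `value` mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three participants with private ages; nobody reveals their own number.
ages = {"alice": 34, "bob": 29, "carol": 41}

# Each participant splits their age and sends one share to every participant.
distributed = {name: share(age, len(ages)) for name, age in ages.items()}

# Each participant locally sums the shares they received...
partial_sums = [
    sum(distributed[name][i] for name in ages) % PRIME for i in range(len(ages))
]

# ...and only these partial sums are published and combined.
print(sum(partial_sums) % PRIME)  # 104: the total, with no individual age disclosed
```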
Another technical term worth learning here is “key sharding,” sometimes abbreviated to “sharding” when the context is clearly the sharding of encryption keys and not of databases or of file systems. Sharding involves breaking up a private key, or other important secrets expressible as strings, into smaller pieces called “shards” or “shares”, which are individually useless but can be recombined to recompose the original if enough pieces are present.
This “threshold quorum” is an adjustable parameter, chosen alongside the total number of shards when the key is divided. Since all these numbers and their proportions can be adjusted, this method is sometimes referred to as “N of M” signatures, requiring N or more of the M total shards into which the original key was divided. This is very similar to Shamir Secret Sharing, often called “Shamir” for short (after its inventor, Adi Shamir). However, Shamir Sharing requires the original key to be recomposed or recalculated from its shards and then applied to sign or decrypt data in a separate step, whereas in MPC the signing or decryption operation cannot be intercepted to reveal the key, which is never recomposed, not even in process memory. This additional layer of security is why MPC has garnered so much interest from the cryptocurrency community, where the stakes of information security are paramount.
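For contrast, here is a compact sketch of the Shamir-style “N of M” threshold described above, using Lagrange interpolation over a prime field (the field size and helper names are illustrative; real deployments use hardened, standardized libraries). The crucial difference noted above is that this sketch reassembles the secret in memory at the end, which is precisely what MPC-based signing avoids.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime; real systems use standardized fields

def split(secret, n_shards, threshold):
    """Encode `secret` as n points on a random polynomial of degree threshold-1."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shards + 1)]

def recombine(shards):
    """Lagrange interpolation at x=0 recovers the secret from any `threshold` shards."""
    secret = 0
    for i, (xi, yi) in enumerate(shards):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shards):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)               # stand-in for a private key
shards = split(key, n_shards=5, threshold=3)
assert recombine(shards[:3]) == key          # any 3 of the 5 shards suffice...
assert recombine(shards[2:]) == key          # ...regardless of which 3 they are
```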
Use Case 1: Sharding for digital asset custody & key management
The application of this technology that has seen the most use since it matured, was battle-tested, and was commercialized has been in the cryptocurrency space, where it powers semi-custodial and key-recovery services for so-called “digital assets”. Technically, a great variety of things could be considered digital assets, but people usually use this term to refer specifically to ownable digital assets controlled by some form of private key or other “secret”. Similarly, “digital asset custody services” almost always refers to custody of the private key or secret controlling the assets. In many ways, both the terminology and the business practices are largely inherited from older custody practices traditional to the banking sector.
The best way to understand how shards and thresholds can secure an asset is the classic analogy of the “safe deposit box”: in this 19th-century technology, a very small storage space is rented inside a vault and only opens when two or more different keys are inserted into two or more different locks simultaneously. One key is traditionally held by the renter and one by the bank, usually in the hands of a trusted, senior employee particularly qualified to operate discreetly on behalf of the bank itself. Depending on the bank and its legal jurisdiction, a third key might or might not exist, held by regulators, another bank or another branch of the same bank, or even by law enforcement or national security actors. These additional keys might be reserved for emergencies, for replacement, for cases of fee nonpayment, etc.
Extending this analogy to the digital world, a wide range of configurations is possible for these kinds of thresholds and distributions; indeed, one bank or custodial service might distribute a different number of keys to different stakeholders in each jurisdiction, or even configure each client’s custody differently. Cryptocurrency platforms such as Binance and KeyFi use MPC to offer customers guarantees on their custodial and key-recovery solutions, and even for complex low-trust staking schemes.
It is worth emphasizing here that the private keys that control identities in decentralized identity systems are, in almost all cases, identical, technically and mathematically speaking, to the keys that control cryptocurrencies. They are stored in the same formats and generated by the same cryptographic primitives, with the same cybersecurity risks and governed by the same engineering standards. The Crypto-Asset Security Platform (“CASP”) libraries that the Israel-based cryptography firm Unbound developed for cryptocurrency exchanges are actually essential to all the MPC capabilities of Spherity’s identity wallet. These allow similar configurations for custodial and distributed key management, which secures stored keys even in the exceedingly rare case of compromised cloud security or data integrity.
Another adjacent field where private key distribution and management is of pivotal importance is software versioning and codebase verification. Systems like git rely on signatures generated with secret private keys along an audit trail to confirm software’s exact evolution, authorship, and tamper-proof distribution. Much as hardware is often uniquely identified to make its firmware tamper-proof in so-called Hardware Security Modules (HSMs), so too can sharded keys be used to verify the authenticity of software in so-called “Virtual HSMs”, Secure Enclaves, and other self-validating entities within a complex topology.
Use Case 2: Advanced Key Management & Business Logic
Of course, the most powerful thing about CASP-based key sharding is that its end-users don’t have to know anything about it to use it. Not just the complex mathematics but even the multiple moving parts can be kept “under the hood,” so to speak, with end-users blissfully ignorant of how much complexity is entailed when they press the button for “customer support” upon losing access to their identity. The shard threshold is an esoteric detail from the end-user’s point of view: they just want to know if key recovery involves a string or key kept in cold storage, or if it requires multiple third parties to sign off and log the recovery.
The flexibility of defining N and M specifically for each system and use case allows a plethora of key management options beyond the classic hot/cold and custodial/non-custodial decisions. One account can be controlled by any 2 of 5 keys, to enable two-factor authentication (say, a mobile phone biometric and a password, or a unique hardware device like a keycard and the approval of a system moderator).
High-security or high-stakes transactions, for example, might need to be signed by at least one internal and at least one external identity to protect against certain kinds of cyber attacks. They might be approved but not “ratified” until signed off by a supervisor and processed the next business day. This “ratification” might even be accomplished by an automated algorithm that factors in security or behavioral trust metrics, adding a fourth signature and supplemental diagnostic/security data to the audit trail.
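To make this concrete, here is a minimal sketch of how such approval logic might be expressed at the application layer. The role names, quorum size, and “ratification” rule are illustrative assumptions for this post, not the actual API of Spherity’s wallet or Unbound’s CASP libraries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    signer_id: str
    role: str  # e.g. "internal", "external", "supervisor", "risk-engine"

def transaction_ready(approvals, min_signatures=2):
    """A high-stakes transaction needs a quorum that spans internal and external signers."""
    signers = {a.signer_id for a in approvals}
    roles = {a.role for a in approvals}
    return len(signers) >= min_signatures and {"internal", "external"} <= roles

def transaction_ratified(approvals):
    """Ratification additionally requires a supervisor and the automated risk engine."""
    roles = {a.role for a in approvals}
    return transaction_ready(approvals) and {"supervisor", "risk-engine"} <= roles

approvals = [
    Approval("employee-17", "internal"),
    Approval("auditor-2", "external"),
]
print(transaction_ready(approvals))     # True: quorum reached, boundary spanned
print(transaction_ratified(approvals))  # False: still awaiting supervisor and risk engine
```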
The degree to which business logic can be abstracted and integrated at the application layer goes further than just identity management and the fingerprinting of business transactions. It can automatically create audit trails for many oversight structures, data cleaning, business processes, and data management more generally. Sharding and MPC can be applied not only to the keys that control identities (i.e. DIDs), but also to the keys that issue and access Verifiable Credentials (VCs). They can also automatically wrap various kinds of data in VCs to make internal processes more auditable and to make the “identities” involved more traceable, including nonhuman identities like software versions and self-training algorithms.
One key application here allows multiple identities to come together in the process of credential issuance. For instance, a single, highly public institutional identity might be the easily verified signature of record on all certificates. Take, for example, diplomas: signed by an institution, but issued, in a more traceable and granular sense, by a specific staffed office, the university “registrar”. In practice, a diploma might need to be processed by any current employee of the registrar’s department, and double-checked by a dean or another authority at a later date before its official issuance. All of these specific individual identities could be registered in the digital “fine print” of the diploma, in metadata, or elsewhere in the audit trail, without compromising the straightforward verifiability of the diploma itself as authentically issued by that university. Whether this audit trail is encoded publicly on the VC or elsewhere in the system as a kind of logging by-product is an implementation choice like any other; MPC enables both.
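As a sketch of what such a diploma credential might carry, the structure below pairs a single institutional proof (easy to verify) with the contributing individual identities recorded as evidence metadata. The layout loosely follows the flavor of the W3C Verifiable Credentials data model, but the specific fields and DIDs are placeholders, not a normative or Spherity-specific schema.

```python
import json

# Illustrative credential: one institutional signature of record, plus an
# audit trail of the individual identities involved in issuance.
diploma_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:example:university",            # the public identity of record
    "credentialSubject": {
        "id": "did:example:graduate",
        "degree": "MSc Computer Science",
    },
    # Whether this evidence travels inside the VC or stays in internal logs
    # is an implementation choice, as noted above.
    "evidence": [
        {"type": "IssuanceApproval", "approver": "did:example:registrar-staff-42"},
        {"type": "IssuanceRatification", "approver": "did:example:dean-of-faculty"},
    ],
    "proof": {
        "type": "Ed25519Signature2020",            # a single, easily verified proof,
        "verificationMethod": "did:example:university#key-1",  # produced collaboratively via MPC
    },
}

print(json.dumps(diploma_credential, indent=2))
```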
Use Case 3: Querying and aggregating encrypted, private data
Our last use case brings us back to the original case of “mental poker”, or the example of totaling the ages of everyone in a crowded room, but in a more 21st-century form. Traditional encrypted databases are secured at the level of access rights to entire tables or databases, since queries can only run on data that has been decrypted; MPC-protected databases can be queried without the querying user having access to the individual underlying records, which can each be encrypted to separate keys.
MPC can thus distinguish “querying rights” from access rights, with the former made conditional or “thresholded” to prevent the underlying data from being queried narrowly enough to endanger the privacy of individual records. In fact, not only can queries be “thresholded” to observe a minimum sample size, these querying rights can also be attenuated or managed in other ways, such as assigning them to transferable or session-specific tokens that enable their bearer to query (without accessing) an encrypted data source. MPC performs “secure computation” directly on these encrypted databases.
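Here is a minimal sketch of that idea: a query gate that checks a (hypothetical) bearer token for querying rights and refuses any aggregate computed over fewer than a minimum number of records. For readability it sums plaintext values; in a real MPC deployment the aggregation itself would run as a secure computation over encrypted shares, as in the age example above.

```python
MIN_SAMPLE_SIZE = 10  # aggregates over fewer records are refused outright

class QueryDeniedError(Exception):
    pass

def thresholded_sum(records, bearer_token, granted_tokens):
    """Return an aggregate only if the caller holds querying rights and the
    sample is large enough to protect individual records."""
    if bearer_token not in granted_tokens:
        raise QueryDeniedError("token grants no querying rights for this dataset")
    if len(records) < MIN_SAMPLE_SIZE:
        raise QueryDeniedError("sample too small: individual records could be inferred")
    # In production, this step would be a secure computation over encrypted shares.
    return sum(records)

weekly_unit_sales = [12, 40, 7, 33, 25, 18, 9, 51, 14, 22]  # one value per store
print(thresholded_sum(weekly_unit_sales, "token-abc", granted_tokens={"token-abc"}))  # 231
```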
The power of this kind of privacy protection has many applications and use cases, but one that highlights the “horizontal privacy” so essential to supply chains and marketplaces is inventory management. In retail, there is a tiered vertical structure (supplier, manufacturer, wholesaler, retailer) and horizontal competition among peers at each level, so how can a brand study the system as a whole without forcing actors to reveal information that could be leaked to their competitors? Take, for example, the retail layer: stores that compete for customers and sales might not want their direct competitors to know how quickly they are selling a given product, or a given batch or model of that product, or at what price.
However, many business processes in the brand itself might require it to run global sales analytics in real time, querying in aggregate how well a given product is selling, or how those sales rates correspond to individual stores exercising their rights to confidentially discount or bundle the products. Individual stores might be able to opt in or out of “horizontal queries,” or grant one another tokens to query their inventories under certain conditions. These kinds of technologies combine search and discovery functions with the complex privacies required by modern supply chains. These examples show how access can be attenuated and made granular to preserve more or less horizontal privacy while safely aggregating and abstracting analytics at a higher level.
This has countless real-world use cases, and those use cases proliferate steadily as more and more data comes to be considered private or personal and thus worth encrypting. Indeed, as GDPR comes into full force and a wave of similar laws are passed worldwide, the incentives are shifting. Amassing “lakes” of data that can be correlated or de-anonymized has rapidly gone out of fashion, as too much faith in imperfect “anonymization” techniques can mask a massive financial liability. Enterprises are increasingly researching higher standards of anonymization, and simultaneously seeking ways to make data harder to exfiltrate.
But the only way to amass data that is resilient to exfiltration is both to fragment it cryptographically and to distribute the keys needed to recompose and access it as widely as possible. To be clear, using a unique key for every row in a database and storing those keys in another database visible to anyone with the same rights as the first makes no difference: only a trivial amount of friction is added by storing these keys elsewhere in the same system. Instead, the only convincing strategy for avoiding these rapidly growing liabilities is for unique keys, specific to each data subject’s data, to be held by the data subjects themselves. MPC offers one way of limiting access to aggregate data, such that it can only be queried by internal agents, permissioned in advance and only for secure computation at a given scale.
Medical records, particularly records about clinical trials, are one particularly urgent example where individual privacy needs to be preserved, yet aggregate and fully anonymized access to that data can advance research and save lives, particularly if there is an ethical and privacy-preserving way to link it to metadata about the subjects of that data. Of course, the devil is in the details, and engineering the tradeoffs and thresholds is still a fine art that will take decades to refine, even with all the additional capabilities opened up by secure computation and correlation-resistant identity systems.
Machine learning algorithms that refine themselves and find new criteria and external data to bring to a data set with each iteration of analysis would be particularly powerful in this regard, since these new queries might access different cross-sections of the data or process it differently, requiring the maximum open-endedness and access. Designing data systems to preserve privacy and cybersecurity while allowing this kind of linkability to external data requires consent to be both granular and open-ended. Ultimately, this kind of engineering of consent requires as much legal and governance research as technical research; but it is a fascinating frontier for humanity, for which the groundwork must be laid now.
Further work: tomorrow’s capabilities and today’s design processes
The evolution of advanced key management systems and the choice of exactly where in the software to best execute multi-signature logic are open questions being explored and debated across the industry. Another major philosophical debate concerns exactly how privacy-preserving operations can be logged and what this metadata logging could contribute to audit trails.
But wherever a given vendor or enterprise lands on these questions, the resulting MPC capabilities are superpowers that will prove essential to the usability hurdles and adoption timelines of decentralized identity. We believe these capabilities will “sweeten the deal” to unlock digital-only and digital-first businesses, paving the way for 21st century governance for all of society. We are excited to be members of the MPC Alliance and to be in dialogue with the greater SSI community about where MPC fits into the conversations about security, usability, and governance.
But while MPC opens up a lot of exciting research topics and uncharted waters on the side of identity and governance, it is also a stable and road-tested technology today thanks to its early and prioritized adoption in cryptocurrency use cases. We have been using Unbound’s MPC libraries in our custom identity wallets for over a year, and it will continue to be a core capability of our custom implementations and whitelabel products in the future.
If you have any questions about where multi-signature logic could empower your use case, feel free to reach out and set up a demo. You can also follow us on LinkedIn or sign up for our newsletter.