Disintermediation is the removal of the middleman between the buyer and seller in a transaction, or between the source and recipient of information.
Decentralization is the process by which the activities of an organization, particularly those regarding planning and decision making, are distributed or delegated away from a central, authoritative location.
Much of the disruption of the last 20 years has revolved around attempts to topple bloated, inefficient central intermediaries. We went from booking our next vacation through a local travel agent, to the likes of booking.com, to search engines that take us directly to airlines having found the best combination of fares. However, most people would argue that what we have generally done is swap an old-fashioned, expensive intermediary for a more nimble, technology-powered one.
Enter decentralization — the true embodiment of disintermediation. Running independent nodes that are incentivized to perform standardized computations, in a way that can be verified across any number of nodes, is a great way to ensure that no one ‘party’ runs the system. Clearly, there are compelling cases where this would be an improvement over an environment where trust in the central party is eroding — currency being the most obvious, but there are also strong arguments for data storage, video hosting, even air traffic control.
But, as we know, there are trade-offs with decentralization: speed, scalability, efficiency, regulatory risk, and in many cases privacy all take a hit in the pursuit of disintermediating. So, can we achieve our goal of disintermediation without the trade-offs of decentralization?
Traditional analysis of disintermediation and decentralization within crypto has centered around a firm assumption — that to achieve disintermediation, one must first be decentralized. Disintermediation requires the removal of a middleman, and hence a centralized solution has largely seemed a paradox. Decentralization ensures no one authority has decision-making power; decisions are made by reaching consensus across independent nodes. So the choice has seemingly been:
- Having multiple ‘two-party’ channels of activity (e.g. A⟷B, A⟷C, B⟷C etc), i.e. everything bilateral. This removes the need for the central intermediary, but involves forgoing the ‘portfolio effects’ of centralization (a necessity in the trading business). For example, when trading firms are looking to execute derivatives trades, they may not get the best price or the best liquidity across counterparties, and if they do offsetting trades across B, C and D, they don’t get any sort of collateral offset. Even decentralization still results in bilateral channels of transactions and limited portfolio effects. For example, a Decentralized Exchange (DEX) may theoretically be able to aggregate liquidity and price whilst keeping trades bilateral; however, I still cannot get offsets across B, C and D without revealing my private trade data (or my portfolio) to the world, or to some central intermediary (the first would be undesirable, the second would defeat the point of decentralization).
- Centralization of liquidity and information. A trusted central person can hold the data and information of everyone, pool liquidity across multiple venues (e.g. a centralized exchange), and act as the party that will backstop all transactions, taking responsibility for the prices that are shown.
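The ‘portfolio effect’ lost in the purely bilateral option can be made concrete with a toy example. This is a minimal sketch, assuming a simplified flat margin model and made-up exposure figures (all names and numbers are hypothetical):

```python
# Sketch: margin posted bilaterally vs. margin on the netted portfolio.
# Margin model is deliberately simplified: margin = 10% of absolute exposure.

MARGIN_RATE = 0.10

# Firm A's exposures (in $) to three counterparties in the same instrument.
exposures = {"B": +5_000_000, "C": -3_000_000, "D": -1_500_000}

# Bilateral world: margin is posted separately against each counterparty,
# so offsetting positions give no relief.
bilateral_margin = sum(abs(e) * MARGIN_RATE for e in exposures.values())

# Centralized world: offsetting positions net down before margining.
net_exposure = sum(exposures.values())
netted_margin = abs(net_exposure) * MARGIN_RATE

print(f"Bilateral margin: ${bilateral_margin:,.0f}")  # Bilateral margin: $950,000
print(f"Netted margin:    ${netted_margin:,.0f}")     # Netted margin:    $50,000
```

The same $500k net exposure requires nineteen times the collateral when each channel is margined in isolation — which is why portfolio effects matter so much in the trading business.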
One thing to note: being decentralized does not guarantee disintermediation. For example, a DEX that acts as the smart contract owner and sole source of liquidity isn’t a great example of disintermediation (e.g. Bancor, according to historical analysis, or ‘Deg 1’ in the varying degrees of decentralization within ‘DeFi’). Outside of holding your own keys to your cryptocurrency, your trades pass through one main intermediary when you do trade. This is an example of decentralization without disintermediation. See the chart below (Source: Liquality)
Zero Knowledge technology challenges the assumption that disintermediation can only be achieved by decentralization. It removes the need for a middleman without scattering decision-making power across multiple parties. Zero Knowledge computation can be thought of as a black box, or blind worker, that’s customizable by parties ‘talking’ to each other through it, where decisions and computations are carried out according to rules agreed bilaterally. Essentially, decisions can be made programmatically without having to trust each other or a central party, but also without having to use a decentralized architecture.
Centralization has significant strengths: efficiency, speed, and the aggregation of liquidity and resources. In financial markets, we tend to get deeper, more liquid markets and secure trading, with less dependency on the financial health of your counterparties (it’s why, since Lehman, we have moved to a more central clearing model to reduce the systemic risk of liquidations). In credit scoring, centralization of data has proven to be a hugely efficient way of collecting and analyzing it, although it’s already apparent that there are huge privacy and security risks in holding such sensitive data.
This isn’t to say that decentralization is undesirable — sometimes, we really don’t want to trust anything or anyone. Speed and privacy perhaps aren’t the priority, the rules don’t need to be customized per transaction, and we want an immutable ledger that stores decisions made by truly independent nodes. Bitcoin is a great example of this — creating a new type of ‘store of value’ that trusts no one person or entity.
The point, however, is that there are cases where there isn’t a pressing need for decentralization, an immutable ledger or tokens. In these cases, we care greatly about the improvements that centralization can bring, but are looking to mitigate some of the inefficiencies associated with having a middleman. Theoretically, we could have a programmatic solution that allows two parties to specify and execute exactly what they are looking to do with each other. However, often, the main reason there’s an intermediary is to act as the gatekeeper of crucial private data that they’re not looking to share with the other side.
For example, let’s look at what a credit agency does:
The credit scoring agency here collates Bob’s private data and does the required calculations, before passing the computed output on to the relevant parties so they can make their decision. Whilst credit agencies do this programmatically, we still have to send that data to someone, and hence we trust the credit agency with collating, calculating and storing all the data, and with sending the relevant calculations to the appropriate parties.
With zero knowledge computation, we can do the same calculations on encrypted data, so no one has to see that private data — lenders can query it directly from Bob’s encrypted repository of data. We can push that (encrypted) data through one black box, and the lenders get the output of those calculations, all without compromising Bob’s privacy.
Because there is no longer a middleman responsible for the calculations, we have more flexibility; each lender is able to specify the precise credit scoring methodology they’d like to use and receive a credit score for Bob (e.g. f(x)₁ = Equifax methodology). Lenders and creditors successfully get the calculations they need without a central intermediary, and without Bob revealing private data to anyone.
Clearly this is a very simplistic look at credit scoring; we know there are more layers involved. However, it illustrates a simple point: we can disintermediate without needing to decentralize.
Zero knowledge calculations, for the record, don’t have to be centralized (see zk-SNARKs, the Enigma protocol etc). We can have multiple zero knowledge calculators, decentralized and using some privacy-enabled blockchain. Equally, we can have a centralized consensus of multiple zero knowledge calculators, or, even more blasphemous, we can have just one zero knowledge calculator. Our point is, we can disintermediate with a simple black box as above, remove the need to trust a middleman, and all without needing to decentralize. In our example, lenders are verifying Bob directly and specifying the methodology used, Bob is keeping his data private, and there’s no all-powerful, all-knowing intermediary in between (i.e. the relationship stays bilateral).
X-Margin: Disintermediating derivatives clearing, margin & settlement
Traditional clearing houses operate as the central intermediary to derivatives trades. The clearing house enters the picture after a buyer and seller have executed a trade. Clearing houses take the opposite position of each side of a trade, which greatly reduces the cost and risk of settling multiple transactions across multiple parties. In acting as the middleman, a clearing house provides security and efficiency. Regardless of the liquidity of my counterparties, I (hopefully) know that the behemoth sitting in the middle will be there to honor and settle any outstanding trades I have, whilst keeping my position details secret and secure from other trading firms and hackers. Centralization, in this case, makes things more efficient. As a trader, I want to trade with multiple people, but I want to use one pot of collateral; I don’t want to prove my liquidity and creditworthiness at all times to all those people, nor monitor that for all my counterparties, and then settle with each of them one by one. Bilateral trading is clearly a slower, more inefficient way of trading that introduces a great deal of credit risk.
Central Clearing > Bilateral Trading
Let’s say Trader Joe trades with 4 different people (assuming he is bored of running a conglomerate of grocery stores and is keen to get into the derivatives game):
The clearing house:
- Sees and evaluates all of Joe’s trades and positions
- Calculates (using pre-agreed methodology) Joe’s overall risk and margin exposure — i.e. taking into account trades with Dave, and then with Amy etc, and netting out the exposure across all of them. This is a massive efficiency gain. If we were to trade bilaterally, i.e. not involving a clearing house, Joe has some risk with Dave, and needs to post margin accordingly with Dave, and then the same with Amy, the same with Bob etc. A clearing house removes that friction by taking into account the overall position
- Ensures Joe has posted enough margin to cover all his current trades
- Settles all of Joe’s trades across all the different counterparties e.g. if Joe owes Amy $1m, and Amy owes Dave $2m and Dave owes Joe $1m, then Joe need not pay anything — the clearing house would just ensure that Amy pays Dave $1m.
- Is the central counterparty to Joe’s trades. Whilst Joe may go and negotiate a trade with Amy, his trade is actually versus the clearing house. If Amy goes bankrupt, Joe doesn’t care (partly because he is a heartless sod), he only cares if the clearing house is sufficiently capitalized and liquid.
[note that the above structure holds true for Amy, Dave, Bob & Jim too. The clearing house does the same for all its users]
All of the above, outside of being the central counterparty, boils down to one main property: knowing everyone’s private trade data. The clearing house knows everyone’s risk and capitalization at all times, how much they owe to whom, and how much credit each trader has left to use.
So, it’d be fair to ask: what’s wrong with this model? Clearly, when it works, it works really well — the London Clearing House cleared over $1 quadrillion of volume last year. And, for the user, the benefits are clear. If you want to trade liquid, standardized products, central clearing provides capital efficiency, security and easy settlement.
However, from the clearing house’s perspective:
- It is very capital intensive. To clear large amounts of volume, the clearing house needs capital to be the backstop to every trade
- It requires the clearing house to be regulated — being a designated clearing house (DCO) requires a great deal of oversight and onerous regulatory filings
- Involves credit/position risk — if a user’s position moves adversely against them, the clearing house, as the central counterparty, takes over the position, and there’s a chance it doesn’t recover enough funds from the bankrupt trading firm, or is unable to liquidate the positions quickly enough. This is why centrally cleared products are usually very liquid and relatively high volume
- It’s harder to scale — the central clearer needs to approve any new asset class it trades, needs to approve risk methodology used, mark-to-market mechanism, settlement method etc, as they are the central counterparty. A huge proportion of derivatives products therefore trade bilaterally, as there is no central clearer qualified and willing to provide central clearing for markets that are too small, too risky, too bespoke or too illiquid. This means, for the user, that’s a smaller suite of products that they can trade.
And so, for the user, all of the above boils down to:
- central clearing being a more expensive & less flexible way of trading relative to bilateral trading. It’s often worth the cost to trading firms if it’s available, largely to get capital efficiency and automated settlement. However, clearly, anything that would greatly lower these costs and inefficiencies would be desirable
So, let’s examine how X-Margin can offer the same service in a more efficient way:
With X-Margin, Trader Joe can trade literally any product, provided he has bilateral agreements with each counterparty that specify settlement terms, mark-to-market methodologies, and margin calculation methods. X-Margin works as the calculation agent in the bilateral agreement, administering those methodologies in a zero-knowledge way, effectively allowing Trader Joe to offer cross-margin to Trader Jim, and vice versa, without either of them needing to reveal their private positions with other counterparties. Custodians and banks simply receive instructions from X-Margin, and agree to lock away margin and settle funds accordingly.
Therefore, from the perspective of the X-Margin business model versus being a traditional clearing house:
- Trades remain bilateral, no need to be a central counterparty
- Scalable to any asset class, any margin methodology, any settlement methodology
- No credit risk — trades are liquidated instantly when margin usage exceeds limits
- No capital required by X-Margin — X-Margin is not the central counterparty and so requires no capital to facilitate capital efficiency and automated settlement
- No need for regulatory oversight — X-Margin is a zero knowledge calculation agent, simply applying rules agreed upon bilaterally. X-Margin does not need to hold client funds, it is not a matching engine and does not see its clients’ trades or positions. [X-Margin only receives encrypted position files, and never learns the traders’ positions]
- Extremely low cost — running costs, risk management costs and regulatory costs are significantly lower
So, naturally from a user’s perspective:
- Significantly cheaper to use than central clearing, no need to pay for an expensive middleman
- Much more flexibility than central clearing — users can now trade any asset class, any instrument, any margin methodology, and get the collateral benefits as if trading through a central clearer.
- Most impressively, the client chooses his or her own custodian — Trader Joe doesn’t need to park funds with bilateral counterparties, nor with a central clearer. He can pick his own custodian or bank (or multiple custodians and banks) and keep his funds there.
As you can see, there are some huge efficiency gains by disintermediating and using X-Margin’s margin calculation service. We can get nearly all the benefits of centralization without the inefficiency and expense of an intermediary.
Why do we say ‘nearly’? In the above version of X-Margin, we significantly improve the user experience relative to bilateral trading:
- automated settlement across counterparties and venues
- cross-margin and netting of risk across counterparties
- negated credit risk towards counterparty
- collateral stored in custodian of my choice
However, due to trades being bilateral, when one party goes bankrupt, the other side to that trade also loses their (probably profitable) position. This is standard in bilateral trading, but undesirable relative to central clearing.
Three’s not a crowd
X-Margin can theoretically be extended from bilateral to trilateral or multilateral agreements, where multiple large institutions are able to bid for bankrupt positions at a discount, only needing to commit collateral when a counterparty is close to bankruptcy, and choosing which instruments and asset classes they will backstop. To traders, the user experience suddenly looks a lot like central clearing, but without the cost and inefficiency of having one central clearing house.
This means bilateral and trilateral trading can run simultaneously within the same ecosystem. Bespoke and risky instruments can be traded bilaterally with automated settlement and cross-margin, and liquid, high-volume instruments can trade in a trilateral fashion. In essence we now get all the efficiency and cost benefits of X-Margin and disintermediation, combined with all the benefits of central clearing and centralization.
X-Margin > Central Clearing > Bilateral Trading
Getting back to the point
There is a subset of industries where central intermediaries are providing services that boil down to simply knowing your private data, and sending computations of that data to the relevant people. In those cases, zero knowledge technology can be harnessed to achieve efficiency through disintermediation, without the privacy, security, scalability and performance trade-offs that often need to be made for decentralized technology.