The cost of not trusting: thoughts on governance

I want to preface this post by crediting Dan Ariely for sparking my thinking on this topic during a conversation we had.

The crypto world promotes “trustless” as an ideal, which is unsurprising given that Satoshi’s white paper deems the elimination of the need to trust a third party a core value proposition of Bitcoin. Applying quantum thinking to this topic, I would like to challenge the idea that we can completely remove trust from the equation, and propose instead that blockchain technologies be used to enable more trust between counterparties.

From an economist’s perspective, trust is actually essential for growing the economy. In the absence of trust, many value-creating transactions simply wouldn’t happen.

A simple example of this: I am selling you a t-shirt online. However, you don’t trust that I will send it after you pay, and I don’t trust you enough to send the shirt before you pay; therefore, the transaction never takes place and the market for t-shirts is never established. Then an e-commerce platform comes along to facilitate this trust by holding funds in escrow until receipt is verified, and by promising recourse against either seller or buyer if they misbehave. Now we make the transaction, but the platform takes a cut for establishing the market. All participants have to trust that the platform will do the right thing, and it extracts significant value from the ecosystem; this is an arrangement the crypto community is increasingly uncomfortable with. Then blockchain comes along and allows us to program the verification of receipt and the escrow into a smart contract, so that we can transact without trusting a third party. This elegant technological solution can help establish many new markets and even allow certain economies to leapfrog financial systems where there hasn’t been a credible third party to establish societal trust.
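To make the escrow idea concrete, here is a minimal sketch in Python of the state machine such a smart contract might encode. The class, the party names, and the three-step flow are illustrative assumptions for this post, not any particular platform’s API:

```python
from enum import Enum, auto

class State(Enum):
    AWAITING_PAYMENT = auto()
    AWAITING_SHIPMENT = auto()
    AWAITING_RECEIPT = auto()
    COMPLETE = auto()

class Escrow:
    """Holds the buyer's funds until the buyer confirms receipt."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.state = State.AWAITING_PAYMENT
        self.balance = 0

    def deposit(self, sender, amount):
        # Only the buyer may fund the escrow, and only with the exact price.
        assert sender == self.buyer and amount == self.price
        assert self.state is State.AWAITING_PAYMENT
        self.balance += amount
        self.state = State.AWAITING_SHIPMENT

    def confirm_shipment(self, sender):
        assert sender == self.seller and self.state is State.AWAITING_SHIPMENT
        self.state = State.AWAITING_RECEIPT

    def confirm_receipt(self, sender):
        # Only the buyer's confirmation releases the funds to the seller.
        assert sender == self.buyer and self.state is State.AWAITING_RECEIPT
        payout, self.balance = self.balance, 0
        self.state = State.COMPLETE
        return payout
```

The happy path is straightforward; note, though, that the sketch already begs the question the rest of this post raises: what should `confirm_receipt` do when the buyer simply never calls it?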

The idea is that participants in blockchain networks trust the technology so fully that they implicitly trust the entire network, and in the most ideal case this could be thought of as an evolution of trust. In reality, however, especially at this stage of the technology’s development, we cannot expect the code to account for all edge cases, and this is what worries me. Adhering too strictly to the trustless ideal allows network participants to offload personal (or community) responsibility for unethical behavior because it is supposedly “programmed into the code”. This kind of “code is law” thinking puts us in a position where everyone is expected to be cold-hearted and rational, and exploitation becomes the norm. The trust and ethics that govern the edge cases dissolve altogether. When we’re talking about mere software, the impact is limited, but since we are designing full economies from scratch with blockchain technology, this is a dangerous precedent to set for both the technology’s future and our society’s. Social incentives play a major role in governing our behavior today, and we can’t possibly account for them all, so when we try to replace them with economic systems, we may fail to appropriately incentivize certain behaviors that are critical to the system, or to disincentivize behaviors that could cause its collapse. Behavioral economists have also found that when you introduce a financial incentive, the social incentive sometimes falls apart altogether and cannot be reestablished.

In the t-shirt example above, though it appears no trust is required in the transaction, there are always edge cases. What if the shipment gets lost? Who is responsible? What if the t-shirt contained a component hazardous to your health, but the seller didn’t know about it? What if they did? What if the buyer received the shipment but claims they didn’t? Since there is no central party to hold anyone accountable or absorb the risk, this transaction actually requires a great deal of trust. You need to believe either that the person you’re trading with would suffer mental discomfort from screwing you over, or that they understand they hurt themselves by corrupting the entire ecosystem and decreasing trust in the system. A physical-goods example is perhaps imperfect, and some of these issues, translated to digital transactions, can probably be solved by code and government regulation; but my point is that there are always unforeseen edge cases that cannot be predicted, and they must be resolved through trust.

What makes me optimistic, however, is that from the economist’s perspective the virtuous (or trusted) participant also has a competitive advantage in the market: they are likely to attract more transactions, because dealing with a lack of trust is costly and complex. Some economic historians argue that one reason the Quakers played such an important role in the 18th-century British economy is that they were known to be trustworthy. A few virtuous actors can anchor a market because others will mimic them in order to win business. The economics literature contains many studies of reputation games, and the gist is that if we enable a transparent reputation system, then over time all players have an incentive to behave well, virtuous or not.
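That gist can be illustrated with a toy simulation. The payoffs and the reputation-halving rule below are assumptions I chose for simplicity, not numbers from any particular study: buyers always pick the seller with the best public track record, so a one-off cheat costs all future business.

```python
import random

def simulate(rounds=100, seed=0):
    """Toy reputation game: each round a buyer chooses the seller with
    the best public reputation (ties broken at random). An honest sale
    pays 1; cheating pays 1.5 once but halves the cheater's reputation."""
    rng = random.Random(seed)
    sellers = {"honest": {"rep": 1.0, "payoff": 0.0},
               "cheater": {"rep": 1.0, "payoff": 0.0}}
    for _ in range(rounds):
        best = max(s["rep"] for s in sellers.values())
        name = rng.choice([n for n, s in sellers.items() if s["rep"] == best])
        if name == "cheater":
            sellers[name]["payoff"] += 1.5  # short-term gain from cheating
            sellers[name]["rep"] /= 2       # transparent reputation drops
        else:
            sellers[name]["payoff"] += 1.0  # honest sale
    return {n: s["payoff"] for n, s in sellers.items()}
```

In this setup the cheater can win at most one sale before buyers stop choosing them, so honesty dominates over any reasonable horizon. The transparency of the reputation is what does the work, which is exactly the property a public ledger could provide.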

I believe that a technology’s impact on society is greatly influenced by how we choose to adopt it, and some of the decisions we make right now are fundamental to how, and how much, crypto will change society in the future. As the technology prepares for the mass market, it will be key to build more trust into these networks: trust that we will all act with integrity where lines are blurry and code is broken. That will actually speed up adoption.