Trust : Trusted Third Parties (TTPs), Human Nature & Conflicts of Interest
Binge-watching Stranger Things over the weekend, I couldn’t help but reminisce about the thrills of biking through the American suburbs as a kid. Save for knocking on doors, the whole town was a playground where we could roam freely because everyone knew each other. Those would have been such fond memories, except that I was born and raised in a Malaysian city, where cycling meant braving not only the immense distances and the not-so-friendly stray dogs but also getting lost in a sea of strangers, with Google Maps still a few years away. My social anxiety dictates that if the latter were to happen, I would have been as good as dead.
In all seriousness, implicit in all depictions of such communities is that trust is based not only on reputation but also on the simplicity of smaller communities. If I wanted to do anything, be it buying something or even just going for a walk, I would come face-to-face with what would very quickly become familiar faces. The chances of any individual committing fraud are greatly reduced as, short of uprooting himself and leaving behind all his possessions, that individual would be labelled an outcast for life. In this case, without an explicit system, the role of the TTP is effectively carried out by the community itself through a reputation system.
This is distinctly not the case for many of us who grew up in cities, where the chances of recognizing a familiar face shrink disproportionately with population growth. As a population grows, it is not just that the ratio of strangers to recognizable faces in a crowd increases; more importantly, our brains cannot keep up with the overload of faces. Even if we could, time would be a limiting factor when it comes to learning the stories behind those faces. Consequently, we tend to tune out the crowd and keep to our own social circles. As such, the ability to “get lost in a crowd” provides an ideal environment for malicious actors to operate. Cities would cease to exist if there weren’t a way to hold such actors accountable. This is where the role of TTPs becomes indispensable: individuals are assured of their rights and possessions by keeping track of a few institutions (government, banks, businesses) instead of being overloaded with personal information.
It is important to note that although reputation correlates very closely with trust, it is the subtle differences between the two that make a world of difference when discussing trusted technology. These differences are even more pronounced the larger the network, as trust in the individual gets increasingly displaced by trust in amorphous systems. This paper by Olnes provides a good distinction between the two types of trust and a good overview of trust systems in general. They are as follows:
- Technical trust is one where individuals are assured that the system works as anticipated (reliability), is protected against attacks (security), and protects the interests of the user (safety)
- Organisational trust is that which is placed on the honest intent and willingness to co-operate of other actors/users of the system
In other words, it is the difference between trust in impersonal objective systems and unpredictable subjective actors. As such, by definition, honest intent has no place in completely trustless systems.
Why TTPs just work
From the perspective of the majority today, TTPs make life easier as they effectively transfer reputational trust from an individual to an organization. This is why names such as Deloitte, EY, KPMG, and PwC ring a bell: they are the names tagged to entities whose operations remain unknown to most. Essentially, we do not need to know the who or the how, only that the outputs from such organizations, be it the Big Four or any other company, are valid. The important distinction is that these outputs are now being generated by what is effectively a black box with its own set of rules and systems. In effect, the reassurance that the TTP vouches for the other party enables us to establish indirect trust.
The existence of such TTPs is not inherently a bad thing; the trouble arises when the friction costs of moving to an alternative are too high, or when no alternative exists in the first place. This can be seen to a certain extent in the audit industry, as many MNCs require such reports to be prefaced with the logo of one of the Big Four (in many cases by law) or else risk being doubted. Reputation is the lifeblood of auditing firms, and no one will work harder than these companies themselves to ensure that their main asset is not devalued.
However, herein lies the potential for a conflict of interest: an industry made up of a few big players will want to shield those players so as not to reduce trust in the overall system itself. This commentary does a good job of summarizing why the auditing industry should not be left to its own devices to ensure independent audits. It only takes a few bad apples to ruin a company’s reputation, following which there are only two options:
- Fine and expose the bad actors who are effectively representatives of the industry. In the process, trust in the industry itself is dealt a severe blow. Even if such a route is chosen, lowering the number of players in an oligopoly will likely lead to more severe consequences down the line.
- Deal with the matter internally or push for a confidential settlement. In this case, anyone outside the industry is none the wiser and business goes on as usual.
It doesn’t take a degree to see which is the more attractive option from the perspective of an oligopoly built on trust. The problem is that even if the industry itself is genuine about its business, we have no way to tell, given the self-referential nature of the industry (auditing firms audit each other, after all). As always, the issue isn’t black or white but rather multiple shades of grey. What is clear, though, is that no industry should function on reputational trust or honest intent alone. This is where decentralized technologies will have an important role to play by introducing technical trust as a check on the reputational trust which humans are more naturally inclined toward.
The Uncanny Valley of Trust
For those familiar with the uncanny valley phenomenon, it is a theory that is getting more attention with the rise of humanoid robots. It describes the feeling of discomfort that arises when a non-human object very closely approximates a real human being but falls short of completely tricking the human brain. In a similar vein, I believe trusted systems will provoke similar feelings of discomfort with the rise of decentralized technologies, which are starting to demonstrate proven use cases but are not quite there yet.
For many of us blockchain enthusiasts, it is easier to adopt such technologies as there is both a strong push from the repulsion of corrupt institutions and, to a lesser extent, a pull from a relatively informed belief in the implications of such technologies. However, for the large majority of the world’s population, the jump from reputational trust to technical trust is going to be a difficult one, given that current TTPs are doing a sufficient job greasing the gears of daily life. This is even more so considering that such a move would mean there is no single party responsible for on-chain operations if anything goes wrong. Such abstract technologies require their users to cede control to an amorphous, faceless black-box entity, and these are exactly the two things humans hate the most: fear of the unknown and losing control.
This paradigm shift is unprecedented in human history, and considering that certain swathes of the population still distrust their computers, human resistance to such abstract changes will be even harder to overcome. As with all new innovations, mainstream adoption will depend on early adopters recommending proven products to the technologically less inclined, who by definition always form the majority. What this means is that prior to mainstream adoption of completely decentralized technologies, the tech itself will need a strong community base whose reputation is able to overcome the majority’s fear of change. And as hinted at above, the most likely way this will happen is via current TTPs, as they have an out-sized reputation.
“Trust takes years to build, seconds to break, and forever to repair”
It is also important to note that trust by itself is not a guarantee of future behaviour. There is a reason why the phrase above has withstood the test of time. One less-considered implication of the quote is what happens if a malicious party never intends to repair this trust in the first place. Herein lies the problem with current TTP infrastructure: it relies on a “trust first, verify later” perspective, whereas logically, trust should only be established following the completion of the event.
This is one big advantage which blockchain technology has over current infrastructure: its consensus protocols enable parties to conduct business without either one having to trust the other, be it directly or indirectly via TTPs. The mechanisms through which this is achieved are a whole other story, but the possibility of such an architecture raises the more important question of in which use cases the benefit of technical trust outweighs that of reputational/organisational trust.
In purely rational terms, such technology should be implemented wherever there is a possibility for malicious actors to abuse the trust placed in them for personal gain. However, as highlighted above, there are a few problems with this approach:
- Fear of the unknown: Reputational trust is a requirement for mainstream adoption as the majority will not and should not be expected to understand the inner workings of such systems;
- Conflict of interest: Save for an industry-crippling breach of trust, the interests of those with an out-sized reputation are in direct conflict with the benefits promised by decentralized technologies;
- Abstract benefits: Decentralized technologies do not necessarily have to outperform their centralized counterparts to be a better alternative. The problem is that the practical benefits of decentralized technologies tend to be more abstract, while centralized technologies will, in theory, always outperform them on critical quantifiable metrics such as transactions per second and confirmation time;
- Human fallibility: Blockchain is essentially a set of instructions written by humans and as such will inherit our imperfections. Even though software is rapidly eating the world, decentralized blockchain technology deals with matters of finance and knowledge, two areas where breaches carry heavy consequences. As such, even perfectly written decentralized code will take time to establish itself as bulletproof.
- Enforcement of code: At the end of the day, even when such technologies do become the ‘single source of truth’, they are still essentially digital records which individuals can choose to follow or disregard in the real world. Until code is able to enforce itself via real-world interfaces (think robots and IoT), enforcement of code is still dependent on TTPs.
It is important to note that even though trust is a subjective measure informed by reputation and/or experience, accountability is largely solvable through event logging. This leads to a vital observation when determining the practical benefits of distributed ledger technology:
Simply put, if we trust a party, it can simply maintain a database with all its records and report them upon request, without the need of a global and distributed ledger. The added benefit for the company is that it can further control access to its data, which is significantly harder to achieve when taking part in a global distributed system. — ABB Corporate Research
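The observation above about event logging can be made concrete. The minimal building block of technical trust is a tamper-evident log: each entry commits to the hash of the previous one, so any retroactive alteration breaks every hash that follows it and is detectable by anyone who replays the chain. A sketch in Python using only the standard library (the record names here are illustrative, not any real ledger API):

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous entry's hash together with the record's canonical JSON."""
    payload = json.dumps(record, sort_keys=True)
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(log: list, record: dict) -> None:
    """Append a record, chaining it to the current head of the log."""
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify(log: list) -> bool:
    """Recompute the whole chain; a tampered entry invalidates it."""
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"event": "audit_report_filed", "by": "firm_a"})
append(log, {"event": "report_amended", "by": "firm_a"})
assert verify(log)

log[0]["record"]["by"] = "firm_b"  # retroactive tampering
assert not verify(log)
```

This is exactly the ABB point in miniature: a trusted party can keep such a log privately and report on request; the distributed ledger only becomes necessary when we do not trust the party holding the head hash.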
The Goldilocks Principle
In theory, trustless systems are the ideal solution to many of the governance problems that arise when a network expands past a certain size. The rules and protocols put in place should provide mathematical guarantees on the behavior of actors in the system, as their best interests align with those of the system. Essentially, such technologies are asking us to place our trust in deterministic mathematical models and algorithms. This is logically sound, but by definition such models can never comprehensively capture every aspect of reality. Nevertheless, it is still a step in the right direction as it provides an alternative to the current system, where trust is overly dependent on reputation and honest intent alone.
As with most things in life, the optimal solution will likely lie somewhere between the largely centralized systems we see today and the completely decentralized future envisioned by idealists. This Goldilocks point will be highly context-specific but will largely be driven by the extent to which the potential damage from a breach of trust outweighs the operational costs of adopting decentralized technologies. From a systems perspective, the larger the potential conflict of interest, the stronger the push for such technologies should be.
Those who are more discerning will be right to point out that such costs will depend on multiple other TTPs, chief among them the government. This is where I would argue that even the use of private permissioned blockchain technology among a multi-party consortium will go a long way toward addressing systemic corruption, as casting the net just a little wider (n > 2) significantly reduces the chances of a rogue actor. This would greatly increase not only the security and efficiency of the system but, more importantly, trust in the overall system itself.
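The intuition that widening the consortium beyond n > 2 helps can be sketched with a toy model: if each validator independently turns rogue with probability p, and a fraudulent record needs sign-off from a strict majority, the chance of a successful collusion falls off quickly as the consortium grows. The independence assumption is of course a simplification (real collusion is correlated), so treat this as a back-of-the-envelope illustration only:

```python
from math import comb

def collusion_probability(n: int, p: float) -> float:
    """Probability that a strict majority of n independent validators
    are corrupt, each corrupt with probability p (binomial tail)."""
    k_min = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(k_min, n + 1))

# With p = 0.1 per party: a single gatekeeper fails with probability 0.1,
# while a 5-party consortium needs 3+ corrupt members to pass fraud.
single = collusion_probability(1, 0.1)
consortium = collusion_probability(5, 0.1)
assert consortium < single / 10  # over an order of magnitude safer
```

Even this crude model shows why a consortium's technical trust compounds: each additional honest party multiplies the difficulty of assembling a fraudulent majority.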
For certain use cases such as stores of value and personal data ownership, public blockchain technology does seem like the right way forward due to the guarantees it provides around self-determination. However, as mentioned above, the path to completely decentralized systems will be paved with TTPs adopting such technologies first, given their out-sized reputations and humans’ natural inclination toward reputational trust. The development of new technologies such as zero-knowledge proofs, trusted execution environments, and code obfuscation will be exciting as it opens up alternative models where conflicts of interest can be further minimized. This is ultimately what we should strive for, as it is what makes our systems that much more resilient to corruption and fraud.
Thanks for reading, this was just me trying to organize some of my thoughts around the trust debate. I would love to hear your thoughts so please do drop a comment :)