Beyond the BORG: How to Build Real DAOs

Redbeard
Icewater
6 min read · Jun 18, 2023


A leading thinker in the realm of DAOs defined them this way:

“DAO” in its purest form refers to an unincorporated association of persons (an ‘organization’) utilizing censorship-resistant technologies to permissionlessly (‘autonomously’) engage in non-hierarchical, widely distributed (‘decentralized’) governance of shared resources and goals.

One challenge with this definition is that it excludes a lot of organizations that call themselves DAOs. One way to address this is to form organizations that are not pure DAOs. For example, a “BORG” is one option that merges autonomous technologies with traditional legal entities. A BORG is defined as:

The Cybernetic Organization (CybOrg or ‘BORG’) is a traditional legal entity that uses autonomous technologies (such as smart contracts and AI) to augment the entity’s governance and activities.

Fleshing out the concepts of BORGs and other DAO-adjacent entities is certainly good for the crypto industry. But it’s important not to forget the challenge of building actual DAOs. One reason we don’t see a lot of real DAOs is that there are unsolved engineering problems. It’s not just that humans aren’t pure enough. The technology isn’t ready.

Two Governance Problems

Before we can understand the engineering problems, though, we have to understand the human coordination problem. In other words, we need to understand the “governance of shared resources and goals” that DAOs are supposed to be engaged in.

All organizations fundamentally face two problems: first, how to gather resources together, and second, how to deploy those gathered resources in a way that achieves the goals of the organization. Neither one of these things is easy. Let’s call the first challenge the “collective action problem” and the second challenge the “principal-agent problem”.

The Collective Action Problem

There are many circumstances where people can achieve more by working together than they can working individually. But working together is not easy because people often have incentives to cheat. The famous prisoner’s dilemma is an example:

[Image: an example of the prisoner’s dilemma, from Wikipedia]
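The dilemma can be sketched as a simple payoff table. This is a minimal illustration using the classic values from the Wikipedia article (years in prison; the exact numbers are conventional, not essential):

```python
# Payoff matrix for the prisoner's dilemma.
# Keys: (my_action, opponent_action) -> (my_payoff, opponent_payoff),
# expressed as negative years in prison.
PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),  # both stay silent: 1 year each
    ("cooperate", "defect"):    (-3,  0),  # I stay silent, they betray me
    ("defect",    "cooperate"): ( 0, -3),  # I betray, they stay silent
    ("defect",    "defect"):    (-2, -2),  # both betray: 2 years each
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes my own payoff
    against a fixed opponent action."""
    return max(
        ("cooperate", "defect"),
        key=lambda my_action: PAYOFFS[(my_action, opponent_action)][0],
    )
```

Whatever the other player does, defecting is the better individual choice, even though mutual cooperation leaves both players better off than mutual defection. That gap between individual incentives and the group outcome is the collective action problem in miniature.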

Current crypto technology represents a huge advance in solving collective action problems. The bottom line is that it allows people to pre-commit themselves to certain actions, where the commitment depends on the commitment or follow-through of other parties.

One well known solution that can be easily implemented using smart contracts is called an assurance contract or provision point mechanism. In this kind of contract, people pledge resources toward a project. But those resources are only used if they are able to get enough contributors to achieve the scale necessary to successfully complete the project. If the target isn’t reached, everyone gets their money back.
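The logic a smart contract would enforce here is simple enough to sketch in a few lines. This is a toy model of the provision point mechanism, not a real on-chain contract; the class and method names are illustrative:

```python
# Toy sketch of an assurance contract (provision point mechanism).
# Funds are released only if total pledges reach the goal;
# otherwise every backer is refunded in full.
class AssuranceContract:
    def __init__(self, funding_goal: int):
        self.funding_goal = funding_goal
        self.pledges: dict[str, int] = {}

    def pledge(self, backer: str, amount: int) -> None:
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def finalize(self) -> dict[str, int]:
        """Settle the contract: pay the project if the goal is met,
        or return each backer's pledge if it is not."""
        total = sum(self.pledges.values())
        if total >= self.funding_goal:
            return {"project": total}   # goal reached: funds released
        return dict(self.pledges)       # goal missed: everyone refunded
```

Because the refund rule is enforced by code rather than by a trusted organizer, contributors can pledge without worrying that their money will be spent on a project that never reaches viable scale.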

The Principal-Agent Problem

Unfortunately, the principal-agent problem is a bit more complicated. The problem can be defined this way:

The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity (the “agent”) takes actions on behalf of another person or entity (the “principal”).

The principal-agent problem is one of the main reasons we want decentralized organizations in the first place. When you centralize power, the people who obtain control of the shared resources can use that control to act in their own interest, including, for example, trying to consolidate power. Think of an elected leader who uses their position to establish lasting control of previously democratic institutions.

In theory, smart contracts offer a tantalizing solution to the principal-agent problem: what if the “agent” is code that is executed using a trustless execution process? The code won’t have any interests that conflict with those of the principals.

For example, consider a group of one hundred people who together form a 60-of-100 multisig to engage in some common enterprise. They could operate by staging code that is only executed if it gets enough votes. There is little risk that the code will try to subvert the interests of the voters.
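The control flow of that voting gate is easy to sketch. A real multisig would verify cryptographic signatures on-chain; in this simplified model, assumed for illustration, votes are just distinct member names:

```python
# Minimal sketch of a 60-of-100 voting gate for staged code.
# Real multisigs verify signatures; here a vote is just a member name.
THRESHOLD = 60

def execute_if_approved(staged_action, votes: set[str]):
    """Run the staged action only if at least THRESHOLD
    distinct members have approved it."""
    if len(votes) >= THRESHOLD:
        return staged_action()
    return None  # quorum not reached: the staged code never runs

# 61 distinct approvals clears the 60-vote threshold.
result = execute_if_approved(
    lambda: "treasury transfer executed",
    votes={f"member{i}" for i in range(61)},
)
```

The key property is that the staged action is inert by construction: nothing short of the threshold can cause it to run, and nothing about the code changes between staging and execution.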

“Smart Contracts” aren’t Smart Enough

So far so good. Crypto technology helps overcome the collective action problem by allowing people to make binding commitments, and it allows us to overcome the principal-agent problem by enabling us to form self-executing agents.

But here’s the rub. The “agents” we have been able to create using smart contracts have been childishly simplistic compared to an actual human agent. You can give human agents very complex tasks like: here’s $1 billion, go build me a rocket that can get me to the moon. Try doing that with a smart contract.

Since human agents are more powerful agents than smart contract agents, sophisticated organizations with complex needs and goals will usually choose human agents.

What about Artificial Intelligence?

Given recent advances in AI, the obvious next question is whether things will change when computers become on par with humans in their abilities as agents.

I believe the answer is yes. At some point, robotic agents (including smart contracts) could potentially match or exceed the abilities of human agents. Of course, as AI doomers will remind you, when an AI becomes as powerful as a human agent, it may also develop conflicting interests. AI researcher Nick Bostrom gave a famous example where an AI programmed to make paperclips might decide to eliminate humanity to better achieve its goal.

So the problem with self-executing agents is that the more powerful the agent, the less trustworthy it is. This is the fundamental engineering problem for DAOs. Can you create a self-executing agent that is both powerful and trustworthy?

A first pass at this problem says that the answer is no. It seems impossible to “trust” a complex system using an evaluation method that is less complex than the system itself. But if you have a more complex system available for evaluation, wouldn’t it be a more powerful agent than the one you are trying to evaluate?

For this reason we can take it as a baseline that the most powerful agents will always be untrustworthy. However, this doesn’t mean that DAOs can’t become useful for many things. Organizations don’t always need the most powerful agent imaginable to achieve their goals.

For many purposes, there may be a sweet spot where autonomous agents are sufficiently complex to achieve the needs of an organization, but sufficiently simple to be independently verifiable using decentralized methods.

Asymmetric Verification

At the very heart of cryptography lies the simple fact that some problems are easier to verify than they are to solve. The canonical example of this is prime factorization. For example, it is easier to verify that 11 and 17 are indeed the factors of the number 187 than it is to find the factors to begin with.
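The asymmetry shows up even at toy scale. Verifying a claimed factorization is a single multiplication, while finding the factors requires a search (real cryptographic moduli are far too large for the brute-force search shown here):

```python
# Verifying a factorization is one multiplication;
# finding one requires a search over candidate divisors.
def verify_factors(n: int, p: int, q: int) -> bool:
    """Check a claimed factorization in constant time."""
    return p > 1 and q > 1 and p * q == n

def find_factors(n: int) -> tuple[int, int]:
    """Brute-force trial division: O(sqrt(n)) work."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError(f"{n} is prime")
```

For 187 the gap is trivial, but it grows explosively with the size of the number, and that gap between the cost of solving and the cost of checking is exactly what the verification scheme below relies on.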

This concept may help with the principal-agent problem because in some cases it may be easier to verify whether a course of action proposed by an agent is acceptable than it is to generate the course of action. So, a DAO could operate where one or more powerful AIs compete to propose code bits to be executed by a multisig using group resources. But each bit of code must be verified and approved in real time by other, less powerful AIs.

The reason we want the verifiers to be less powerful than the generator is that it helps with decentralization. If the verifiers need to be as powerful as the generators, it may not be possible for anyone other than whales to participate effectively in the verification process. One way to enforce this could be to limit the size of the code snippets that are evaluated at any given step. The concept is similar to the block-size debate in the Bitcoin ecosystem.
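The generate-then-verify pipeline might be sketched as follows. Everything here is hypothetical (the size cap, the verifier policies, and the function names are illustrative, not an existing DAO framework); the point is only the shape of the control flow:

```python
# Hypothetical sketch of a generate-then-verify pipeline:
# a powerful proposer emits small action proposals, and every cheap,
# independent verifier must approve a proposal before it executes.
MAX_PROPOSAL_SIZE = 200  # cap proposal size so cheap verifiers can keep up

def run_if_verified(proposal: str, verifiers, execute) -> bool:
    """Execute a proposal only if it is small enough to check
    and every verifier approves it."""
    if len(proposal) > MAX_PROPOSAL_SIZE:
        return False                 # too big to verify cheaply: rejected
    if all(verify(proposal) for verify in verifiers):
        execute(proposal)
        return True
    return False                     # any single rejection blocks execution

# Example verifiers: simple, independent policy checks.
no_self_dealing = lambda p: "send_to_proposer" not in p
within_budget   = lambda p: "amount=1000000" not in p

executed = []
ok = run_if_verified(
    "transfer amount=500 to=grants",
    [no_self_dealing, within_budget],
    executed.append,
)
```

The size cap plays the same role as a block-size limit: it keeps the per-step verification cost low enough that many independent, modestly resourced verifiers can participate, which is what keeps the check decentralized.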

Conclusion

Ultimately, the fundamental reason that most DAOs are either 1) not real DAOs or 2) not very effective is that human agents are more powerful than the autonomous code bits that serve as the agents of current DAOs.

This limitation can be overcome with the advent of AI, but the more powerful the agent, the bigger the principal-agent problem becomes. One pathway to addressing this fundamental challenge is to create a system of real time agent-verifier interaction that allows the use of powerful AI agents using decentralized verification.
