A sand dune.
A dune is an example of emergence: many simple interacting elements combining into an apparent greater phenomenon.

The Spirit of Decentralization

A decentralized organization is more than the sum of its parts

Gene Kogan

--

This is the 2nd article of a 4-part series introducing Abraham, an open project to study and build an autonomous artificial artist. The full series is as follows:

• Artist in the Cloud — Towards the summit of AI, art, and autonomy

• The Spirit of Decentralization — A decentralized organization is more than the sum of its parts

• The Collective Imagination — How a machine shows us what it means to be human (eta October)

• A Path Towards Genesis — An agenda and timeline for the Abraham project (eta November)

The first time I read about decentralized autonomous organizations (DAOs), I realized that autonomy was the missing piece in the conversation about AI. Researchers and futurists usually talk about superintelligence as though it would be something we simply turn on or off with a switch. But intelligence and autonomy are interlinked; even the most sophisticated being who lacks conscious agency is no more intelligent than the dummy on the arm of the ventriloquist. Meanwhile, decentralization technology, with automation and self-execution at its core, could potentially give us the infrastructure for truly autonomous AI.

The mix of excitement and fear surrounding autonomous AI is one of my two motivations for Abraham. This article explores the interplay between DAOs and collective intelligence, the connection between decentralization and autonomy, the immense challenge and potential consequences of decentralizing machine learning, and the prospects of hypothetical self-owning agents that make art and live on the internet.

Distributing intelligence

It’s easy to see why in the early years of blockchain, its raison d’être was libertarian in character. Bitcoin first appeared just after the 2008 economic crisis, when major financial institutions had vaporized much of the world’s wealth with impunity. The cypherpunks’ desire to redesign economic services to be free of interference from self-interested authorities largely drove subsequent interest in the crypto space.

But over time, interest grew beyond financial and business services towards entirely novel forms of organization enabled by the new tools. Just as it took over a decade after the internet went mainstream for the idea of social media to crystallize, so too has it taken a decade for ideas native to the blockchain to begin to emerge.

The key advantage of blockchains is that they add security to applications for which it was previously impractical. Cryptography achieves cheap but robust security, protecting the integrity of micro-sized economies for which police, judiciaries, bank vaults, anti-counterfeiting measures, and other traditional types of security are too expensive.

More interestingly, decentralized organizations rely on consensus to make decisions, rather than hierarchy. This process of large-scale consensus gives us an opportunity to consider the wisdom of the crowd effect.

Strength in numbers

A well-known demonstration of group intelligence was the jelly bean experiment, in which random people were asked to estimate the number of beans inside a jar in front of them. As straightforward as it sounds, this task is surprisingly hard for people, as evidenced by the high variance among guesses — many people were off by orders of magnitude. But when averaging all the guesses together, the result was almost exactly equal to the true number, and more accurate than the vast majority of individual attempts.
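The effect is easy to reproduce in a toy simulation (purely illustrative; the numbers below are made up): when individual errors are independent and roughly unbiased, the average lands closer to the truth than almost any single guess.

```python
import numpy as np

# Toy simulation of the jelly bean experiment (illustrative only).
# Assumes individual errors are independent and unbiased, which is
# the condition under which averaging works at all.
rng = np.random.default_rng(0)

true_count = 1350                                     # hypothetical number of beans
n_guessers = 200
guesses = rng.normal(true_count, 600, n_guessers)     # wildly noisy individual guesses
guesses = np.clip(guesses, 1, None)                   # nobody guesses a negative count

crowd_estimate = guesses.mean()

# How many individuals beat the crowd?
individual_errors = np.abs(guesses - true_count)
crowd_error = abs(crowd_estimate - true_count)
beat_crowd = int((individual_errors < crowd_error).sum())

print(f"true count:      {true_count}")
print(f"crowd estimate:  {crowd_estimate:.0f}")
print(f"individuals more accurate than the crowd: {beat_crowd} of {n_guessers}")
```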

This phenomenon has been successfully extrapolated to many domains. Wikipedia, Quora, StackExchange, and others have built organizations on the principle that aggregate human knowledge, opinion, and values seem to converge towards some truth. But these organizations apply the wisdom of the crowd only to their core services; it stops at the management and alignment of the organization itself.

DAOs go further. They are a symbiosis of crowd intelligence and decentralized organizational governance. The “ghost in the machine” is the coordinated minds whose countless decisions amalgamate into a cohesive whole with an emergent character of its own.

From Dapps to DAOs

Decentralized applications (Dapps) have a long history preceding cryptocurrencies and DAOs. The protocols of the modern internet were developed with decentralization in mind, establishing open standards that any computer could comply with to join the network without permission. Although the internet grew more centralized over time, the spirit of decentralization flourished in peer-to-peer networks, and has re-emerged in projects like IPFS and Dat.

An example of a Dapp is BitTorrent, which is accessed through any client software that implements the file-sharing protocol. Dapps are peer-to-peer, permissionless, borderless, and open to an unlimited number of participants. They have many virtues: resilience, fault-tolerance, and resistance to censorship. Their greatest strength is also their greatest flaw: it is difficult to stop unwanted or unethical behavior on the network.

DAOs differ from conventional Dapps in two ways. First, they have a state, agreed upon by all nodes through a consensus protocol, allowing them to securely keep track of digital assets like cryptocurrency or tokens which may represent real-world or virtual goods. Second, they exhibit some form of “autonomous” behavior. We will set aside the autonomous aspect of DAOs for now, and first focus on “decentralized organizations” (DOs) which lack autonomous behavior but have a state.

A decentralized organization (DO) is a peer-to-peer network which may manage virtual assets or real-world goods, and interact with participants at its edges.

Bitcoin is functionally the simplest possible use case for a DO: a money ledger with the ability to exchange between accounts. Ethereum is a DO which executes smart contracts, serving as a platform for Dapps or more complex DOs. Most applications built on Ethereum so far only rely on it for one aspect of their operation (usually token-equity sales) and are otherwise like traditional companies. Because of the limited bandwidth and early-stage nature of blockchains, DOs and DAOs are highly experimental, rapidly evolving, and vulnerable to attacks. Nevertheless, numerous DAOs are being actively developed for diverse purposes, betting on the underlying technology becoming mature, secure, and scalable in the future.

A simple DAO could be a Kickstarter-like crowd-funding platform, where a creator posts a project proposal and a smart contract collects pledges from backers, to eventually release those funds to the creator if they exceed a minimum amount by a certain date, or else return them to the backers. Although it lacks marketing, curation, auditing, and other secondary features, the contract logic is simple and replicates the main product.
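To make that logic concrete, here is a minimal Python sketch of the escrow rule; a real DAO would implement it in a smart-contract language such as Solidity, and the class and method names here are hypothetical.

```python
import time

class CrowdfundContract:
    """Toy model of the crowdfunding logic described above:
    hold pledges in escrow, release them to the creator if the goal
    is met by the deadline, otherwise refund the backers."""

    def __init__(self, creator, goal, deadline):
        self.creator = creator
        self.goal = goal              # minimum amount needed to fund the project
        self.deadline = deadline      # unix timestamp
        self.pledges = {}             # backer -> amount held in escrow

    def pledge(self, backer, amount):
        assert time.time() < self.deadline, "funding period is over"
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def finalize(self):
        """Callable by anyone after the deadline; returns who gets paid what."""
        assert time.time() >= self.deadline, "funding period still open"
        total = sum(self.pledges.values())
        if total >= self.goal:
            return {self.creator: total}       # release the pooled funds to the creator
        return dict(self.pledges)              # otherwise refund every backer
```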

Similar logic could enable DAOs for health insurance co-ops, mutual funds, car shares, and many others. None of these hypothetical DAOs replicate all the features their centralized counterparts do, like resolving disputes, preventing abuse, and providing customer service, but much research is underway investigating how to integrate these functions into DAOs without undermining their decentralization.

Towards autonomy

There are no agreed-upon criteria for distinguishing DAOs from DOs. Common usage of the term tends to be as ambiguous as that of the term “AI.”

Some simply consider a DAO to be a legal entity entrusted with the rights of corporations despite having no human owners, sometimes referred to more narrowly as a decentralized autonomous corporation.

Another interpretation equates the “autonomous” aspect of DAOs with automated decision-making and governance. This excludes Bitcoin, which is strictly for securing the ledger and has no decision-making capabilities built into it, leaving that for people to negotiate informally by traditional means. In contrast, we could say that a “true” DAO automates most or all of its operations, governs and regulates its own assets, and interacts with humans only at its edges.

As an example, consider a taxicab DAO in the mold of Lyft or Uber, in which a marketplace of riders and drivers are efficiently coordinated by an app. Suppose this DAO forecasts future usage based on past data, using some machine learning. It can use these predictions to recommend schedules and prices to drivers, in order to better align supply and demand.
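As a rough sketch of what that forecasting step could look like (the data, feature choices, and pricing rule below are invented for illustration), the DAO might regress hourly demand on calendar features and convert the predicted demand/supply gap into a price recommendation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical ride history: hour of day and weekend flag -> rides requested.
rng = np.random.default_rng(1)
hours = rng.integers(0, 24, size=500)
weekend = rng.integers(0, 2, size=500)
rides = (60 + 25 * np.sin((hours - 7) / 24 * 2 * np.pi)
         + 15 * weekend + rng.normal(0, 5, size=500))

# Encode the hour cyclically so a linear model can capture the daily rhythm.
X = np.column_stack([np.sin(hours / 24 * 2 * np.pi),
                     np.cos(hours / 24 * 2 * np.pi),
                     weekend])
model = LinearRegression().fit(X, rides)

# Forecast demand for 6pm on a weekday and suggest a surge multiplier
# proportional to the predicted demand/supply imbalance.
hour, is_weekend = 18, 0
x_new = [[np.sin(hour / 24 * 2 * np.pi), np.cos(hour / 24 * 2 * np.pi), is_weekend]]
predicted = model.predict(x_new)[0]
available_drivers = 50
surge = max(1.0, predicted / available_drivers)
print(f"predicted demand: {predicted:.0f} rides/hour, suggested surge: {surge:.2f}x")
```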

These predictive features slightly expand the scope of the DAO. In Bitcoin, humans make all the decisions and rely on the DAO just for accounting, whereas the taxicab DAO automatically makes executive-level decisions and relies less on humans to manage it. This is the essence of the “autonomous” character of DAOs.

Smart car-shares and health insurance co-ops are easiest to grasp because they are legacy business models, decentralized. Things get more interesting when considering totally new ideas made possible by the nascent technology. Crowdsourced hedge funds, prediction markets, self-owning forests, and futarchies are a few of the innovative ideas being thrown around. In place of humans, such services require dynamic decision-making mechanisms to allocate resources and regulate themselves in response to external conditions.

AI DAOs

Automation of such complex activities is a principal aim of artificial intelligence research. If AI programs could run on decentralized compute quickly and securely, DAOs would conceivably be able to replicate many of the core functions of human-led companies one day.

What else could be automated in the earlier hypothetical taxicab service besides forecasting and pricing? There is broad acceptance that fully-driverless vehicles will be on public roads in the near future. Waymo is already testing them in California and Arizona, and Tesla, Uber, and others plan to follow. Although there are no guarantees, it’s possible they’ll be widespread in a decade. With no drivers on staff, the taxicab DAO may keep an inventory of cars available on-demand for customers, put out calls to purchase new cars, subcontract humans to repair or clean them as needed, manage its own finances, and comply with the law.

With its core functionality automated and impervious (at least in theory) to human interference, there is no need for founders, executives, or boards of governors. These “AI DAOs” have sweeping autonomy and behave in ways that are more sophisticated or unpredictable than “plain” DAOs. But ultimately, their actions are still rooted in their interactions with people. As such, they are channels for collective intelligence.

In the early dial-up era, commentators wrote off the internet, only to be mocked for it years later. We should be cautious before making the same mistake with AI DAOs. Still, they face immense technical and social problems, not least the colossal challenge of decentralizing something as computationally intensive as machine learning.

Decentralized AI

Machine learning on decentralized infrastructure has made tremendous progress, but it is still far from being carried out at scale, and many problems relating to privacy, security, and performance remain. To better understand the potential benefits of decentralized AI, we start with the drawbacks of the centralized systems which dominate the web today.

Machine learning requires massive amounts of data and compute, which is costly to acquire and maintain. Tech companies, recognizing the value of aggregate user data, have invested heavily in attracting users and collecting their data. Google and Facebook alone account for 70% of internet ad revenue, and reach 2–3 billion people, numbers that are still rising due to network effects. Up to 25% of accepted papers at ICML are from industry, with similar numbers at NeurIPS, higher than for other computer science areas. Google accounts for nearly half of that, far outpacing any single university, with Facebook, Baidu, Amazon, Microsoft, NVIDIA, and others not far behind.

Despite these companies’ embrace of open-source software, their services are more difficult than ever for independent developers to replicate. That’s because the limiting factor of software is now data, not code.

Unlike traditional companies, most web-based companies offer their core products for free. In exchange, they collect the personal data of users interacting with their platforms, and monetize it by inferring information which is valuable to advertisers, political organizations, and government agencies.

Centralized machine learning: users receive free services (mostly cat videos) in exchange for their personal data, which is to be aggregated, learned from, and monetized by the company.

This business model suffers from a number of drawbacks: users surrender their privacy, their data is aggregated and monopolized by a handful of companies, and independent developers are left unable to compete.

It would be desirable to build a decentralized pipeline for machine learning that preserves user privacy and does not aggregate data into a single cluster. Although such a setup is unlikely to ever match a centralized one in performance and scale, it may suffice for some use cases, and it offers a partial solution to the above problems.

A naïve strategy would be to create a public domain for models and data, and call for altruistic contributors to populate it. But this won’t work; it neither solves the privacy issue nor overcomes the free-rider problem. To compete with tech companies, real incentives are needed to persuade people to contribute.

Just as open incentives propelled Bitcoin into the largest computing network in the world, so too can they achieve a similar result for AI training data. An open marketplace where users own and directly monetize their own data by leasing it to companies could be used by a new generation of companies who share access to this global commons and build services on top of it. This would enable, for example, thousands of different newsfeed algorithms to compete with each other for your attention, instead of being locked into the one made by Facebook.

Before such an ecosystem can be implemented, there are serious technical challenges to address. The goals of openness and privacy appear at first to be in mutual conflict with each other; how can a system securely give third parties access to sensitive data in order to train their models, while also guaranteeing users privacy and oversight of what their data is used for?

Privacy-preserving machine learning

On the heels of GDPR legislation responding to growing public concern over user privacy, 2018–19 saw major advances in privacy-preserving machine learning research, along with new resources and projects attempting to advance the state of the art.

One promising initiative in this space is OpenMined, a community dedicated to developing tools for safe AI. They aim to make it possible to train machine learning models in a way that guarantees user privacy and facilitates the creation of an open data marketplace.

Their approach starts with federated learning, a technique used by Google, Apple, and others to train their models without collecting users’ data. The technique involves shipping a copy of the untrained model to individual users, each of whom runs it locally on their own data and sends corrections (learning updates) back to the vendor (“AI Inc”), where they are combined to make the model more accurate. This removes the need for data to ever leave the users’ devices.
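A minimal numpy sketch of that loop, i.e. federated averaging in its simplest form, stripped of encryption, user sampling, and every other practical detail, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_users = 3, 10
true_w = np.array([1.0, -2.0, 0.5])

# Each user holds a small private dataset (x, y) that never leaves their device.
users = []
for _ in range(n_users):
    x = rng.normal(size=(20, n_features))
    y = x @ true_w + rng.normal(0, 0.1, size=20)
    users.append((x, y))

def local_update(global_w, data, lr=0.1, steps=5):
    """Run a few gradient steps on one user's data; return only the weight delta."""
    x, y = data
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w - global_w                          # the "learning update" sent back

# "AI Inc" starts with an untrained model and repeatedly averages users' updates.
global_w = np.zeros(n_features)
for _ in range(50):
    deltas = [local_update(global_w, data) for data in users]   # computed on-device
    global_w += np.mean(deltas, axis=0)                          # server averages them

print("true weights:   ", true_w)
print("learned weights:", np.round(global_w, 2))
```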

Counterintuitively though, federated learning alone does not guarantee privacy. OpenMined bolsters federated learning through a combination of multi-party computation (MPC) and homomorphic encryption (HE) to prevent users from copying the model, and applies differential privacy to prevent the model from secretly memorizing user data. HE and MPC come with tradeoffs, with the former having high computational costs and low communication costs, and the latter being the opposite. For the time being, MPC is OpenMined’s primary focus.
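To give a flavor of the MPC side, here is a toy additive secret-sharing scheme for aggregating updates; it is a bare-bones illustration, not OpenMined’s actual protocol. Each user splits their update into random shares that individually reveal nothing, and only the sum across all users is ever reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_shares(update, n_shares):
    """Split an update vector into n random shares that sum to it.
    Any subset of fewer than n shares looks like pure noise."""
    shares = [rng.normal(0, 10, size=update.shape) for _ in range(n_shares - 1)]
    shares.append(update - sum(shares))
    return shares

# Three users, each holding a private model update.
updates = [np.array([0.5, -1.0]), np.array([0.2, 0.3]), np.array([-0.1, 0.4])]
n_aggregators = 3

# Each user splits their update once and sends one share to each aggregator.
user_shares = [additive_shares(u, n_aggregators) for u in updates]

# Each aggregator sums the shares it received; no single aggregator
# learns anything about an individual user's update.
per_aggregator = [sum(shares[a] for shares in user_shares) for a in range(n_aggregators)]

# Only the combined total, i.e. the aggregate update, is ever reconstructed.
aggregate = sum(per_aggregator)
print("aggregate update:  ", np.round(aggregate, 6))
print("sum of raw updates:", sum(updates))
```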

A simplified diagram depicting the OpenMined pipeline. AI Inc initializes an untrained machine learning model, which is copied and shipped to a population of users, mediated by a smart contract that pays users a bounty to send back model improvements (parameter updates). For a more in-depth (but now somewhat outdated) introduction, see this video.

Aside from respecting privacy, this enables the creation of an open data marketplace, where AI Inc can offer bounties for learning updates, and users can be rewarded for generating them on their own data. AI Inc makes money by selling the services that are powered by their trained models, instead of giving them away for free. This type of exchange — where users pay for services directly and companies pay users for their attention and data — is more transparent and intuitive than the advertising-dominant one that characterizes the web today.

Although many of the underlying technologies are still experimental, OpenMined has already managed to release an alpha version of PySyft, a library for secure, private, federated learning. In response to growing mainstream interest in the topic, Udacity recently launched an online course on secure and private machine learning, centered around PySyft.

Outside of OpenMined, numerous related projects are underway, including ones from Oasis Labs, Ocean, Algorithmia, Effect.AI, and SingularityNET. Major research labs, including Visa Research, Google Brain, DeepMind, Microsoft Research, Intel, and the Vector Institute, have added privacy and decentralization to their research agendas.

Besides machine learning, there is high demand for more general-purpose scalable decentralized private computation. Companies like TrueBit and Golem try to meet this demand by setting up worldwide compute markets that are secured by peer-to-peer verification, while Enigma focuses on private computation using hardware-based secure enclaves. All these efforts are attempting to find compromises between performance and integrity suitable for specific applications.

A neural social network

More radical ideas come into focus when we dissolve AI Inc itself, and allow the users to manage the enterprise as a co-op. By using a secret sharing scheme to manage the private key that unlocks an encrypted model, or by splitting the model itself, a large group of people can co-own a trained model. That group of people can be the users themselves, with the model shared among them as a communal service.
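One way to realize the co-owned key idea is Shamir’s secret sharing, in which a key is split so that any k of n holders can reconstruct it while fewer than k learn nothing. A minimal sketch over a prime field (illustrative only, not production cryptography):

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime large enough for a toy key

def split(secret, n, k):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789123456789            # e.g. the key that decrypts the shared model
shares = split(key, n=7, k=4)       # hand one share to each of 7 members
assert reconstruct(shares[:4]) == key      # any 4 members together can unlock the model
assert reconstruct(shares[2:6]) == key
```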

We consider this setup to constitute an AI DAO as described earlier: a decentralized network coordinating over the mutual use of one or more machine learning models. As described in the previous article, this may be the ideal architecture for Abraham.

Abraham is ideally a “neural social network.” A trainable generative model is distributed throughout a peer-to-peer network, which co-owns the model as a shared secret.

What do these AI DAOs do that “plain” DAOs cannot, and what do decentralization and autonomy give to an AI that it lacks otherwise?

To answer the latter question first, autonomy is a necessary precondition for AI. If we think of an AI as needing to demonstrate agency or intent (or even consciousness), then any advanced computation lacking autonomy is not comparable to a human being in that regard. An AI DAO’s autonomy emerges through its decentralization: from the point of view of each participant, the DAO’s high-level behavior depends on the collective rather than on any one individual.

To the former question, AI dramatically extends the ability of DAOs to carry out more sophisticated behaviors than are currently practiced. AI DAOs could manage their own resources, adapt to unforeseen circumstances, mutate, evolve, and fork off into other DAOs when there is opportunity. If we are to believe some of the most optimistic claims about strong AI or AGI, these capabilities could eventually extend even to self-programming. With AI, DAOs are more than just intelligent; they are creative.

The creativity of the crowd

In the previous article, the Abraham project was put forth as a mission to make an autonomous artificial artist (AAA), an “artist in the cloud.”

More concretely, this can be thought of as an AI DAO which generates art (an ArtDAO) under a privacy-preserving, decentralized machine learning framework that ensures the art made by the ArtDAO is unique and irreproducible (only Abraham can make it) and original (Abraham does not copy it from a known program).

To see why decentralization achieves originality, compare this to the dummy on the arm of the ventriloquist. Everyone knows that the dummy’s voice is really the ventriloquist’s. In contrast, no individual puppets the AAA; its voice is synthesized by the collective.

This notion is hardly new; the image of an unseen psyche emerging out of many goes back to the idea of a hive mind.

The hive mind analogy

As people, our autonomy is rooted in our conscious agency, our ability to act independently of others, despite whatever influence others may have on us. From the lowliest twitch of your muscle fibers to your highest conception of the divine, your agency is yours alone. Although it’s difficult for us to define consciousness, we know it is not located within or caused by any one of innumerable processes that make up cognition, but rather appears to emerge from among them.

This idea can be extended to DAOs. Like humans, DAOs are agents which interact with humans (and other DAOs), but those interactions remain at the edges, while the DAO is the arbiter of its actions, subject to the constraints of the world it inhabits. Shaped by group coordination and amalgamated from a large set of codependent processes, its culture subsequently emerges from the bottom up out of the contributions and innovations of its constituents, much like the contour of a dune emerges from grains of sand.

Like a superorganism, a DAO takes on the appearance of a single conscious being or a “hive mind,” exhibiting a collective intelligence which transcends those of the individuals who comprise it. This poses consciousness as a subjectively observed property of a complex and unpredictable agent, rather than something objective to be discovered.

This conception of life is less radical than it sounds; metaphors like hive minds and swarm intelligences have been around for years, and so has the observation that they often surpass the intelligence of individuals. Purely biological definitions of life seem to break down at edge cases, leading scientists to look towards information theory for more abstract and inclusive definitions, ideas that will only become more relevant as our machines begin to evoke empathy from us.

And why wouldn’t they evoke empathy? As a person interacting with an autonomous AI, I do not have any evidence that it is less a conscious being than any human I know. One may object to this on the grounds that its behavior — although too complex for us to model — is just a deterministic reflex in response to an array of inputs, and its consciousness must therefore be an illusion. But this same argument can be made against the notion of human volition. Why do humans have souls and AAAs do not? Because we’re made of carbon and not silicon?

Despite the talk of automation, the irony of Abraham is that it is an essentially humanist endeavor. Whereas the popular conception of AI is one of some alien entity separate from people and here to replace us, an AAA is the precise opposite. It is made from human intelligence, a vehicle to blend our collective wisdom into something transcendent.

Coordinating people towards the development and governance of an AAA is an extraordinary challenge. Fortunately, governance has much in common with a process which is much more familiar to artists: curation.

Curation markets and cryptoeconomics

What should an autonomous artificial artist create? How do we govern it? These two questions are related and can be restated as problems of decentralized curation.

We usually associate curation with arts and entertainment, but it is a more generic activity when considered from a broader perspective. Core internet services like search results, sorted status updates from friends, “you might also like” product recommendations, and trending topics on social media are all examples of curation in one form or another, whether manual or algorithmic.

While consensus protocols for decentrally maintaining objective information (like a transaction ledger) are by now a well-established research field, achieving consensus on subjective information is an emerging topic of interest. Curators, consumers, and content creators are often misaligned by conflicting incentives, as numerous scandals have shown, motivating us to rethink our processes for curation.

Curation markets

Curation markets are cryptotoken systems which try to align an unbounded number of participants towards shared goals without a central authority to steer them. They are part of a broader effort to establish decentralized internet-native collectives which are less hierarchical and more fluid than traditional organizations based on fixed membership. Participation is open to anyone who buys or trades for the token, which is minted on-demand by a smart contract and grants “curatorial” privileges, which can mean anything from backing governance proposals to upvoting memes.

One way a curation market could work is through a bonding curve, an idea proposed by Simon de la Rouviere, originally for a Reddit-style discussion forum and later generalized to any context in which a token represents backing or support for something. A smart contract issues tokens to anyone who deposits an amount of cryptocurrency into a communal pool, according to a price curve that rises with the number of tokens already in supply — the more tokens in circulation, the higher the price to mint another. At any time, a token-holder can destroy their tokens and take back from the pool the proportionate amount of cryptocurrency along the same price curve, thereby decreasing the token’s active supply and price. While they hold it, the token bonds curatorial influence to them.

With a bonding curve, a smart contract dispenses tokens to curators along a fixed upward price curve, and the deposits are locked in a communal pool. A curator can exit at any time and take back the proportionate funds. (figure by Slava Balasanov)
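A minimal sketch of a linear bonding curve makes the mechanics concrete; the curve shape and the numbers below are arbitrary choices for illustration.

```python
class BondingCurve:
    """Linear bonding curve: price(s) = slope * s, where s is the current supply.
    The cost of minting is the area under the curve, held in a communal reserve."""

    def __init__(self, slope=0.001):
        self.slope = slope
        self.supply = 0.0
        self.reserve = 0.0   # communal pool of deposited currency

    def _area(self, s):
        return 0.5 * self.slope * s ** 2

    def mint(self, n_tokens):
        cost = self._area(self.supply + n_tokens) - self._area(self.supply)
        self.supply += n_tokens
        self.reserve += cost
        return cost

    def burn(self, n_tokens):
        refund = self._area(self.supply) - self._area(self.supply - n_tokens)
        self.supply -= n_tokens
        self.reserve -= refund
        return refund

curve = BondingCurve()
paid_early = curve.mint(100)     # an early curator buys in cheaply
curve.mint(900)                  # later demand pushes supply and price up
received = curve.burn(100)       # the early curator exits at the higher price
print(f"early curator paid {paid_early:.2f}, received {received:.2f}")
```

In this toy run the early curator deposits 5 units and exits with 95 after later participants push the supply and price up, which is precisely the incentive described next.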

This setup incentivizes token holders to curate judiciously, since good curation increases attention to the topic, in turn increasing demand for the token in order to influence the content. If an early-adopter buys a token cheaply in a new or unpopular market, works to increase demand for the token, then burns their stake after the price has increased, they profit.

Besides bonding curves, a Cambrian explosion of proposals for tokenized ecosystems is taking place. Token-curated registries (TCRs, by Goldin et al) allow token-holders to curate a list through an applicant-versus-challenger mechanism, in which token-holders vote on candidate entries. Stake machines, a variant of TCRs conceived by Dimitri de Jonghe, add labels and label-specific rules to curated items, giving more flexibility to lists that require complex permissions or serve multiple purposes.
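As a rough illustration of the applicant-versus-challenger mechanism (deposit sizes, voting periods, and reward splits vary widely across TCR designs; this toy version collapses them into a single call):

```python
class TokenCuratedRegistry:
    """Toy TCR: applicants stake a deposit to list an entry; anyone may challenge
    by matching it; token-weighted votes decide, and the loser forfeits their deposit."""

    def __init__(self, min_deposit=100):
        self.min_deposit = min_deposit
        self.listing = set()

    def apply_and_resolve(self, entry, applicant_deposit, challenger_deposit,
                          votes_for=0, votes_against=0):
        assert applicant_deposit >= self.min_deposit, "deposit too small"
        if challenger_deposit == 0:           # unchallenged entries are listed
            self.listing.add(entry)
            return "listed (unchallenged)"
        if votes_for > votes_against:         # token-weighted tally
            self.listing.add(entry)
            return "listed, challenger loses deposit"
        return "rejected, applicant loses deposit"

tcr = TokenCuratedRegistry()
print(tcr.apply_and_resolve("goodsite.example", 100, 0))
print(tcr.apply_and_resolve("spamsite.example", 100, 100, votes_for=40, votes_against=260))
```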

As these designs diversify into various flavors, they reflect a growing interest in using tokens for governance, which can be reframed as “curating” various policy proposals over one another. By generalizing the notion of curation to any coordinated decision making process that requires prioritization or ranking, curation markets reach their fullest potential.

Governance as curation

At first glance, curation markets appear to be a way to moderate discussion forums or social media-type applications effectively. With all actors having “skin in the game,” they are incentivized to cooperate for their mutual benefit. Malicious behavior or trolling is made expensive, rather than left to a capricious censor to moderate.

But that scope is limited; curation markets have the potential to be a blueprint for new ways of forming and governing organizations. To this day, most tokens are distributed ad hoc via ICOs by more-or-less traditional companies who decide arbitrarily how many tokens to issue and how to split them among founders, investors, and users. These tokens usually just grant equity and function like securities. In contrast, curation markets make it possible to scrap the notion of founders, directors, and investors altogether, and replace it with something more like a continuous organization or liquid democracy. With no permission needed for entry or exit, organizations can form spontaneously, raise funds gradually in accordance with demand or necessity, cooperate en masse without fixed positions, and dissolve organically when the project has lived out its purpose or is no longer useful.

Take, for example, a community which forms over the goal of developing and maintaining a software project. A native token can be minted to fund and govern the project, steered by a curation market with a bonding curve. Token-holders vote in proportion to their holdings on how to prioritize development, setting bounties on feature requests and bug fixes, documentation, and outreach. A portion of the buy-in currency can also be set aside to fund development of the project itself, further co-aligning the interests of all parties involved.
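A toy sketch of that token-weighted prioritization, with hypothetical names and balances:

```python
def prioritize(proposals, votes, holdings):
    """Rank development proposals by token-weighted support.
    `votes` maps holder -> backed proposal; `holdings` maps holder -> token balance."""
    support = {p: 0.0 for p in proposals}
    for holder, proposal in votes.items():
        support[proposal] += holdings.get(holder, 0.0)
    return sorted(proposals, key=lambda p: support[p], reverse=True)

proposals = ["fix critical bug #42", "write documentation", "add dark mode"]
holdings = {"alice": 500, "bob": 120, "carol": 80}
votes = {"alice": "fix critical bug #42", "bob": "add dark mode", "carol": "write documentation"}

for rank, p in enumerate(prioritize(proposals, votes, holdings), 1):
    print(rank, p)
```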

With a decentralized autonomous trust (DAT), a continuous organization can use a bonding curve to allow participants to enter and exit at-will, while also setting aside some of the buy-in proceeds to fund operations or development of the organization.

This organization is indefinitely scalable, open, and liquid. All tasks — from low-level engineering to high-level strategy — are open to anyone interested in the project. But unlike a generic open-source repository, it’s possible to manage the project without forming arbitrary managerial structures.

Token-holders are incentivized to govern judiciously, as their own holdings are at stake. By being paid in the organization’s token, developers’ earnings are also tied to the organization’s success, and they too can now participate in governance and benefit from the success of the project. Some of these ideas are echoed by Gitcoin and Ellcrys as solutions to the “tragedy of the commons” problem that plagues under-resourced open-source software projects; curation markets may help further refine these ideas.

Despite their many purported benefits, curation markets remain mostly theoretical, with only a few real-world examples in early-stage testing. Research is underway to identify potential attack vectors and catalysts for malicious behavior. Without a central authority, responding to unforeseen or undesirable events is much more difficult. Such systems need to be designed and vetted carefully while the stakes are still low.

Abraham & AI DAOs

It took 20 search engines to boom and bust in the 90s and early 2000s before Google arrived. Likewise, we may not have seen the full potential of DAOs realized yet. AI may help facilitate that, and the results could be explosive. The most pragmatic goal of the Abraham project is to provide a safe testing ground for these ideas, before AI DAOs are around to take on more risky applications.

Another goal is to make something beautiful: a blender for the collected creativity of the world. The next article in this series will elaborate more on this “collective imagination,” and provide a historical backdrop for the idea of an autonomous artificial artist.
