A new model — the Civic Trust — may help protect the public’s interest as civic tech evolves.
Trust in institutions, globally, is at an all-time low. Government, business, journalism, and even non-profits are all losing the public's faith. The US Government, in particular, has hovered near its lowest approval ratings in history for an uncomfortably long time. And, given recent history, it's hard to blame us.
At the same time, there's an unprecedented amount of experimentation in building engagement in collective action, largely through technology. In the last few years, we've seen an explosion of new approaches to a range of organizing challenges — from engaging the crowd (sourcing, funding, investing); to creating open structures (data, source, organizations); to changing the definition of "good" business (impact investing, B Corps, social entrepreneurship); to sector-specific approaches (civic technology, ICT4D, edtech); among many others. Each of these is an effort to expand or change the way that traditional organizations make, fund, and execute decisions. And yet, wealth inequality (in the US in particular) is growing rapidly, and technology has yet to deliver on building more democratic systems. Why?
One reason is that we're using the same organizational models we've used for decades. The way that we legally structure organizations (corporations, 501(c)3s, limited liability companies, etc.) defines their incentives, values, decision-making structures, and priorities. Incorporation models are the DNA of organizations — even those with revolutionary approaches to collective action — and that DNA replicates the same structural flaws and risks as traditional organizations. We are trying to build the future with the same organizational structures that gave us the present and — as Einstein reportedly said — "We cannot solve our problems with the same thinking we used to create them." Public decision-making structures are, collectively, how we think.
Amidst the global collapse in trust, there are many trying to build technological solutions to our trust deficit — both better verification and "trustless" systems. The most recent (and arguably most credible) technology solution is the blockchain — the distributed administration and ledger architecture made famous by Bitcoin. The blockchain offers a huge amount of potential — in particular, it decentralizes administration (and administration costs), improves transparency (for those capable of understanding it), and increases the reliability of complex interactions. As a way to contextualize blockchain's potential impact on the ecosystem and evolution of technology adoption, Nick Grossman's "Venture capital vs. community capital" is a great overview.
However, as Rachel O'Dwyer's "The Revolution Will (not) Be Decentralized: Blockchains" points out, the primary challenges in collective action, while interesting technologically, still come down to mediating relationships, managing governance structures, and being able to set common standards. Blockchains create huge opportunities to transparently design and manage distributed processes, but they don't enable us to evolve norms, resolve disputes arising from the transactions they administer, or meaningfully solve the definitional issues that lie at the core of our representative governance models. To invert the "law cannot solve technology's problems" trope: we can't expect technology to solve our organizational problems.
One of our most challenging organizational problems is how to balance the financial needs of information channels against the integrity of the information and relationships those channels represent. Publishing platforms make it even more complicated — they use their content and engagement structures to monetize our behavior and relationships. The majority of privately owned platforms — even well-intentioned ones — sell some form of access to users, data, services, and servants — all of which distorts the integrity of the underlying relationships, a point the respected, pseudonymous cybersecurity commentator @SwiftOnSecurity has made on Twitter.
And that's where it gets really concerning — the more civic groups and governments rely on commercial technologies as intermediaries, the more potential there is to strain or distort the already struggling trust relationships between the public and institutions. Facebook has already shown that it's willing to manipulate hundreds of thousands of people without consent — and according to Quinn Norton, it's far from the only one. Even where such interventions have produced positive outcomes, as with the UK's Behavioural Insights Team, surely there should be informed consent? Even better, of course, would be an embedded role for the people involved in its decision-making — helping offset concerns created by the fact that it's now a private company.
As Erica R.H. Fuchs noted in a research paper about DARPA, institutions that develop technology require embedded network governance, meaning that the network should have a built-in role in making decisions. According to Fuchs, that means more than just technology — it includes bridging the policy, business, and legal implications of technology. That, as Stanford's Lucy Bernholz notes, means not only changing what organizations do; it means changing how they work.
So. How do we build trust in organizations? More importantly, how do we make organizations more trustworthy? I'm a big believer in progress through open, participatory processes. Still, there are big, difficult questions that will require a lot of experimentation to answer. Finding socially conscious ways to experiment will be incredibly important, especially in an intellectual property climate that allows ownership of the ways that we communicate. Technology companies make an incredible number of socially important decisions about power and identity, among many other things. Those are the types of decisions we used to make with public processes — something Mark Zuckerberg completely misses in his defense of Free Basics in India. While a number of national and regional governments are making great strides, we need a better model for people to meaningfully participate in the power and identity decisions made in technology companies.
Like most forms of both collective action and technology, we'll probably end up with more than one answer. One approach to building those public participation spaces may be a new approach to an old structure: the Civic Trust. Trusts are privately created legal agreements that establish systems of management and governance over a particular set of assets for the benefit of a group. Trusts also create a fiduciary duty to a beneficiary group, like the public, who can then hold trustees accountable in court (for a broad overview of trusts, this article from FindLaw is helpful). Civic Trusts are different from traditional trusts in one key way — they build models for participation into the governance of the trust itself.
The use of trusts to protect public resources or companies isn't new. Natural resources are often donated to governments through public trusts, which can set standards around the maintenance and care of the resource. Many trusts are created to ensure the integrity and sustainability of institutions — in his recent farewell to readers, the Guardian's Alan Rusbridger cited how important the Scott Trust has been in helping maintain the newspaper's independence. There is already discussion of public data trusts, explained well by Keith Porcaro. Even venture capitalists form trusts. But, in their present form, trusts — like almost every other incorporation model — centralize decision-making in small groups that can have a big influence on how we realize our values.
That's where Civic Trusts are different. Civic Trusts can embed public participation — network governance — into the way that technology companies and their products evolve. As a number of popular social platforms prove, our communication patterns, attention, and network maps are valuable resources — and how they're governed matters.
Here's how it works: an organization creates a Civic Trust to protect the integrity of its product and community by building structures for public participation in decisions that affect its users. The Civic Trust would create an independent organization that owns the code and data resources created by the company (or individual), using limited, revocable licenses to give for-profits, non-profits, and governments the right to use, adapt, and sell products based on the underlying code. These licenses would give the trustees the ability to audit and investigate whether licensees use public participation and consultation processes on issues that affect civil rights.
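To make the structure concrete, here's a toy sketch of the ownership-and-licensing relationship described above. All names are hypothetical illustrations, not a real legal or technical design: the trust holds the underlying assets, grants limited rights to licensees, and trustees can revoke a license — for example, after a failed audit of a licensee's participation processes.

```python
from dataclasses import dataclass, field

# Toy sketch (hypothetical names throughout): the Civic Trust owns the
# underlying code/data assets outright and issues limited, revocable
# licenses to for-profits, non-profits, and governments.

@dataclass
class License:
    licensee: str        # e.g. a company building on the licensed code
    asset: str           # the code or data resource being licensed
    rights: set          # limited rights, e.g. {"use", "adapt", "sell"}
    revoked: bool = False

@dataclass
class CivicTrust:
    assets: set = field(default_factory=set)      # IP the trust owns
    licenses: list = field(default_factory=list)  # grants it has issued

    def grant(self, licensee, asset, rights):
        # The trust can only license assets it actually owns.
        assert asset in self.assets, "trust can only license what it owns"
        lic = License(licensee, asset, set(rights))
        self.licenses.append(lic)
        return lic

    def revoke(self, lic):
        # Trustees can revoke a license, e.g. after an audit finds the
        # licensee failed to run the required participation processes.
        lic.revoked = True

    def active_licensees(self, asset):
        return [l.licensee for l in self.licenses
                if l.asset == asset and not l.revoked]

trust = CivicTrust(assets={"platform-core"})
lic = trust.grant("ExampleCo", "platform-core", {"use", "adapt", "sell"})
print(trust.active_licensees("platform-core"))  # ['ExampleCo']
trust.revoke(lic)
print(trust.active_licensees("platform-core"))  # []
```

The design choice the sketch highlights is the asymmetry: licensees get operational rights, but ownership — and the power to revoke — stays with the trust, which is what gives the trustees' audits real teeth.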
The trustees would help ensure the public had a meaningful voice in social justice issues like encryption, authentic identity policies, data security defaults, dispute resolution processes, and data licensing. Rather than focusing on the values of the outcomes, Civic Trusts would focus on ensuring the integrity of the embedded decision-making processes.
The biggest concern people express is why companies would give up intellectual property ownership. Apple, Google, and many other technology companies already assign their intellectual property and data to shell companies located in low- or no-tax jurisdictions. They then use licensing to give the necessary rights to their subsidiaries and partners. This minimizes the tax they pay on the things that generate the most income — Apple, for example, holds $181 billion offshore, which would carry a tax burden of $59 billion if repatriated. They're able to run the world's most successful companies using licenses, rather than ownership, to manage core intellectual property.
Civic Trusts take the same approach to owning and licensing intellectual property; they just add a layer of public governance, hardcoding a public advocate into the organizational DNA of the companies and technologies that connect us. Civic Trusts would help us define embedded network governance in the public interest.
My next post will dive into the nitty-gritty of a Civic Trust, including sample contractual provisions and ideas. At its core, though, the Civic Trust is a recognition of the importance of ingraining participation in public interest technologies. Ultimately, we have yet to figure out how to design organizations that can balance the public interest with the expectations and requirements of their funders — as Paul Klein notes, even non-profits struggle.
While I’m sure that there are many who disagree, that disagreement is exactly why we need Civic Trusts. We don’t have to agree, but we need the space to experiment and disagree in public. After all, as long as we, the public, invest in privately owned platforms — with our time, data, money, or anything else of value — we should have a meaningful voice about how they treat us, grow, and change. Otherwise, who would trust them?