“Running Code and Rough Consensus” for AI: Polycentric Governance in the Algorithmic Age

Adam Thierer
Sep 1, 2022 · 14 min read


In my previous essay, I highlighted two general approaches to technological governance that can be used to address challenges associated with artificial intelligence (AI) and the computational sciences more generally: governance “on the ground” (bottom-up, informal “soft law”) versus governance “on the books” (top-down, formal “hard law”). I explained why the more flexible “on the ground” tools and methods are usually superior: they treat innovation as innocent until proven guilty, and these more decentralized approaches respond more quickly to pressing needs, while hard law initiatives are often too slow, clumsy, or costly.

Still, for some critics, only hard law will suffice, and they insist that we must go all-in on broad-based, top-down regulatory schemes for AI. That’s a shame, because the more decentralized governance efforts have a lot going for them and are already doing a lot of good. Indeed, although many AI critics are prone to pithily suggest that “we need to have a conversation” about AI ethics, the reality is that there is no shortage of such conversations already happening today.

The “Rise of AI Ethics Everywhere”

We are witnessing “the rise of AI ethics everywhere,” as Stanford University’s Artificial Intelligence Index Report 2022 noted. There has been explosive growth of ethical frameworks and guidelines for AI throughout academia and industry. Others have described an “avalanche of initiatives and policy documents” around AI ethics, which academic researchers are now closely studying in order to analyze and classify the resulting ethical recommendations.

A 2021 report from a team of researchers at Arizona State University identified an astonishing 634 soft law AI programs formulated between 2016 and 2019. Previously, I also highlighted how countless companies and major trade associations have already formulated governance frameworks and ethical guidelines for AI development and use. All of these efforts are aimed at addressing the so-called AI “alignment problem”: bringing algorithmic systems in line with important human needs and values, such as safety, security, privacy, fairness, non-discrimination, and more.

In this essay, I want to discuss the important role that professional bodies and associations play in helping to address the alignment problem. I will also outline the efforts being undertaken by many university-based centers and non-profit organizations, which are all closely studying AI issues and coming up with excellent ethical frameworks to embed “ethics by design” and help developers address the AI alignment issue. But I will also identify how greater coordination among these groups and efforts may be needed in the future if we hope to make more concrete progress.

Translating Ideals into Action Through Standards

One of the best ways to “bake in” ethical principles on a more widespread basis lies in the crucial work done by professional organizations and standards bodies such as the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and UL (previously known as Underwriters Laboratories). Such organizations serve as independent standards-creation bodies and help hold innovators accountable by designing guidelines and best practices through soft-law processes. Industry trade associations, such as the Consumer Technology Association, also develop industry-wide standards for artificial intelligence technologies.

Center for Data Innovation analyst Hodan Omaar explains the importance of this emerging system of standards for modern technologies:

The U.S. approach to standards development for AI follows the general U.S. standards system, which has been exceptionally successful in generating technological innovation in the United States. The U.S. standards system focuses on voluntary consensus standards that are created by private sector standards development organizations in response to particular needs or issues identified by industry stakeholders, government, or consumers.

This effective model continues to evolve rapidly to address various algorithmic issues and risks. UL, for example, has produced many different standards in the area of artificial intelligence, including ANSI/UL 4600, the “Standard for Safety for the Evaluation of Autonomous Products.” Similarly, in the UK, the British Standards Institute published a “Guide to the Ethical Design and Application of Robots and Robotic Systems” in 2016. Developed by a committee of scientists, academics, ethicists, and philosophers, the guide “recognizes that potential ethical hazards arise from the growing number of robots and autonomous systems being used in everyday life” and aims to “eliminate or reduce the risks associated with these ethical hazards to an acceptable level.” Specifically, it sets out protective measures and best practices for the safe design and use of robotic applications in a wide range of fields, from industrial services to personal care to medical services.

The work of the ISO, IEEE, and ACM deserves greater attention because these three organizations have labored to create detailed international standards for AI and ML development. These organizations possess enormous sway in professional circles as almost all the world’s leading technology companies and their employees have some sort of membership in these professional organizations, or at least work closely with them to create international standards in various technology fields:

International Organization for Standardization

The International Organization for Standardization is one of the oldest global standards-making bodies. Founded in 1947, the ISO “is an independent, non-governmental international organization with a membership of 163 national standards bodies” that seeks to build global consensus through multistakeholder efforts. Through this work, the ISO plays an important role in establishing international norms for emerging technologies.

The ISO relies on dozens of technical committees that include global experts drawn from industry, consumer associations, academia, nongovernmental organizations, and governments. It has already played an important role in formulating global best practices for robotics and AI-based applications. In 2014, for example, the ISO crafted requirements and guidelines “for the inherently safe design, protective measures, and information for use of personal care robots.” That standard is just one of dozens of robotics-related standards that ISO has published.

ISO also maintains a suite of standards governing a wide variety of AI applications, including a particularly detailed set of guidelines for AI risk management. ISO has likewise issued guidance standards for information and data security that are relevant to AI systems development.

IEEE

With more than 420,000 members in more than 160 countries, IEEE boasts of being “the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity.” Over the past several years, the IEEE has worked to finalize its massive Ethically Aligned Design project, an effort to craft “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.”

IEEE’s new effort seeks to incorporate into AI design five key principles that involve the protection of human rights, better well-being metrics, designer accountability, systems transparency, and efforts to minimize misuse of these technologies. The second iteration of the group’s report was 263 pages long and contained a suite of standards to satisfy each of those objectives. IEEE also continues to oversee an Organizational Governance of Artificial Intelligence Working Group to formulate standards and best practices for the development or use of artificial intelligence within global organizations.

Association for Computing Machinery

The Association for Computing Machinery developed a Code of Ethics and Professional Conduct in the early 1970s, refined it in the early 1990s, and updated it again in 2018. Each iteration of the ACM Code reflected ongoing technological developments, from the mainframe era to the PC and internet revolution and on through today’s machine learning and AI era.

The latest version of the ACM Code “affirms an obligation of computing professionals, both individually and collectively, to use their skills for the benefit of society, its members, and the environment surrounding them,” and insists that computing professionals “should consider whether the results of their efforts will respect diversity, will be used in socially responsible ways, will meet social needs, and will be broadly accessible.” The Code also stresses that “An essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy. When the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority.”

Other Notable University-Led and NGO Efforts

Many other academic institutions and international organizations play an important “watchdog” role, both by formulating AI ethical development guidelines and by holding private developers to account for the commitments they have made through various soft law frameworks. Some of the more notable efforts include:

  • The Markkula Center for Applied Ethics at Santa Clara University produces “An Ethical Toolkit for Engineering/Design Practice,” with a 7-step process for tech developers to follow when considering how to mitigate risks associated with new products. The Markkula Center also partnered with the World Economic Forum and Deloitte to produce a white paper on how to ensure “Ethics by Design” in technology development and use.
  • To focus on ethical AI in the fintech sector, experts at the Wharton School at the University of Pennsylvania created an Artificial Intelligence/Machine Learning Risk & Security Working Group, “to promote, educate, and advance AI/ML governance for the financial services industry by focusing on risk identification, categorization, and mitigation.”
  • The Partnership on AI began as an industry-led effort formed by Apple, Amazon, Google, Facebook, IBM, and Microsoft, but it has grown to include more than 100 members, including the ACLU and Human Rights Watch. The Partnership is billed as a multistakeholder organization that brings those diverse groups together “to study and formulate best practices on AI, to advance the public’s understanding of AI, and to provide a platform for open collaboration between all those involved in, and affected by, the development and deployment of AI technologies.”
  • OpenAI is a research organization founded as a nonprofit in 2015 with seed money from notable tech innovators and investors like Elon Musk of Tesla, Sam Altman of Y Combinator, venture capitalist Peter Thiel, Reid Hoffman of LinkedIn, and others. OpenAI publishes research reports discussing how to make sure that AI development “is used for the benefit of all, and to avoid enabling uses of AI or (artificial general intelligence) that harm humanity” and to ensure it does not become “a competitive race without time for adequate safety precautions.” OpenAI is also a member of the Partnership on AI.

Many other AI ethics groups and programs are doing important work on these issues, including:

  • AI Now Institute
  • Anthropic
  • Future of Life Institute
  • Future of Humanity Institute
  • Center for Human-Compatible AI at UC Berkeley
  • Centre for the Governance of AI at Oxford
  • Leverhulme Centre for the Future of Intelligence

And this is just the tip of the iceberg. The attention surrounding AI ethics and safety dwarfs that devoted to most other technology policy issues. I doubt that at any point in human history has so much attention been devoted to a technology this early in its lifecycle.

Moreover, contrary to the complaint by some AI critics that we aren’t having enough serious conversations about AI risks, a good argument could be made that we have too many conversations going on about AI currently, and that the biggest problem is coordinating those conversations, not calling for more of them.

Toward Greater Coordination of AI Ethics & Best Practices

While greater coordination of all these AI ethical best practice efforts will be needed going forward, it doesn’t necessarily need to come in the form of heavy-handed, top-down, one-size-fits-all regulatory regimes — either domestically or globally.

Gary Marchant and Wendell Wallach have proposed the formation of what they call governance coordinating committees (GCCs) to potentially solve this problem. GCCs would help coordinate technological governance efforts among governments, industry, civil society organizations, and other interested stakeholders in fast-moving emerging technology sectors, including AI and robotics. Because “no single entity is capable of fully governing any of these multifaceted and rapidly developing fields and the innovative tools and techniques they produce,” they suggest that GCCs could act as a sort of “issue manager” or “orchestra conductor” that would “attempt to harmonize and integrate the various governance approaches that have been implemented or proposed.” They have also called for the formation of an International Congress for the Governance of AI as “a first step in multistakeholder engagement over the challenges arising from these new technological fields.”

Marchant and Wallach are not envisioning this as a formal regulatory body, however. Rather, they are proposing the creation of a global AI quango, or quasi-autonomous nongovernmental organization. Quangos are NGOs that have a more formal role in the governance of a certain field or technology; sometimes governments even delegate to them official tasks or responsibilities that would not usually be carried out by NGOs.

Much of the hard work of AI standard-setting and risk management has already been done by quangos like the ISO, IEEE, and ACM. Their ethical frameworks and standards serve as a baseline for global governance coordination, including the professionalization of the AI auditing and impact assessment process. What Marchant and Wallach are suggesting with GCCs could help provide another mechanism whereby AI governance issues are addressed through ongoing collaboration among various parties, both domestically and globally. We might think of this as an effort to identify the very best of all the best practices out there today — and then get far more serious about the actual process of “embedding” those ethics or safety guidelines into algorithmic processes.

There might be other existing groups that can help facilitate this process. The Global Partnership on Artificial Intelligence (GPAI), launched in 2020, is a broader multi-stakeholder initiative looking to address global AI governance issues in an even more comprehensive fashion. The OECD oversees this effort, which currently includes 25 member states. The goal is to bring together diverse actors and foster international dialogue and cooperation on best practices that the OECD originally laid out in its 2019 “Recommendation on Artificial Intelligence.”

Toward Polycentric Governance

But many other non-governmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns.

We may be able to learn some valuable lessons from the first quarter century of Internet governance. A diverse array of NGOs worked together through ongoing multistakeholder negotiations to address a variety of Internet governance issues. Some of the most important organizations included the Internet Society (ISOC), the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the World Wide Web Consortium (W3C). These groups worked with governments, industry, civil society groups, university centers, and other interested parties to create technical standards for the Internet in an iterative, collaborative fashion. The UN’s Internet Governance Forum (IGF) also works with these organizations to help coordinate governance issues.

Many in the field of Internet governance regularly use a phrase made popular by early Internet engineers to describe how they kept systems operating through “running code and rough consensus.” This notion, which became the unofficial operational motto of the IETF, reflected a pragmatic governance philosophy of continuous iteration and improvement. Perfect agreement on all governance matters was considered impossible, but a rough consensus about operational norms became crucial if systems were to grow more robust and reliable. Equally important were the constant tweaks to those systems and the software that powered them.

In this way, Internet management today embodies a more flexible and “polycentric” style of governance, with many different actors and mechanisms playing a role in ensuring a well-functioning system. As applied to AI governance, Peter Cihon, Matthijs M. Maas, and Luke Kemp have noted that arguments in favor of polycentricity include “the notion that it enables governance initiatives to begin having impacts at diverse scales, and that it enables experimentation with diverse policies and approaches, learning from experience and best practices.” In other words, polycentricity is just another way of conceptualizing the various decentralized governance ideas and soft law mechanisms I have identified in other essays.

With AI systems and applications building on top of Internet infrastructure and protocols, it is likely that this pragmatic governance philosophy of “running code and rough consensus” — and many of the organizations that make it work — will play a continuing role in overseeing some of the AI-related issues going forward. This represents a pragmatic, bottom-up approach to flexible “on the ground” governance. It’s the right path forward for artificial intelligence and the Computational Revolution more generally. And the great news is, it’s already happening all around us today.

___________________


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer