The Ethical Regulator: Reflections on GSoC 2022

Brian McCorkle
Orthogonal Research and Education Lab
6 min read · Nov 5, 2022
First-order Cybernetic Regulator. SOURCE: Mick Ashby, Wikimedia (CC-BY-4.0).

When I first read Mick Ashby’s work on the Ethical Regulator Theorem, which builds on his grandfather W. Ross Ashby’s Good Regulator Theorem, I was blown away and extremely excited. I still am.

The idea of creating a system of ethics that is regulated in such a way that it cannot be used unethically is very powerful. It has become one of my favorite things to mention in talks and group discussions because it captures people’s imaginations. But the bigger question is: how could we implement such a system? Assuming we could do so, life with ethical regulation poses its own challenges and opportunities.

As part of Google Summer of Code 2022, I worked with the Orthogonal Research and Education Laboratory to create original computational models using the Ethical Regulator as a starting point. Together with two other contributors, S. Hussain Ather and Himanshu Chougule, I formed a working group to develop rival approaches to the simulation of regulated group behavior. Hussain pursued a model of the Big Five Cybernetic Personality Theory in relation to group dynamics, Himanshu developed a Multi-Agent Reinforcement Learning (MARL) algorithm for open-source communities, and I created several computational models of collective cognition based on Active Inference principles.

Yet we all started with an eye towards the Ethical Regulator. In our working group, we had many excellent conversations with mentors Bradly Alicea and Jesse Parent about the future of ethics in Artificial Intelligence and Open-source software communities. To me, it quickly became clear that, technical issues aside, the first step in implementing an Ethical Regulator would be to clearly and unambiguously define the goals and values of the organization. Not just in a static sense, but continuously: an Ethical Regulator must be continuously maintained and updated with new information if it is to function effectively.

This social aspect of the Ethical Regulator initially inspired me to propose interviews with members of various open-source communities about their ethics and approaches to community sustainability. While this is now a longer-term goal, there is a larger problem related to the human-assistant aspect of any Ethical Regulator. In my estimation, it is highly likely that no two organizations operate in the same context or under the same group dynamics. It is neither feasible nor sustainable to construct an objectively perfect Ethical Regulator without accounting for this continuously changing social collective.

Another problem with implementing an Ethical Regulator is representing an ethical framework within a computational model. Explainability is key, and as the introductory graphic shows, ethics are defined by singular but broad concepts. It depends on your model, of course, but only by thoroughly defining each parameter and its values can you begin to accomplish this bizarre and necessary task. We could interview every human on the planet, find the ethical commonalities, and produce a computational representation of the regulator. However, this representation would change in the course of a day; tomorrow would offer a completely different environment.

Even if we could fully implement an Ethical Regulator with a human assistant, encoding the goals and values of the organization perfectly, we would still run into the problem of understanding: is the computational model fully aware of its human companion’s tasks, future needs, and emotional state? Even if the Regulator functioned as described, regulating the system it is assigned to and integrated within, this lack of awareness would pose a problem.

An example of this is summarized in what I’ll call the “agonist problem”, which borrows from a term I first encountered in the context of Chantal Mouffe’s political theory of Agonistic Pluralism. Consider the following:

Say you are a cheese company. You’ve heard about the Ethical Regulator and would like to implement one that ensures your cheese will not be used unethically. Your R&D department is top notch, and they make one for you. The Regulator is a RoboCop-style robot. Why? Well, the biggest use of cheese that goes against the values and goals of the company happens to be using cheese to poison people, either by wrapping poison in cheese or by applying poison to cheese.

The poison problem demands constant vigilance if the Regulator is to perfectly ensure that the company’s cheese remains ethical, and so each cheese is watched by a regulator, updated constantly with the goals and values of the company, which include a “no murder with our cheese” value. But since the robot has no intuitive sense of morality, the entirety of its ethical behavior must be maintained through a closed-loop feedback system.
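The closed-loop idea above can be sketched in a few lines of code. This is a toy illustration, not part of Ashby’s formalism: the class name, value table, and actions are all invented for the example. The point it makes is that the robot’s entire “ethics” lives in an externally supplied table, so anything not covered by the table, or not refreshed by the feedback loop, is simply unknown to it.

```python
class CheeseRegulator:
    """Toy regulator with no intrinsic morality, only a value table."""

    def __init__(self, values):
        # The regulator's entire "ethics" is this externally supplied table.
        self.values = dict(values)

    def update_values(self, new_values):
        # Closed-loop maintenance: without this feedback, behavior drifts.
        self.values.update(new_values)

    def evaluate(self, observed_action):
        # Compare an observation against the current value table.
        # There is no intuition to fall back on for unlisted actions.
        return self.values.get(observed_action, "unknown")


regulator = CheeseRegulator({"eat cheese": "permitted",
                             "poison cheese": "forbidden"})
print(regulator.evaluate("poison cheese"))  # forbidden
print(regulator.evaluate("grate cheese"))   # unknown

# Only the company can originate an update; the regulator itself cannot.
regulator.update_values({"grate cheese": "permitted"})
print(regulator.evaluate("grate cheese"))   # permitted
```

The fragility is visible even in this sketch: the regulator’s verdicts are only as current as the last `update_values` call, which is exactly why the thought experiment demands constant monitoring from the company.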

Now suppose that I go to the supermarket and purchase some cheese. A regulator follows me home, and when I get home my home regulator is also there watching me. Every house has its own regulator to ensure the home will not be used unethically with respect to the goals and values of the people who live there. The ethics (and regulator’s model of those ethics) are continuously updated, ensuring full compliance and/or enforcement.

The Terminator, but with cheese.

The cheese regulator approaches the house, intending to keep an eye on the cheese to make sure no murders are committed. Since the regulators are constantly updated, they are also constantly monitored by the company to make sure they are working properly. This amounts to an invasion of privacy, which conflicts with the home regulator’s directive to protect the privacy of the home’s inhabitants.

The two regulators square off to resolve this paradox. How will this be resolved? Should the humans step in?

This is the essence of the “agonist problem”: neither regulator is antagonizing the other. In fact, each is acting according to the interests of its designers. In biochemistry, an agonist is a molecule that binds to a receptor and activates it, often competing with other molecules for the same binding site. Likewise, our regulators find themselves at odds, but not through any malice, ill will, or even true competition. Their conflict stems from an unfortunate clash of well-meaning and well-defined imperatives.

How do we address this problem? If AI is to be the ultimate solution to ensuring ethical behavior, how far should we go? The ongoing Effective Altruism (EA) debates are very revealing on this issue. The creepy feeling of fascism, of the need for a strict top-down structure that regulates or governs the whole system, is difficult to escape when discussing these issues. The “agonist problem” could possibly be solved by decentralized strategies in which goals and values are shared in some greater regulation space designed to mediate between regulators.
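One minimal sketch of such a shared regulation space, under assumptions of my own (the priority numbers, directive names, and escalation rule are all invented for illustration): each regulator publishes its directive to a mediation function that ranks them by a jointly agreed priority, and genuine ties are surfaced to humans rather than resolved silently.

```python
def mediate(directives):
    """Resolve conflicting directives in a shared regulation space.

    Each directive is a tuple (source, action, priority); higher
    priority wins. Ties are escalated to humans instead of being
    decided silently by either regulator.
    """
    ranked = sorted(directives, key=lambda d: d[2], reverse=True)
    top = [d for d in ranked if d[2] == ranked[0][2]]
    if len(top) > 1:
        return ("escalate to humans", top)
    return ("enforce", top[0])


# The cheese/home standoff, with an invented priority table in which
# household privacy outranks product monitoring.
conflict = [
    ("cheese regulator", "monitor cheese inside home", 2),
    ("home regulator", "protect household privacy", 3),
]
decision, detail = mediate(conflict)
print(decision, detail)
```

The design choice worth noticing is the escalation branch: a decentralized scheme does not have to pick a winner at all costs, and handing ties back to people is one answer to the question “should the humans step in?”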

Regardless of how we choose to solve this issue in the general case, where cheese companies and private homes are involved, the fact is that this thought experiment has nothing to do with reality at the moment. When I think of Ethical Regulators, the most practical application is to social networks, continuously provided virtual services, and virtual communities.

Since these organizations are mediated by software, a computationally defined regulator could operate relatively unobtrusively. It’s likely that the moderation problems currently being debated on Twitter could be addressed by such a regulator, if it were well defined and well maintained.

Yet when we look at open-source development communities, it is not so clear that an Ethical Regulator would serve them well. At the very least, such an implementation requires design, definition, and ongoing maintenance. Moreover, it’s likely that simply taking the time to regularly define an organization’s goals and values, even in small ways, would benefit the organization’s sustainability and culture enough to make a formal regulator unnecessary.

--


Composer, programmer, performer, researcher, and dabbler in almost every thing.