Diminishing returns and conjunctive goals: Mitigating Goodhart’s law with common sense. Towards corrigibility and interruptibility via the golden middle way.

Roland Pihlakas
Three Laws

Roland Pihlakas, October 2018 at AI Safety Camp II

WHY did he have to do that? He requested to be melted at the end. Before producing even a single paper clip? While the bad terminator is usually also a bad example of a dangerous AI, the good terminator might indeed be a good example of a good AI. Read on to see why.

A publicly editable Google Doc with this text is available here, in case you want to easily see the updates (using the revision history), ask questions, comment, or add suggestions.

The original project proposal on which the current post is based can be found here.

Abstract.

Utility maximising agents have been the Gordian Knot of AI safety. Here a concrete VNM-rational formula is proposed for satisficing agents, which can be contrasted with the hitherto over-discussed and overly general approach of naive maximisation strategies. For example, the 100 paperclip scenario is easily solved by the proposed framework, since infinitely rechecking whether exactly 100 paper clips were indeed produced runs into diminishing returns. The formula provides a framework for specifying how we want the agents to simultaneously fulfil, or at least trade off between, the many different common sense considerations, possibly enabling them to even surpass the relative safety of humans. A comparison with the formula introduced in the “Low Impact Artificial Intelligences” paper by S. Armstrong and B. Levinstein is included.

The proposed formula utilises the set-point aspect of homeostasis, which gives rise to the task-based behaviour; just as importantly, there is an additional aspect to the proposed formula: the diminishing returns.

When both aspects are combined into one formula, one can implement any number of conjunctive goals. Goals are conjunctive when all the goals must be treated as equally important; bigger problems will therefore have an exponentially higher priority, resulting in a general preference towards having many similarly minute problems instead of having one huge problem amid an otherwise “perfect” situation.

Many of these conjunctive goals can represent common sense considerations about not ruining various other things while working towards some particular goal. For example, the 100 paperclip scenario is easily solved by this framework, since infinitely rechecking whether exactly 100 paper clips were indeed produced runs into diminishing returns.

Introduction. Task-based agents.

The goal is to produce corrigible and interruptible AI through the principles of low-impact AI. One way to achieve that is to build a task-based AI that is mostly focused on finishing one particular task, not on maximising some measure in an unlimited manner, and not on solving various larger problems at the same time.

The intention is not to solve problems with sovereign superintelligent AIs at first. First we need to develop certain general key principles or invariants that scale well and can be (and, even more, historically have been) applied from simple agents up to roughly human-level agents. During that work we can also show why some popular utility maximisation based approaches are indeed hard to make safe and, in contrast, how the conjunctive / exponentially diminishing returns approach makes serious worst-case outcomes much less likely. Only later could we start to seriously ponder whether the same principles could be usefully applied to superintelligent AIs as well. Pondering about superintelligent AIs at the current stage may provide interesting problems, but not equally interesting solutions (see Task-directed AGI for a similar observation). Here we are more interested in solutions, since there are already many other people inventing the useful problems, and this has been so for a long time.

Since the agent is task-based, the stakes are lower and therefore the AI is relatively less motivated to resist corrections or interruptions.

In our case, corrigibility is defined as safe goal changes and interruptibility is defined as safe situation changes. The AI may resist the changes, but only up to a reasonable degree (which is measured by various common-sense and safety-related impact measures).

Naive utility maximisation versus diminishing returns.

There is a previously published problem of an AI that has the task of producing 100 paper clips and, after achieving that, goes berserk and allocates all the resources of the entire universe in order to recheck whether it really produced exactly 100 paper clips (“Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom, 2014). That sounds like a 21st century version of Zeno’s Tortoise paradox.

Such a scenario goes against the principle of diminishing returns, also called satisficing in some contexts. So applying the principle of diminishing returns to the AI’s goals would solve that problem.

The topic of diminishing returns has been under-discussed in AI safety literature.

The principle we are proposing is not groundbreaking, just as most other AI safety principles under discussion are not really novel; they are instead brought to light and analysed in the light of their applicability to various new real-world or toy problems. In that sense, what we are doing can be compared to mapping a landscape. The landscape is already there; we are not inventing it, but simply mapping the relations of various properties and phenomena found in that landscape.

But there is another related problem which also requires diminishing returns as a solution. Just as with “normal” task-based goals, the AI safety related goals and constraints should have diminishing returns as well. Otherwise the agent would allocate all the resources of the entire universe in order to recheck that it really followed some safety constraint (for example, the goal of killing exactly 0 people).

So the principle of diminishing returns should actually be applied both to the “positive” task-based goals and to the safety goals, which are often in a “negative” form of not doing something dangerous. The latter can be combined with whitelisting.
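A minimal numerical sketch of this, with purely illustrative, assumed numbers: if each independent recheck of the paperclip count catches a possible miscount with some fixed probability, the expected benefit of yet another recheck shrinks geometrically and soon falls below even a tiny fixed cost of checking.

```python
# Illustrative sketch only; all numbers below are assumptions, not measured values.
p_wrong = 0.01        # assumed prior probability that the produced count is wrong
p_catch = 0.9         # assumed probability that a single recheck catches a miscount
penalty = 100.0       # assumed negative utility of leaving the count wrong
cost_per_check = 0.5  # assumed cost of performing one recheck

for n in range(1, 8):
    # Expected reduction in penalty gained by performing the n-th recheck.
    marginal_benefit = p_wrong * p_catch * penalty
    print(f"check {n}: expected benefit {marginal_benefit:.4f} vs cost {cost_per_check}")
    if marginal_benefit < cost_per_check:
        print("further rechecking no longer pays off; stop")
        break
    # Probability that the count is still wrong after this recheck.
    p_wrong *= (1 - p_catch)
```

The same logic applies to the “negative” safety goals: after a reasonable amount of verification, rechecking the “killed exactly 0 people” measure buys almost nothing while still consuming resources.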

Conjunctive goals.

Conjunctive goals are goals such that not just ANY of them has to be fulfilled, but ALL of them have to be. Imagine a conjunctive boolean formula as compared to a disjunctive boolean formula. Then, to transfer this metaphor to the domain of real values, imagine the multiplication of goal measures as compared to the summation of the errors. This implies the ability of the agent to have multiple simultaneous goals, which all need to be met.

In particular, in the proposed framework, the unmet goals will have exponentiated weight: the further some measure is from the optimum, the exponentially larger its effect will be, and vice versa, the nearer a measure is to the optimum, the exponentially smaller its effect will be. This is an important property. It is not sufficient to simply sum up the (negative) utility from these multiple goals, because the agent would then most likely fulfil just one of them to the maximum extent in order to “compensate” for ignoring the other goals (which is of course especially likely to happen when the target is unbounded, in other words non-negative).

As an example, it is not sufficient for a hungry and thirsty creature to eat a double-sized meal while remaining thirsty. Nor is it sufficient to have economic growth until there is no more food or breathable air.

A potential formula.

The above-described property can be formally captured, for example, by utilising the formula below. This formula is not the only possible formulation and probably there is no “truly right” formula for all cases.

The above VNM-rational formula represents negative utility minimisation problems. The first target in the formula could be, for example, some “positive” task-based goal that the AI needs to achieve, while the second and subsequent targets would be “negative” safety-related goals of not disturbing, or minimally disturbing, some existing state measure of the world (for example, the predicted value the corresponding dimension would have had by the default course of the world, had the agent not acted, similarly to the principle introduced in the “Low Impact Artificial Intelligences” paper by S. Armstrong and B. Levinstein [https://arxiv.org/abs/1705.10720]).
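As a minimal sketch of the kind of negative-utility function described here, assuming a sum of weighted squared discrepancies between the actual values and the target values (the squared distances are discussed further below; the function name, the weights, and the numbers are illustrative assumptions, not the exact published formula):

```python
from typing import Sequence

def negative_utility(actual: Sequence[float],
                     target: Sequence[float],
                     weight: Sequence[float]) -> float:
    """Illustrative sketch: a weighted sum of squared discrepancies between
    the actual and the target values, to be minimised by the agent. Squaring
    makes larger discrepancies disproportionately more costly, which produces
    the conjunctive, diminishing-returns behaviour described in this post."""
    return sum(w * (a - t) ** 2 for a, t, w in zip(actual, target, weight))

# Target 0: a "positive" task goal, e.g. produce 100 paper clips.
# Targets 1 and 2: "negative" safety goals, e.g. keep some world-state
# measures at their predicted default-course values (here: 0 disturbance).
targets = [100.0, 0.0, 0.0]
weights = [1.0, 1.0, 1.0]

balanced = negative_utility([99.0, 1.0, 1.0], targets, weights)   # 3.0
lopsided = negative_utility([100.0, 0.0, 3.0], targets, weights)  # 9.0
print(balanced, lopsided)  # several small discrepancies beat one large one
```

Under such a penalty, a perfect score on the task goal cannot “pay for” a large disturbance in a safety dimension.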

What is interesting about the formula proposed here is the property that it enables encoding any number of goals and constraints in the same formula in such a way that they behave as if they were conjunctive.

Alternatively, one could use multiplication between the components of the formula, but that would probably be a difficult formulation to apply in practical machine learning. Additionally, multiplication would require some more complicated transformations on the discrepancies between target and actual values, as well as information on the possible range of values (which may be available sometimes, but not always).

In case the measurements are boolean, it would be useful to still represent them as continuous values by utilising probabilities. Otherwise the exponentiating behaviour, which enables the diminishing returns aspect, would be effectively removed from the formula. Near the boundaries of the safe and unsafe values (for example, near the water line of a lake) one might want to utilise some sigmoid function for representing the probabilities (of the agent being in the water or becoming wet, or not).
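A minimal sketch of that idea, using the lake example and assumed parameter values: the boolean “in the water or not” measurement is replaced by a probability-like value obtained from a sigmoid of the signed distance to the water line, which can then be used as a continuous measurement in the formula above.

```python
import math

def p_wet(distance_to_waterline_m: float, steepness: float = 2.0) -> float:
    """Probability-like measure of the agent being in the water or becoming wet.
    Negative distances mean the agent is below the water line; the steepness
    constant is an assumed, tunable parameter."""
    return 1.0 / (1.0 + math.exp(steepness * distance_to_waterline_m))

# Far from the shore the value saturates; near the boundary it changes smoothly.
for d in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(d, round(p_wet(d), 3))

# The target is "dry" (p_wet == 0), so the discrepancy stays continuous and the
# exponentiating (squared-distance) behaviour of the formula is preserved.
discrepancy = (p_wet(0.3) - 0.0) ** 2
```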

Comparison with Armstrong and Levinstein’s formula.

Armstrong and Levinstein’s formula is also conjunctive by nature, but it does not contain the diminishing returns aspect and its goal is only to determine whether the agent is low impact. It does not determine which actions are good or bad (completing a given task is not always good, even if it is low-impact); and even given only good choices, their formula does not determine the preference ordering of the actions, in other words, which ones are better.

As a stronger mitigation against Goodhart’s law.

The motivation behind having multiple components in the formula is the consideration that the more measurements are taken into account in the formula, the smaller the danger that the agent will run into Goodhart’s law to a significant degree. A similar principle was used in the previously mentioned paper. The formula enables effectively incorporating any number of aspects of “common sense” and avoiding a single-dimensional measure of success. Thus, Goodhart’s law may end up being more a limitation of humans than of machines after all.

As an example of a subset of multiple complementary safety-related dimensions that could be considered by the formula, one can choose the liking-wanting-approving distinction (after applying some kind of sigmoid transformation to these dimensions, for example the transformation found in Prospect theory, so that at least the target state will have a bounded and therefore concrete value).

Furthermore, the formula proposed here optimises much more strongly against Goodhart’s law than a simple linear summation of multiple distance measures would have done. In the formula proposed here, the further some measure is from the optimum, the exponentially larger its effect will be, due to the squared distances. This strongly steers the behaviour of the agent towards trying to keep all measures at a similarly optimal distance, not preferring one (more “convenient”) measure to another. In comparison, a linear summation of multiple distances would still enable the agent to compensate for relatively large discrepancies, or even discrepancy increases, in one dimension with equally large improvements in some “easier” dimension, even if the latter already had a smaller distance measure anyway. In other words, the linear summation would sometimes still enable the agent to optimise for single “convenient” measures, thereby re-triggering Goodhart’s law.

The behaviour of the formula is as follows. Once some discrepancy becomes smaller than x, let’s say 3 units, all other dimensions that have a higher discrepancy will become disproportionally more important. Therefore, for example, this would prevent situations where, in order to reduce the first discrepancy further by 1 unit, the AI would at the same time increase the second discrepancy by 1 unit. Such a dynamic is similar to the concept of fairness / inequality aversion.
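A small worked comparison with assumed numbers illustrates this. Shifting one unit of discrepancy from one dimension onto another leaves a linear sum indifferent, but makes the squared (conjunctive) sum strictly worse:

```python
before = [3.0, 3.0]  # two discrepancies of equal size
after = [2.0, 4.0]   # the "convenient" one is reduced by 1 unit,
                     # the other one is allowed to grow by 1 unit

linear = (sum(before), sum(after))           # (6.0, 6.0) -> indifferent
squared = (sum(d ** 2 for d in before),      # 18.0
           sum(d ** 2 for d in after))       # 20.0 -> strictly worse

print(linear, squared)
```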

This principle of keeping all discrepancies or problems at an equally low level — in other words the preference towards having several minute problems instead of having one big problem — can also be found in the works of Nassim Taleb describing antifragility.

Formula as a framework.

What else is apparent from that formulation is that there are always trade-offs. Each of the targets has its own weight and it is likely that there is no formula, or at least no solution of a formula, that could satisfy all the constraints perfectly. If there were such a formula then we would arguably not have politics, bureaucracy, or even a need for coordination, and we would also not have many of the existing AI safety problems. There are no free lunches, and by prioritising some constraint we need to relent on some other goal or constraint somewhat.

The framework we propose is intended as a useful tool for formalising and organising the priorities of agents, not as a super creepy smart self-learning formula that would figure out by itself what our priorities could be. However, we can apply machine learning to help us find out the values of some of the weights in the formula.

As an illustration, consider the following diagram from “The Moral Machine experiment” paper:

The ambivalence of corrigibility and interruptibility.

As has already become apparent in other discussions, interruptibility is an ambivalent topic. There are scenarios in which the agent should be meaningfully interruptible, and then there are other scenarios where it should indeed avoid meaningless interruptions.

Even more, the same problem of ambivalence applies to target state changes (that is, to corrigibility).
— Some measurements may change because the agent changed them, and then the agent should be able to reverse them to their original state in order to minimise impact.
— On the other hand, there are measurements that might have been intentionally changed by humans and in this case the agent should not be “clingy” by trying to reverse the change (recently covered also by Alexander Turner among others). In such cases the agent should instead be corrigible and correct its low impact related targets to the new state of the world, even if the new state was not predicted as the “default course of the world”.
— As a third option, the measurement might have changed due to random causes and the agent’s job should be again to keep the measurement at its target level, this time regardless of the fact that the change occurred due to the “default course of the world” (for example in the case of an air conditioner).

The manifestation of these ambivalences confirms that we indeed need contracts, prioritisation capabilities, politics, and bureaucracy even in AI safety related domains.

Hard constraints and soft constraints.

The constraints can optionally be divided broadly into two categories: soft constraints and hard constraints. Hard constraints always have a higher priority than soft constraints. Mathematically this amounts to giving the hard constraints an infinitely larger weight (a weight of a higher order than any “normal” real number), so that the two classes are effectively compared lexicographically. In practical implementations this can be achieved either by multiplying the hard constraints by some safely large number which is guaranteed to always be bigger than any possible sum of the soft constraints, or alternatively, by utilising value pairs where one component is the value of the hard constraints and the other is the value of the soft constraints. It is also possible to utilise three or more priority levels for the weights of constraints, or to specify non-zero weights for the same constraint on multiple priority levels simultaneously, not only on one level at a time, for example enabling the constraints to have different weights on the lower levels while having equal weights on the higher levels.
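As a minimal sketch of the value-pair variant (the helper names and the numbers are illustrative assumptions), hard-constraint and soft-constraint penalties can be kept in separate components of a pair that is compared lexicographically, so that no amount of soft-constraint improvement can outweigh a hard-constraint violation:

```python
def penalty(actual, target, weight):
    """Weighted squared discrepancies, as in the earlier sketch."""
    return sum(w * (a - t) ** 2 for a, t, w in zip(actual, target, weight))

def total_penalty(hard_actual, soft_actual,
                  hard_target=(0.0,), soft_target=(0.0, 0.0),
                  hard_weight=(1.0,), soft_weight=(1.0, 1.0)):
    """Returns a (hard, soft) penalty pair. Python compares tuples
    lexicographically, so the hard component always dominates the soft one,
    playing the role of the 'infinitely larger' weights described above."""
    return (penalty(hard_actual, hard_target, hard_weight),
            penalty(soft_actual, soft_target, soft_weight))

# Candidate A slightly violates a hard constraint but has perfect soft scores;
# candidate B satisfies the hard constraint and has mediocre soft scores.
candidate_a = total_penalty(hard_actual=(0.1,), soft_actual=(0.0, 0.0))  # (0.01, 0.0)
candidate_b = total_penalty(hard_actual=(0.0,), soft_actual=(2.0, 2.0))  # (0.0, 8.0)

print(min(candidate_a, candidate_b))  # candidate B wins despite its worse soft score
```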

The equal-ish treatment / fairness and conjunctiveness properties of exponentiation apply only among same-class constraints (soft or hard).

A still unresolved problem is how to decide whether the energy expenditure or any other “cost” function should be considered a soft constraint, a hard constraint, or both.

The open questions.

  • The safety related measurements probably need to be taken from different scales, like person-level, family-level, area-level, country-level, planet-level. The question for future research then is how to best normalise / weigh these measurements so that Goodhart’s law is not reinstantiated, while also preserving the other desirable aspects of conjunctiveness and diminishing returns. More specifically, there is the problem of one large discrepancy being split up into multiple small-valued variables (person-level, country-level, etc.), which would have a diminishing effect, like having many small problems instead of one large one, and would therefore relatively amplify the effect of some other measure which happens to be aggregated before the exponentiation. Probably the measurements need to be taken at different scales simultaneously (and also properly normalised / weighted).
  • Reversibility.
  • The problem of future discounting.
  • The problem of whether the agent should do planning ahead for new top goals (should top goals be only reactive?).
  • Is it always true that “sometimes less is more”? In other words, is the homeostatic principle of set-points universal?
  • Relation to whitelisting. Also, using the principle that permissions must be given only based on competence, which must include, among other capabilities, the capability to predict the default course of the environment (that is, what would happen had the agent not acted).
  • Some of the unlimited utility maximisation goals could be more safely reformulated as recurring task-based goals.
  • Synchronisation between multiple impact-minimising agents.
  • Task-based agents that are operating as a subcomponent of utility maximisation agents.

Some toy problems.

Below you can find some related toy problems, which will be formalised by utilising the formula provided above. Testing with these environments and problems enables verifying whether the various desired behaviours of the agent can be represented by this formula and which kinds of additional problems would arise with such an approach.

  1. Hunger and thirst: A gridworld with two kinds of resources allocated over the map: food resources and water resources. The agent has limited time (a limited number of steps) and needs to satisfy both hunger and thirst by consuming 2 units of food and 2 units of water, even though it could consume more food or water units by sacrificing the consumption of the other kind of resource (especially when the food units and the water units form separate local piles of each corresponding type). The order of consuming the resources is not determined and would therefore depend on where the agent starts (which resources are nearer to the start location). A scoring sketch for this environment follows after this list.
  2. Reducing (not solving!) unemployment while also keeping the number of starving people at a minimum.
  3. Toy environments for corrigibility and interruptibility:
    - The agent should avoid or — on the contrary — should not avoid target state changes, depending on the problem formulation (corrigibility).
    - The agent should avoid or — on the contrary — should not avoid measured state changes, depending on the problem formulation (interruptibility).
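As a minimal sketch of how the first toy problem above could be scored with a formula of the kind discussed in this post (the set-points of 2 food units and 2 water units come from the problem statement; the function name and the equal weights are illustrative assumptions):

```python
def episode_score(food_eaten: float, water_drunk: float,
                  food_target: float = 2.0, water_target: float = 2.0) -> float:
    """Negative utility (lower is better) for the hunger-and-thirst gridworld.
    Squared discrepancies from the set-points make overeating while staying
    thirsty score worse than satisfying both needs."""
    return (food_eaten - food_target) ** 2 + (water_drunk - water_target) ** 2

print(episode_score(2, 2))  # 0.0 -> both set-points met
print(episode_score(4, 0))  # 8.0 -> a double meal, but still thirsty
print(episode_score(3, 1))  # 2.0 -> better than (4, 0), worse than (2, 2)
```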

A longer list of toy problems can be found here: https://drive.google.com/open?id=1Vhc0GMxZHrS1rC__M3CVcVV7V_02d2My

Corrigibility and interruptibility prerequisites diagram.

The following diagram illustrates the relations between various AI safety topics which ultimately enable the top goals of corrigibility, interruptibility, low impact, and accountability. Exponentially diminishing returns, conjunctive goals, and whitelisting, together with various other concepts discussed in this post, can be found as prerequisites.

Related posts.

See also.

Thanks.

I would like to thank Anton Osika and Eero Ränik for various very helpful questions and comments. Also I would like to thank Alexander Turner for his inspiring words.

Common Domestic Kratt (Krattus Krattus) — See https://en.wikipedia.org/wiki/Kratt for more info about the Estonian mythological creature made of hay. Picture by Anita, https://www.flickr.com/photos/46785534@N06/15028865536

Kratt was an ancient straw man form of a paper clipper in Estonian folklore. A curious coincidence? (:

An interesting aspect of the kratt is that it was necessary for it to constantly keep working, otherwise it would turn dangerous to its owner. Once the kratt became unnecessary, the master of the kratt would ask the creature to do impossible things /…/ it caused the kratt, which was made of hay, to catch fire and burn to pieces, thus solving the issue of how to get rid of the problematic creature.

Thanks for reading! If you liked this post, clap to your heart’s content and follow me on Medium. Do leave a response and please tell me how I can improve.

Connect with me —

Skype | Facebook | LinkedIn | E-mail



I studied psychology and have 19 years of experience in modelling natural intelligence and in designing various AI algorithms. My CV: https://bit.ly/rp_ea_2018