The Orthogonality Thesis Is Not Relevant

Peter Voss
2 min read · Jan 24, 2017


The Orthogonality Thesis (OT) was raised as an objection to my essay ‘Improved Intelligence Yields Improved Morality’. This is a brief reply.

The Orthogonality Thesis: “The idea that the final goals and intelligence levels of artificial agents are independent of each other.”

Firstly, I want to clarify that my comments apply specifically to general intelligence. I don’t believe that one can place narrow (artificial) intelligence on an intelligence scale; therefore OT can really only apply to AGI.

Here are two lines of reasoning to indicate that OT will not be a meaningful factor in real world applications of AGI:

  1. Systems with high levels of real intelligence will by design have to be generally intelligent; otherwise they will not be very useful, and will thus not get funded. Truly intelligent systems will need a wide range of general knowledge and common sense. They will not have single, narrow ‘tunnel-vision’ goals, but will inherently be competent across a wide spectrum of tasks, such as properly understanding the instructions they are given, gathering sufficient information to provide good advice, and applying broad, sound reasoning to their goals. All complex goals require a very similar set of base cognitive skills; goals and intelligence will therefore, to a large degree, automatically overlap and be aligned.
  2. Inherent and inseparable characteristics of AGI, including those that foster ethics, will automatically restrict the kinds of goals that humans will ask an AGI to pursue. Thus an AGI’s intelligence will help identify goals that are by their nature counterproductive to human purposes.

Orthogonality is undermined both by the large range of common (sub-)goals that AGI requires, and by the fact that AGIs will inherently help to narrow down which goals are worthwhile.
