Organisations as an old form of artificial general intelligence
Roland Pihlakas, June 2015 — October 2017
A publicly editable Google Doc with this text is available here, for cases where you want to easily see updates (using the history), ask questions, comment, or add suggestions.
In my view, organisations already are an old form of Artificial General Intelligence. They are relatively autonomous from the humans working inside them. No single person can perceive, fathom, or change much of what goes on inside them. We humans are just cogs in there, human processors for artificially intelligent software. Organisations have a kind of mind and goals of their own: their own laws of survival.
They have some specific goals, initially set by us, but as has been discussed in various places, unfortunately the more specific the goals, the less the utility maximisers will do what we actually intended, and the more unintended side effects there will be.
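The mechanism can be illustrated with a toy model (the options and numbers below are hypothetical, chosen only to make the effect visible): when an agent optimises a measurable indicator instead of the underlying goal, the indicator-maximising choice need not be the goal-maximising choice.

```python
# Toy illustration (hypothetical options and scores) of how optimising
# a specific proxy metric diverges from the goal it was meant to track.

# Each option: (name, true value to the organisation, measured indicator)
options = [
    ("improve the product", 9, 6),
    ("help a colleague",    7, 4),
    ("inflate the metric",  1, 10),  # pure metric-gaming, little real value
]

def pick(options, key_index):
    """Return the option that maximises the given column."""
    return max(options, key=lambda o: o[key_index])

honest_choice = pick(options, 1)  # optimise the true goal
gamed_choice = pick(options, 2)   # optimise the indicator instead

print("optimising the goal picks:     ", honest_choice[0])
print("optimising the indicator picks:", gamed_choice[0])
```

The more precisely the indicator is specified and the harder it is optimised, the more the selection pressure favours options like the third one, whose score comes from gaming rather than from real value.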
These are problems both in AI and in sociology.
The mechanics of the problem are described in my essay about self-deception, side effects, and fundamental limits to computation arising from fundamental limits to attention-like processes.
Another partially related reference is by Eliezer Yudkowsky (Goodhart’s Curse, https://www.facebook.com/yudkowsky/posts/10154693419739228).
See also the Goodhart Taxonomy for a description of this partially related problem.
Since organisations do not have “children”, they have not been subject to evolutionary pressures to acquire “genes” that would make them care about the future and be genuinely synergetic with humans in a sustainable way.
The same will apply to our newer AGI creations: intelligent machines, which will act as a new form of organisation (by providing services we already need, or will come to depend on in the future, etc.). Unfortunately, the situation will then be even more unbalanced than it is now, since in contrast to the old form of organisations, these machines will be even less dependent on humans and less transparent, while also becoming even more powerful and autonomous with the help of new technology.
To counteract all of the above, I would like, first, to help humans become more capable themselves; and second, to promote the kinds of technologies that are genuinely synergetic with humans and have “evolutionary properties”, so that they can be tested over time.
This leads, most importantly, to the proposal of the “Wise Pocket Sage”, and additionally to the ideas of “Reasonable AI”, the idea of implementing a modified version of “The Three Laws of Robotics”, and finally, as a partial solution, the “Homeostasis-based AI”.
Goodhart’s law — Wikipedia
Excerpts: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
and “… when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.”
Policy-based evidence making — Wikipedia
Surrogation — Wikipedia
Unintended consequences — Wikipedia