Comments on the Rise of the Strategy Machines
Andy Singleton

Brad Power asks: “How would you distinguish that part of the cognitive layer that’s being pointed at strategic decisions? I imagine there are non-strategic things that the cognitive layer is doing, like observing each web service to tell when it’s out of range, or looking for patterns in customer behaviors.”

My response:

Separating the details from the long-term plan is an anthropomorphic way to think about things. We have to do this because humans, when you count up the bits, process a very small amount of data compared with modern computer systems. We make up for it with extremely effective biases about what is important, and we instinctively assemble what we perceive into objects and models. It’s possible that computers will also use this strategy. However, in my experience computers see completely different objects and models than humans do, and it seems more likely that they will have a more continuous view of the spectrum from details and the short term to models and the long term.

Nick Bostrom, in Superintelligence, speculates on the existence of a “strategy superpower”. This would allow an AI to predict and manipulate long-term consequences to great effect. One very interesting feature of a strategy superpower is that you would not know what it is doing. It is also manipulating you. By definition it is smarter than you, since you have the normal power and not the superpower. Any explanation it gave for its actions would likely be designed to motivate you to advance its long-term goal (which you programmed in some primitive way), rather than to be truly illuminating. To me, this sounds like what normal-powered employees do when they pitch strategy. But the effect is exaggerated when you don’t know how the machine is thinking, and it will get more frustrating when you find that the machine is usually right. We’ll see more stories with titles like “How This Hedge Fund Robot Outsmarted Its Human Master”.

Bostrom notes that an AI with a strategy superpower would be inclined to try to become a “singleton” AI that could block any other agents from interfering with its goals. It would succeed if there were some sort of positive feedback loop in which being good at strategy lets it acquire more resources, and those resources make it better at strategy, and so on. Obviously this is the origin of Terminator’s Skynet and many other science fiction plots. But it is also a realistic way to think about industrial strategy in the world of big data.
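To make that feedback loop concrete, here is a minimal sketch in Python. The numbers (a fixed market of 100 units, a learn_rate of 0.01, a 5% starting edge in skill) are made-up parameters for illustration, not a model of any real market. Two competitors split a fixed pool of resources in proportion to their strategic skill, then reinvest their winnings in more skill.

```python
# Toy simulation of the strategic feedback loop: the agent that is better
# at strategy captures more of a fixed pool of resources, reinvests them,
# and gets even better at strategy. All numbers are illustrative
# assumptions, not estimates of any real market.

def simulate(steps=20, market=100.0, learn_rate=0.01):
    skill = [1.00, 1.05]  # two competitors; one starts with a 5% edge
    for step in range(1, steps + 1):
        total = sum(skill)
        # Resources (revenue, data, attention) divide in proportion to skill.
        resources = [market * s / total for s in skill]
        # Each competitor reinvests its resources to improve its strategy,
        # so this round's winnings buy next round's advantage.
        skill = [s * (1 + learn_rate * r) for s, r in zip(skill, resources)]
        print(f"round {step:2d}: leader's share of market = "
              f"{resources[1] / market:.1%}")

simulate()
```

In this toy version, a 5% head start compounds into nearly the whole market within twenty rounds. That is the singleton dynamic in miniature: no conspiracy required, just reinvestment of a small strategic advantage.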

Applied to an industry, a “singleton” is a monopoly. Where would an industry-monopoly singleton come from? Bostrom notes a number of different paths to AI superpowers beyond the big-data-crunching computer. For example, it’s possible to make humans smarter over several generations. Or you can seek “collective superintelligence”, where a whole system of agents develops something like a strategy superpower. That is what Google, Facebook, and Amazon will look like once they get all of their data rolling into longer-term strategic planning. In the past there were basically two sources of monopoly: political power and network effects. We are about to see the emergence of a third source: strategic capability.
