Review of Reframing Superintelligence: Comprehensive AI Services As General Intelligence
This post also appears on Goodreads and can be viewed here.
For AI historians and researchers, especially those interested in the far future of AI, this is probably the most significant work published in this space since Nick Bostrom's Superintelligence. Others have summarised the work, so I won't try to duplicate the effort; Rohin Shah in particular has done an excellent job with his summary, which is available to read here: https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as
Here is Rohin Shah’s overview of the CAIS model:
The core idea is to look at the pathway by which we will develop general intelligence, rather than assuming that at some point we will get a superintelligent AGI agent. To predict how AI will progress in the future, we can look at how AI progresses currently — through research and development (R&D) processes. AI researchers consider a problem, define a search space, formulate an objective, and use an optimization technique in order to obtain an AI system, called a service, that performs the task.
A service is an AI system that delivers bounded results for some task using bounded resources in bounded time. Superintelligent language translation would count as a service, even though it requires a very detailed understanding of the world, including engineering, history, science, etc. Episodic RL agents also count as services.
While each of the AI R&D subtasks is currently performed by a human, as AI progresses we should expect that we will automate these tasks as well. At that point, we will have automated R&D, leading to recursive technological improvement. This is not recursive self-improvement, because the improvement comes from R&D services creating improvements in basic AI building blocks, and those improvements feed back into the R&D services. All of this should happen before we get any powerful AGI agents that can do arbitrary general reasoning.
My perspective as a software developer is to see this reframing (CAIS vs AGI) in terms of different teams working in different ways but with similar end goals. For those familiar with Google's AI efforts: 'Google AI' (https://ai.google/) is the brand covering both their AI and computer science research; 'Cloud AI', or 'AI & Machine Learning Products' (https://cloud.google.com/products/ai/), is the brand for the various AI services available to developers (e.g. the Cloud Vision API); 'Google Brain' is a general AI research team; and DeepMind is a separate company (though under the same Alphabet corporate owner as Google) also working on AI research, but with a slightly different commercial focus and set of goals to Google Brain.
Why mention these? The most memorable concept I took away from Drexler's CAIS model is that general intelligence doesn't need to look like an agent created by a team with the explicit goal of building a generally intelligent agent (arguably the goal of organisations like DeepMind, OpenAI, etc.). It might instead look like a product offering: what Jeff Ding has called the 'App Store model', or what Drexler calls cloud services (e.g. the Google Cloud Platform). On this view, we reach a stage where the sheer proliferation of AI services gives us access to general intelligence 'as-a-service'. The 'comprehensive' in Comprehensive AI Services thus maps onto the 'general' in Artificial General Intelligence.
There is a warning here about our tendency to anthropomorphise things we don't fully understand, though Drexler leaves it implicit that this is indeed a warning. He has taken great care to do careful academic research, and the sheer number of interesting ideas in this one technical report can be intimidating at times. Like Superintelligence before it, this report deserves to be read and re-read. Drexler is light on philosophising, but the profound implications of this work should be clear to all interested researchers in the field.
Some excerpts from the report itself:
“The emerging trajectory of AI development reframes AI prospects. Ongoing automation of AI R&D tasks, in conjunction with the expansion of AI services, suggests a tractable, non-agent-centric model of recursive AI technology improvement that can implement general intelligence in the form of comprehensive AI services (CAIS), a model that includes the service of developing new services. The CAIS model — which scales to superintelligent-level capabilities — follows software engineering practice in abstracting functionality from implementation while maintaining the familiar distinction between application systems and development processes. Language translation exemplifies a service that could incorporate broad, superintelligent-level world knowledge while avoiding classic AI-safety challenges both in development and in application. Broad world knowledge could likewise support predictive models of human concerns and (dis)approval, providing safe, potentially superintelligent-level mechanisms applicable to problems of AI alignment. Taken as a whole, the R&D-automation/CAIS model reframes prospects for the development and application of superintelligence, placing prospective AGI agents in the context of a broader range of intelligent systems while attenuating their marginal instrumental value.”
“The concept of AI-as-mind is deeply embedded in current discourse. For example, in cautioning against anthropomorphizing superintelligent AI, Bostrom (2014, p.105) urges us to “reflect for a moment on the vastness of the space of possible minds”, an abstract space in which “human minds form a tiny cluster”. To understand prospects for superintelligence, however, we must consider a broader space of potential intelligent systems, a space in which mind-like systems themselves form a tiny cluster.”
“Looking forward, I hope to see the comprehensive AI-services model of general, superintelligent-level AI merge into the background of assumptions that shape thinking about the trajectory of AI technology. Whatever one’s expectations may be regarding the eventual development of advanced, increasingly general AI agents, we should expect to see diverse, increasingly general superintelligent-level services as their predecessors and as components of a competitive world context. This is, I think, a robust conclusion that reframes many concerns.”