Deep Dreams of Gödel Machines

A practical approach towards artificial superintelligence

Art by Google’s Deep Dream, courtesy of Memo Akten

Going into my senior year of high school, roughly two and a half years ago, I simultaneously became fascinated by AI and blockchains. I had just gone down the rabbit hole of learning about Ethereum earlier that summer, and I very quickly became obsessed with the idea of decentralized autonomous organizations (DAOs). I viewed cryptoeconomics as a methodology for designing massively scalable and efficient organizations, and smart contract protocols as a means for rapidly implementing and experimenting with these novel governance structures. In particular, I was initially very interested in futarchy, in which decisions are made by prediction markets. I soon realized that public blockchains themselves can be viewed as massively scalable organizations, with well-defined goals and strong incentives to follow the rules. Ultimately, I viewed blockchains as fundamental infrastructure for effectively harnessing humanity’s collective intelligence.

If I were to design an ideal X from scratch, what properties would it have?

I typically ask myself this question before reasoning about any technology I find promising. I asked myself this question when considering the ideal properties of a distributed ledger technology before joining the brilliant Hashgraph team. (A huge special thanks to Jordan Fried, who found me through Twitter and then generously invited me to meet the team in SF for TechCrunch Disrupt.) I asked myself this question again very recently before designing a decentralized stablecoin framework with the incredible Carbon team, and when thinking about the properties necessary for a currency to reach mainstream adoption as a true global medium of exchange. I constantly ask myself this question when thinking about AI.

If I were to design an ideal AI architecture from scratch, what properties would it have?

I have never considered myself to be an AI researcher by any means, but I decided to attend NIPS 2016 in Barcelona to meet and listen to some of the top researchers in the field. The conference was in December, and I had just taken an introductory course at John Cabot University in Rome (I had an unusual but awesome college experience), taught by Professor John Ewing. My final project, which I presented to the two other students in the class, was on deep learning-based mathematical theorem proving, where I applied a convolutional neural network to the HolStep dataset provided by François Chollet et al. At the time, I was closely following cutting-edge research applying machine learning to theorem proving. I had also been following a self-defining blockchain initiative called Tauchain, where I was introduced to the idea of “code-for-money markets”, also known as proof markets.

I studied efforts to make theorem proving as efficient as possible, because I considered it a key component of what I believed (and still believe) to be an “ideal AI architecture”: Jürgen Schmidhuber’s “Gödel machine”, a theoretical framework for recursively self-improving computer programs.

From the Gödel Machine Home Page:

“Gödel machines are self-referential universal problem solvers making provably optimal self-improvements. A Gödel machine rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code.”
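To make the idea concrete, here is a deliberately toy sketch of the Gödel machine’s core loop in Python. Everything named here (propose_rewrite, proof_search) is a hypothetical placeholder: a real Gödel machine requires machine-checkable proofs that a self-rewrite increases expected utility, which this toy does not provide.

```python
# Toy sketch of the Gödel machine loop. `propose_rewrite` and `proof_search`
# are stand-ins for the self-rewrite search and the proof searcher described
# in the quote above; they are not real implementations.

def godel_machine(code, utility, propose_rewrite, proof_search, max_steps=10_000):
    for _ in range(max_steps):
        candidate = propose_rewrite(code)              # search for a self-rewrite
        proof = proof_search(code, candidate, utility) # proof that the rewrite helps, or None
        if proof is not None:
            code = candidate                           # provably useful: switch over immediately
        # In a real Gödel machine the proof searcher is itself part of `code`,
        # so it too can be rewritten once a proof of usefulness is found.
    return code
```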

Before his talk at the NAMPI workshop at the conference, Schmidhuber asked his audience, some of the top AI researchers in the world (plus me), who was familiar with the Gödel machine concept. About one-fifth of the attendees raised their hands, confirming my suspicion at the time that most experts in the field were not thinking seriously about superintelligence.

The current state of AI research is heavily focused on deep learning architectures, which are very useful today in narrow applications such as facial recognition or self-driving cars, tasks that do not require symbolic reasoning. Deep learning architectures are doing incredible things for the world, and I do believe there is still a lot of untapped potential in their continued improvement. However, there seems to be an over-concentration of research into machine learning techniques that are useful today, but that may not be the best foundation for the superintelligence of the future. Regardless of when AI experts predict the emergence of superintelligence, we can never reach a global maximum by trapping ourselves in local maxima. We need to explore new foundations and constantly question our assumptions.

Massively Collaborative Theorem Proving and Software Development

A Gödel machine is a framework for bootstrapping superintelligence. How can we design a framework for bootstrapping a Gödel machine?

Automated theorem provers can only find relatively short proofs, because the search space explodes combinatorially. Modern machine learning techniques are not very good at symbolic reasoning and are therefore poor theorem provers relative to humans (although I do think it’s great that there is ongoing research in this area). My hypothesis is that we can maximize the efficiency of theorem proving by making it massively collaborative amongst sufficiently incentivized humans, who may automate their processes over time.

The Curry-Howard isomorphism states that mathematical proofs correspond to programs in a certain logic, and that a proof’s propositions correspond to a program’s types, which can be thought of as its specifications or requirements. So in order to make theorem proving (and thus programming) as efficient as possible, we need to make the process of constructing a proof massively collaborative.
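As a tiny illustration of the correspondence (written here in Lean’s syntax, though the same idea carries over to Coq), the proposition below is literally a type, and its proof is an ordinary program inhabiting that type:

```lean
-- The proposition "P ∧ Q → Q ∧ P" is a type; the proof is a program
-- (a function that swaps the components of a pair) inhabiting that type.
theorem and_swap (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun h => ⟨h.right, h.left⟩
```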

Surprisingly, there have been very few efforts to make mathematics collaborative at scale. Perhaps the most promising approach is the one introduced by Théo Zimmermann in his article, “Getting thousands to millions of people working on a single mathematical proof”:

“What I propose is a fully interactive tool. As with most theorem provers in interactive mode, you can first state a theorem that you would like to prove. Proving it will require successive applications of tactics which will break the initial goal in several sub-goals. A classical theorem prover would present to you these sub-goals one by one. A collaborative theorem prover would take each of these sub-goals and present them to different (possibly random) people, in parallel. The initial user would be one of them. Maybe she would be given the privilege to choose what sub-goal she wants to prove first. Each user would then try to prove their assigned sub-goal by applying a tactic and generating new sub-goals to be distributed to collaborators once more.”

This approach can build on a widespread, existing interactive proof assistant such as Coq. After a long email exchange, Théo recently published a follow-up article to account for the fact that the proof assistant itself may not be formally verified and could therefore produce an undesired “proof of False”.
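As a rough mental model of the workflow Théo describes, here is a toy Python sketch. Nothing in it is real Coq: goals are placeholder values, apply_tactic is a hypothetical stand-in, and collaborators are picked at random rather than in parallel.

```python
import random

def collaborative_prove(goal, collaborators, apply_tactic):
    """Toy model of distributed proof construction: a (possibly random)
    collaborator applies a tactic to the goal; any resulting sub-goals are
    handed out recursively. In a real system these would run in parallel."""
    prover = random.choice(collaborators)
    subgoals = apply_tactic(goal)   # an empty list means the goal was closed
    return {
        "goal": goal,
        "prover": prover,
        "subproofs": [collaborative_prove(g, collaborators, apply_tactic)
                      for g in subgoals],
    }
```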

Incentivizing Efficiency

"Proof markets" are also a very new idea, worked on independently by Ohad Asor of Tauchain and Yoichi Hirai of the Ethereum Foundation. Essentially, users can attach a bounty reward (i.e. 0.5 ETH) to mathematical conjectures they want other people to prove. Automated proof checkers can validate a supplied proof and trigger the escrow smart contract to distribute the bounty to everyone who contributed to the proof’s construction.

The reward for a subgoal could be equal to the time taken to complete that subgoal, divided by the total time taken to construct the entire proof, multiplied by the proof’s bounty. Essentially, subgoals that take longer to solve are assumed to be more difficult and therefore receive a higher reward. It is important to note that many people may be competing to solve a subgoal first in order to receive any reward at all, which further incentivizes efficiency. Hashgraph seems particularly suitable as the underlying distributed ledger if it were to have a permissionless network, since it would likely offer sufficient transaction throughput along with provably fair consensus timestamps, and that fair timestamping is a powerful feature for applied cryptoeconomic design. Perhaps this mechanism for incentivizing efficiency could be generalized to any collaborative microtask.
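A minimal sketch of that reward split, assuming subgoal completion times are read off the ledger’s consensus timestamps (the lemma names and times below are made up for illustration):

```python
def split_bounty(bounty, subgoal_times):
    """Pay each subgoal in proportion to the time it took, relative to the
    total time taken to construct the entire proof."""
    total_time = sum(subgoal_times.values())
    return {name: bounty * t / total_time for name, t in subgoal_times.items()}

# e.g. a 0.5 ETH bounty split across three subgoals (times in seconds)
rewards = split_bounty(0.5, {"lemma_a": 120, "lemma_b": 480, "lemma_c": 600})
# -> {'lemma_a': 0.05, 'lemma_b': 0.2, 'lemma_c': 0.25}
```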

Marketplace for Interfaces

In the long term, one can imagine a marketplace of theorem proving interfaces emerging to increase not only efficiency, but accessibility. I’m particularly fond of the idea of presenting theorem proving as a simple game that people without formal training in mathematics can quickly learn to play. Interfaces, if not free, could charge an upfront cost or take a small cut of the user’s subgoal rewards. It would be ideal if non-mathematicians could participate in the collaborative theorem proving network, but more research is needed in this area.

Maximally Efficient Software Governance, or a Crowdsourced Gödel Machine

I’ve just described an approach to maximizing the efficiency of constructing proofs, or programs in a certain logic. If the collaborative interactive theorem proving network becomes efficient enough, users may attach bounties that incentivize the improvement of a program with respect to a set of objective functions.

If I were to design an ideal software governance structure from scratch, what properties would it have?

I believe an ideal software governance structure would have provably beneficial, automated decision-making. The method described here can be generalized to automate the governance of any piece of software (e.g. smart contracts), enabling programs to be developed and improved upon as rapidly as possible in a large-scale collaborative setting. I briefly described this approach in an article I published a few months ago, and have since refined it.

The goal is to fully automate governance over a program by formally defining a set of weighted or ordered objective functions and requiring every update to come with a mathematical proof that it is an optimization. Users need to rank or weight their objective functions because otherwise there typically exist several Pareto optima. Of course, a user may want only one criterion to be optimized, such as a program’s efficiency. Once the objective function(s) are well-defined, there are no disputes over whether or not to accept an update proposal: voting is automated by proof checking.
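A bare-bones sketch of that acceptance rule, with everything hypothetical: check_proof stands in for an external proof checker such as Coq, and objectives is the formally specified, user-weighted list of objective functions.

```python
def weighted_score(program, objectives):
    """Scalarize several objective functions (e.g. worst-case run time, memory
    use) with user-supplied weights, so that "improvement" stays well-defined
    even when objectives trade off against each other.
    `objectives` is a list of (measure, weight) pairs."""
    return sum(weight * measure(program) for measure, weight in objectives)

def accept_update(current, proposed, objectives, proof, check_proof):
    """Toy governance rule: merge `proposed` only if the attached proof,
    validated by `check_proof` (a stand-in for an external proof checker),
    certifies weighted_score(proposed) >= weighted_score(current). Such a
    score generally cannot just be measured empirically (e.g. worst-case
    bounds), which is why a proof is required; proof checking replaces voting."""
    claim = (current, proposed, objectives)
    return bool(check_proof(proof, claim))
```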

Defining a program’s objective functions would probably be hard. Defining objective functions that represent a consensus of preferences expressed by stakeholders in a decentralized autonomous organization would be much harder. It’s important to distinguish between creating a framework for superintelligence and creating a framework for beneficial superintelligence. The average user must be able to easily formalize their preferences if we want a superintelligence to act on behalf of (the majority of) humanity, rather than on behalf of a single entity with its own interests. We can only realistically achieve this by crowdsourcing a superintelligence from the ground up. It must be a global, collective effort from the very beginning. This is precisely what piqued my interest in blockchains in the first place, years ago.

I am not an AI researcher. I am not a mathematician. I am not a rocket scientist. I hope this article inspires experts across disciplines, in academia and industry, to start thinking about and discussing this approach towards massively collaborative mathematics and software development, and ultimately beneficial superintelligence. Fin.

Special thanks to Jürgen Schmidhuber, Vitalik Buterin, Ohad Asor, and Théo Zimmermann for conversations and ideas which contributed to this post.
