OpenAI: Some thoughts, mostly questions

David J Klein
Dec 12, 2015


This week, during the flagship machine-learning conference NIPS (seemingly dominated by research coming out of Google), it was announced that Elon Musk, Sam Altman, and others would be funding a $1 billion non-profit AI research institute named OpenAI. To quote their website:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

This mission fits snugly into the dialogue that Musk and others have established about the potential negative impact of AI on humanity once it becomes so advanced that humans no longer have control over it (cue a picture of The Terminator wearing a Singularity U varsity jacket).

As Beau Cronin points out in my recent discussion with him, OpenAI went public early, in that they didn’t address many of the inevitable questions that would be immediately apparent to those in the field.

Of course, many feel that trying to limit the solution space of AI agents to a human-friendly sub-region will inevitably fail, and that sooner or later somebody will remove the constraints, so Musk & Co. are merely accelerating our move into a post-human era.

As a long-time AI researcher and practitioner, I tend to find this aspect of the debate eye-rollingly speculative, and it doesn’t concern me in the least. What concerns me is how likely the research is to succeed in creating new and useful ideas, and who will actually benefit from it.

I’m reminded, for example, of other grand attempts such as Interval Research, Paul Allen’s $100 million advanced-technology think tank in the ’90s. It was shut down after Allen began to feel that there was “too much R and not enough D” being done at the institute. He took back his remaining money and spun out the promising ideas into companies in which, of course, Vulcan Ventures held equity stakes (one of these was Audience, Inc., where I was on the founding technical team). This is but one of several similar examples.

The fact that OpenAI is a non-profit doesn’t mean much to me. They already have $1 billion in pledges, so they won’t need to raise donations or government grants in the near future (though I could be wrong about this). Non-profits can still patent their work, and despite being “open” they can still decide who can use the IP and at what price. The board decides this; most prominently, that means Elon Musk and Sam Altman. Non-profits can be acquired. They can also have profitable side-businesses and subsidiaries. Much depends on how the official tax-exempt mission is defined and audited.

On the other hand, the idea of an AI think tank independent from the likes of Google and Facebook is laudable and, I would argue, needed. Sort of like the Allen AI Institute, but more purely focused on machine learning, and in particular deep learning. The founding technical team is very solid and draws from Google and Facebook researchers; most notably for me, it includes Ilya Sutskever and Andrej Karpathy. As Soumith Chintala of Facebook AI Research notes, the team as a whole is strong and opinionated, and they are fully committed to actually solving AI rather than doing incremental research, so good things may well come out of the lab. (I don’t know anything about their CTO, Greg Brockman, formerly CTO of Stripe.)

It will be really interesting to see how they compete for AI research talent with Google and Facebook in particular. To me, this is almost the entire game. They could be quite successful here, with their long runway of committed funding and a stated commitment to openness and benevolence, which is of primary importance to many researchers. Of course, Google and Facebook make the same claims, locked as they are in a battle to prove who can be the most open. Will top-quality talent become more scarce for these tech titans? Perhaps, but a large wave of talent will be coming out of our universities in the very near future.

The stated mission is nebulous, of course. What “benefit” will they focus on? How will it be defined and measured? Are they simply producing unmeasurable “benevolent AI,” or will they actually try to address problems humanity faces today? That would be nice. That is something humanity really needs.

Follow me on Twitter | Connect on LinkedIn
