Considerateness in the OpenAI LP Debate

Yesterday, OpenAI announced OpenAI LP: “a new “capped-profit” company that allows [them] to rapidly increase [their] investments in compute and talent while including checks and balances to actualize [their] mission.”

Here is how the ML community reacted:

Multiple things happening here:

1) OpenAI LP is interpreted as a “bait-and-switch”/“screw the non-profit” move
2) Claiming to build Artificial General Intelligence (AGI) is mocked

I won’t be defending OpenAI LP in this post. Instead, I will point to several resources and invite everyone to form their own opinion.

Against OpenAI’s mission:

  • Slide 130 from Yann LeCun’s deck

Neutral to OpenAI’s mission:

  • US Congressional Hearing on AGI (June 2018):
Dr. Tim Persons (GAO) at 21:31; Mr. Greg Brockman (OpenAI) at 26:53; Dr. Fei-Fei Li (AI4ALL) at 31:28.

Pro OpenAI’s mission:

  • Three Comment Threads on HN (look for comments from Greg Brockman (gdb) and Ilya Sutskever (ilyasut)).
  • Literature Review (2018) on safe Artificial General Intelligence
  • Comments from Miles Brundage (Research Scientist in Policy at OpenAI):
  • Greg Brockman’s talk at the Web Summit (November 2018):

Now that the above resources are common knowledge, I’ll present what worries me about the OpenAI LP controversy and how we can make progress on it as a community.

Considerateness

“One is being considerate if the way one treats others, in personal interactions and communications, is notably guided by how they would like to be treated. […] Considerate behaviors include: friendliness, honesty, intellectual honesty, cooperativeness, respectfulness, modesty, integrity, reliability, rule-following.” — Considering Considerateness

OpenAI LP is a big deal. And I’m not referring to the change this represents internally at OpenAI.

No. I’m talking about what it implies for the future of AI, especially for research aiming at building Artificial General Intelligence.

One more time: an AI lab that appears to be one of the closest to building AGI is creating “a new “capped-profit” company that allows [them] to rapidly increase [their] investments in compute and talent”.

Yet, here is the second most upvoted comment on r/MachineLearning, which is supposed to be a “diverse community in which members feel safe and have a voice” (cf. r/MachineLearning’s rules):

The Bottom Line

There are plenty of experts prepared to line up and express strong views in favour of OpenAI here, or against it.

Ultimately, it’s reasonable for people to hold strongly diverging views. And it’s their prerogative to share those opinions, and their reasons for them.

But what’s not right is for people to sacrifice civility.

In order to deal well with any emerging technology, such as AI, we need to be able to come together and share access to our reasoning, so we can figure out what is true about each of our assessments.

Whether you think AI will be more-likely good or harmful, undermining these rules of civil discourse is dangerous.

Norm Enforcement

Whether or not OpenAI is overestimating the probability of near-term AGI, they are hastening the development of AGI Safety, which will mitigate the effects of potentially harmful technologies (such as AGI).

As a community, we should enforce norms that favor intellectual honesty. The MachineLearning subreddit has rules; the problem is that they are not enforced.

Mockery

More generally, we should be wary of mockery. For reference, here are the expressions François Chollet used on Twitter:

“build AGI” — the mystical AGI of tech legends.[…]
“we’ll save humanity from this oh-so-dangerous ML stuff”

This tweet is snarky and counter-productive. But that’s not the worst of it: the ML ecosystem tends to gravitate around AGI skeptics, such as Yann LeCun (127k followers), François Chollet (135k followers), or Andrew Ng (358k followers), who are openly dismissive of AGI safety.

Conclusions

If we want to grow as a community, we should:

  1. Form our own opinions. This implies reading from different sources, including the literature on AGI Safety.
  2. Look at the facts. For instance, OpenAI’s study on AI and Compute.
  3. Enforce norms that promote intellectual honesty and punish mockery.