The Companies Creating Superintelligence Have Just Flipped on Their Charters

bitgrit Data Science Publication
3 min read · Mar 17, 2019

The idea of a superintelligence originated in science fiction novels, but it has since become a serious proposition, with organizations such as FAIR, OpenAI, and Google DeepMind working to create an artificial general intelligence (AGI).

What is superintelligence?

In one line: an AI improves itself, the improved AI improves itself in turn, and so on until a point of singularity is reached. At that point, it rapidly surpasses the limits of human intelligence, and we can no longer make accurate predictions about what happens next.

So what’s the deal with OpenAI and DeepMind?

Taking a look at DeepMind’s stated ethical principles, we find transparency, openness, collaboration, and inclusion among the core principles they claim to stand on:

We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society. We are therefore committed to supporting a range of public and academic dialogues about AI. By establishing ongoing collaboration between our researchers and the people affected by these new technologies, we seek to ensure that AI works for the benefit of all.

However, DeepMind has never revealed who actually sits on its ethics board, though The Economist reported that all three DeepMind co-founders are members. Rather than transparent, open, and inclusive, the board is secretive and composed of DeepMind’s own people.

Turning to OpenAI’s charter, we read:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles…

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

However, OpenAI’s recent moves, restructuring as a for-profit company with a 100x cap on investor returns and withholding an AI model it developed rather than open-sourcing it, run counter not only to its name, OpenAI, but also to its pledge to avoid undue concentration of power: OpenAI itself will be the one holding that newfound power.

By taking in funding, OpenAI becomes responsible to its investors, who will now have a greater say in the future of the business.

Why You Should Care

Ultimately, OpenAI, which began with the idea that the impact of AGI would be so great that it must not be privatized, has turned that premise on its head and made AGI a selling point for potential investors. DeepMind has likewise failed to live up to its stated ethical principles. Both of these entities are building some of the most powerful technology we have ever created.

What Can You Do?

There are two ways to contribute to AI: as part of an open community working on shared problem statements, or as part of siloed teams.

The bitgrit team is united by the goal of democratizing AI: we connect data scientists directly to industry problem statements that we share publicly. If you want to be a part of this, join our open forum.

This article was written by Frederik Bussler, CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.



We’re democratizing AI with our online competition platform — bitgrit.ai.