‘Gambling’ on scenarios of the future with AI poised to become a really dangerous technology

Sophia Aryan
Published in buZZrobot · Mar 28, 2019

Today, AI technology presents little harm. Over time, though, the situation will change dramatically, and the AI community, which has traditionally supported the values of open-source collaboration and sharing ideas, may turn into a more proprietary-oriented entity.

I’ve gambled on several scenarios for how the AI community may behave at different stages of AI development.

1. Completely open.

2. Licensing models and datasets.

3. Licensing access to the model (e.g., access through an API).

4. Keeping it a closely guarded secret to prevent others from creating something similar, and collaborating more closely with the government.

5. Giving control to government agencies.

6. Destroying the technology completely.

From the beginning, AI code has remained largely open source to make public collaboration possible. This is still mostly true. As models become more capable, they may be licensed and monetized, as in scenarios 2 and 3.

As the technology becomes more autonomous and capable, it could be applied to more economically valuable tasks, including military ones, as in scenario 4. Once military and governmental bodies get involved, their drive to prevent others from creating something similar takes precedence over licensing the software.

We can draw a parallel from the story of John Aristotle Phillips, who designed a nuclear weapon using publicly available books; his design was later confiscated by the FBI. Once the technology becomes advanced enough to warrant government attention, we reach scenario 5: close collaboration with the government to keep the technology secret and under control.

But if the technology ever significantly surpasses human intelligence, two options emerge. One is scenario 6: completely destroy the technology to avoid a disaster. The other is a return to scenario 1: fully open-source it and make it available to everyone, because a superintelligence will inevitably get out of control anyway.

As an analogy: humans are the planet’s foremost predators not because of impressive strength, speed, or athletic ability, but because of our intelligence. In a world of fast communication, pervasive connectivity, and abundant data, a superintelligence will find a way to exploit the technologies available to it.

We probably can’t envision this scenario in detail, since there are a million unknowns and variables. For now, we should accept the inevitably rapid pace of AI’s development and trust in the ongoing progress and transformation of how ‘intellect’ is represented in the universe.


Former ballerina turned AI writer. Fan of sci-fi, astrophysics. Consciousness is the key. Founder of buZZrobot.com