Our deepest fears about the future of AI are rational — and preventable

AI will completely reshape our work and our world

Oct 9, 2018


The AI apocalypse will only happen if humans surrender power and responsibility to machines.

By John Doe, chief scientist of Mind AI

When we introduce Mind AI and our vision for the future applications of our reasoning engine, we often get asked two questions:

  1. Is AI going to make humans obsolete?
  2. Are you building Skynet?

Both of these questions are 100% valid. Fear is a healthy and appropriate emotion when talking about the implementation of real artificial general intelligence. AI is going to reshape our world in incredible ways, and if we do not approach its development with caution, we could create near-apocalyptic scenarios. But we are not powerless to prevent these doomsday scenarios.

AI will not make humans obsolete because humans always adapt.

Technology has revolutionized human work again and again throughout history. With each technological revolution, humans have adapted to working with new tools. Sixty years ago, we could not have imagined that the vast majority of jobs would involve working on a computer — at the time, “computer” was a job description, not a tool.

We expect to see the same thing happen with AI. As AI takes over some jobs, it will enable completely new ones that we haven't yet imagined. Steve Jobs once described the computer as a "bicycle for the mind": a human on a bicycle can cover far more distance than one on foot. That's how we envision humans collaborating with AI. Not only will we be more productive at existing tasks, but we will be able to take on new challenges that we can't currently conceive of.

It’s human nature to adapt and evolve. Even the most sophisticated AI will not automatically solve all of our problems. We humans will need to harness its power to find solutions to global problems like access to education, mental health, clean energy, and fresh, healthy food. Given the magnitude of these problems, we welcome the challenge of applying the most sophisticated tools available to solutions that raise the standard of living for all of humanity.

Skynet will only happen if we let it.

When science fiction writers warn us about an AI apocalypse, they typically present the AI takeover as coming by surprise, with machines making one big leap from servitude to dominance. But it wouldn’t happen that way. For an AI system to take over the world, humans would have to authorize it to do so by surrendering our power to it: our power to govern, to enforce laws, to wage war, and so on.

As sci-fi has taught us, none of these decisions would be wise. But the road to that scenario would pass through many far smaller decisions. AI could gain power bit by bit with each well-intentioned human decision if we don’t carefully examine the consequences.

The most important safeguard against stepping onto this slippery slope is to not let humans off the hook for the consequences of the machines they create. The minute we abdicate responsibility, we lose the ability to learn from our mistakes, and we hand over that little bit of power to the machines.

Whenever we authorize a machine to act autonomously, we must carefully evaluate all possible outcomes of that decision. And we must constantly look back and evaluate the consequences of past decisions to find out where we have overlooked a negative outcome. I’m encouraged to see that the AI community, and those who hold us accountable, are already discovering that systems like facial recognition software and recidivism prediction algorithms exhibit racial bias. Now that we’re aware of the potential to encode society’s biases into AI, we can work to avoid that in the next machines we build. But there will be more errors with each level of sophistication we reach, and we must remain vigilant to find and correct these errors as they arise.

This is one reason why we at Mind AI break with the mainstream approach to AI. Deep learning algorithms function like a black box: even the developers working on them often can’t explain why a model produced a particular output. This becomes a problem when those algorithms begin to make faulty decisions. If you don’t understand how a machine is thinking, it’s almost impossible to correct its errors.

In contrast, Mind AI will be completely transparent, giving users complete access to the reasoning path that led to its decisions. This way, we will be able to learn from its decisions, both good and bad, and stay fully in control over its outcomes.
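To make the contrast concrete, here is a minimal sketch of what a machine-recorded reasoning path might look like, using a toy forward-chaining rule engine in Python. Everything here (the `Rule` and `TransparentReasoner` names, the loan-approval facts) is a hypothetical illustration, not Mind AI's actual engine or API.

```python
# A toy forward-chaining rule engine that records every inference step,
# so the full reasoning path behind a conclusion can be inspected.
# Illustrative only; this is not Mind AI's actual engine.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    premises: frozenset  # facts that must all hold
    conclusion: str      # fact derived when they do

@dataclass
class TransparentReasoner:
    rules: list
    facts: set = field(default_factory=set)
    trace: list = field(default_factory=list)  # human-readable reasoning path

    def assert_fact(self, fact: str) -> None:
        self.facts.add(fact)
        self.trace.append(f"GIVEN: {fact}")

    def run(self) -> None:
        # Keep applying rules until no new fact can be derived.
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                if rule.premises <= self.facts and rule.conclusion not in self.facts:
                    self.facts.add(rule.conclusion)
                    self.trace.append(
                        f"DERIVED: {rule.conclusion} via {rule.name} "
                        f"(from {sorted(rule.premises)})"
                    )
                    changed = True

rules = [
    Rule("loan-rule", frozenset({"income verified", "low debt"}), "approve loan"),
]
reasoner = TransparentReasoner(rules)
reasoner.assert_fact("income verified")
reasoner.assert_fact("low debt")
reasoner.run()
print("\n".join(reasoner.trace))  # every step that led to the decision
```

Because every derived conclusion carries the rule and the premises that produced it, a faulty decision can be traced back to the exact rule or input responsible and corrected, which is precisely what a black-box model makes difficult.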

AI deserves a cautious approach

The third wave of AI will bring about awesome superhuman capabilities. With those capabilities, there will be opportunities for humans to surrender power and authority to machines. The temptation will be strong, but we must think better of it. We must always view AI as a tool and an assistant, and hold everyone involved in it responsible for how it is used.

To stay up-to-date on our progress, sign up for our email list, talk to us on Telegram, or follow us here on Medium.
