WTF is Artificial Intelligence?
Sam DeBrule

Wonderful series. No doubt the changes we will be seeing within automation will bring with it serious challenges.

But there are many truly fantastic opportunities, such as helping us learn about the natural world, and ourselves, faster.

We can truly understand biosystems, from the micro level, like cells or my garden, to the macro level, like an entire taiga, and so on. Over time, we will learn more about the dynamics between these systems too.

This is bound to create “job opportunities”. I use “quotes” because deep learning will not only enable corporations to automatically choose the right word, image or chatbot phrase to increase conversions; it can also be used to create meaningful work that helps us sustain and enhance our whole natural environment, and our societies.

If robots are producing the food and stuff that we need/want at a lower price (we tend to cost a lot!), we can use those gains to spend our time both learning about and taking care of our environment. Whether we build drones to perform those tasks or do the manual labor ourselves is not really important; either way, it is a huge potential field of meaningful work, probably for many people.

Intelligent, dynamic caretaking, or simply doing what the machines cannot do, or are not yet ready to do, so that we use our resources intelligently and sustainably. This is positive, not negative.

Of course, this assumes that we can overcome the challenges of wealth and income distribution in a world where a few not only own and control physical resources, but now also have the technology to simply use less human labour.

Basic income is probably already a necessity to consider, given the likely direction of the coming years.

We should also address the issue of who owns and controls “deep learning stacks”. Google springs to mind… if Google alone is used to “find everything” (regardless of context or device), we will ultimately end up with only one source of “knowledge”. Not good.

Legally required transparency for deep learning projects and programs, and perhaps even fees, is one way to ensure that the commercial and knowledge benefits of ML/AI are channeled back to us, the humans who created the technology and who will be consuming the automated output.

So control is important. I sure hope some politicians are doing some late-night reading too… They really need to understand this very fast. Otherwise, we face a real risk of creating a world with 90%+ “left behinds”, and that is really not good. For anyone.

A bold vision, embracing the opportunities while facing the challenges, is what we need. And why let companies run the whole show? Could we not, as a society (yes, government), decide to invest massively, precisely to capture the amazing opportunities that deep learning offers?

I, for one, would like to see open access to neural networks that model, say, trees, humans, or anything else in the “natural domain”. This data, and the learning capabilities, should not belong to anyone but us, i.e. be public domain. We could work actively toward that goal instead of worrying about companies, which are already scrambling to build networks in every conceivable domain. It just takes insight and will.

Again, thanks for a great series! Amazing posts :)
