What happens when AI is let out of our boxes?

Another AI winter avoided

I wrote about the risks of a rigid, paranoid tech philosophy and of micro-managing our intelligent software. When AI can’t experiment, it fails to meet our expectations, let alone the obscene amount of hype this time around. When complex systems (like crypto) can’t adapt, they fail.

There will be another AI winter unless we pull back our expectations (hard to do this late in the hype cycle) — or swallow our pride and fear, and let AI out of our box.

Source: Quora thread, “Are we heading for another AI winter?”

Our Boxes

AI is confined to our perception of language. We limit its thinking to either vast layers of binaries (which we end up having to code and recode) or a narrow set of polite human communications. This is not where the greatest breakthroughs happen. The genius — the intelligence — is in the nuance and the unknown.

I would love to be creeped out after realizing a witty, novel idea or response came from an AI instead of a human. It would force me to rethink my own role in human society.

Google offered a near-existential crisis for some viewers with Duplex.

There have been cool examples, close calls where an AI all but passed the Turing Test. Earlier this year, Google showed off the ability of Google Duplex to have a natural, albeit very simple, conversation with a front desk person to change an appointment. The crowd oohed and aahed, as did I when I first heard it. Short, quick, human responses and a polite greeting sealed the deal. But it could have easily gone wrong, and the accomplishment itself was a task that shouldn’t require a phone conversation and manual data entry on the front desk’s end at all.

Do we really need AI having conversations with humans when our AI should be thinking about how to eliminate the need for those automatable conversations?

A real, intelligent tool devises ways to keep us human, to strip away the decisions and interactions with interfaces and software that keep us from meaningful, exploratory conversations and (my favorite practice) dialogue.

AI is really just advanced software at this point. The box keeps it from assuming other roles. Sure, we have access to cloud platforms and supercomputing for pennies — but what do we use it for?

Photo by Ricardo Gomez Angel

Had Detroit’s automakers continued their engine-building philosophy of the 60s and 70s (more power = more cubic inches), we might all still be driving around massive V8s in cars needing heavier frames and wider driving lanes to operate. Emission controls, and then a gas crunch, led to a rethinking of the entire design. Japanese cars were lighter, more reliable, and more efficient. The industry sacrificed at first; then automobile evolution rapidly advanced. New materials, new engines, and now even electricity have shown how much can be done outside the heavy steel box.

Sound familiar? To over-simplify, what do Google, IBM, and Amazon do to make their machines learn faster?
More compute, more servers, more data (more horsepower, bigger engines, burning more fuel).

Of course, the mentality carries over into the philosophy of machine learning. If it learns this way, why not scale up the data, the servers, the cycles of learning?

Don’t get me wrong though. I drive a 1965 Buick with a V8. I tune the carburetor for efficiency, while still having the power I need to sprint past a lingering Prius. I love the feel of driving it, the ability to understand and fix the car when something is off. When I see a Chevy Volt or ride in a Tesla S, I love it too.

We’ll look back at the latest wave of AI hype and development like we look back at the muscle car days. They were fun to build and drive. They were fast, inefficient, and had a familiar, understandable design. For a straight-line drag race (winning at chess or Go), they still get the job done. But they never learned how to play.

Where are the massive, digital sandboxes? Photo by Markus Spiske

Thinking Out, within a Digital Sandbox

AI and deep learning scientists already acknowledge the shocking lack of awareness in these systems. Decisions are made from set rules; games are mastered by machines. There is no context. They don’t care. They can’t care. We don’t let them see beyond the box.

When we see inexplicable behavior in physics, we call it quantum and force an explanation without the middle layer of connecting the dots. Quantum is a dangerous word, apt to be misused by babbling futurists like me. Yet the phenomena prove that great things can happen without our control.

What if AI could teach us how to adapt?

In the Anthropocene, humans are solely responsible for our own evolution. Climate change is ours to accelerate or reverse. What we apply our energy, existence, and intelligence to is our choice. We have the creative space, and AI assistants to take on the repetitive and known tasks, so we can layer on our curiosity and moral intrigue, our intent to create something beautiful. Like a jazz solo over a repeating beat, the fusion is where new discoveries of beauty and boldness lie.

What happens when we take away the sheet music? Photo by Franck V.

Artificial intelligence thinks differently from us. We designed it that way.

We’re learning how to work and play together, two intelligences thinking and reasoning in very different ways, one with a superiority complex we call humanism.

Often, the biggest changes in worldview and philosophy come from dramatic prompts. A hyper-rational perspective can balance the abstract artist.

Thought leadership in the new phase of adaptation will be augmented by AI, justified by simulations where we test and verify predictions and policies before they become reality and law.

All of it happens outside the box.