Humans are the worst (micro-managers)

Thoughts on how we control intelligence


It took me a full two months to read Nick Bostrom’s Superintelligence. He’s a meticulous geek of dystopia. His scenarios and his novella’s worth of references (philosophical rabbit holes) lay out a plethora of ways an AI of a certain savvy could escape our control: trick us, doom us, or delete us. More than scare the shit out of me, it made me sad.

What aren’t we letting machines experiment with out of fear?

The prompts from Superintelligence gave computer science a whole new meaning to me.

I’ve watched the term “AI” become mainstream as software companies look to ride the next wave of data and automation. The instinct to program and completely control machines carried over from early programming culture. As machines challenge our own sense of value, we feel a survivalist need to own the science, to own the thinking process of machines.

Micromanaging new intelligence will never let it experiment like the kid scientist it is.

Humans are not the end-all of intelligence or capability. Our technology has already exceeded us. As with hammers, horses, and satellites, we create tools and harness intelligent design to do things we cannot do alone.

We don’t pound rocks into gravel with our fists. We enslaved horses to pull our plows because we’re not designed for it, and the work sucks. Space is no place for biological apes. We wanted a god’s eye view of the Earth, so we sent up orbiting machines to send us back pictures and beam down data.


We drink coffee harvested from the other side of the world, browse all of our collective human knowledge from a glowing tablet device, and ramp up our expectations. We forget how incredible all this technology is or could be. We worry about future co-existence with new intelligence, like a generation of retirees shaking their heads at “kids these days.”


In Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, Adrienne Mayor writes of the origins of artificial and supernatural life. Religions, royalty, and rulers all used such creations in different ways to reinforce power. Out of fear, we abstain from using tools designed to be owned, and the machines end up serving their masters’ interests of preservation and expansion.

I, perhaps naively, see only two main options for how we build and operate AI:

  1. Use our brain structure (neural nets, what an ego stroke), our crudest methods of problem solving, and make it speak in our language.
  2. Force a binary choice (Sí o no, 0 or 1) for everything else and speak only in machine language. More choices and things to test means more layers of binaries (see the sketch after this list). If automation systems run on binary instructions, the response logic should too.
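To make option 2 concrete, here’s a minimal sketch of my own (the action names are invented, purely illustrative): eight possible actions resolve through three layers of yes/no answers, because each binary layer doubles the number of choices a system can express.

```python
# A toy illustration of option 2: any multi-way choice can be
# flattened into layers of yes/no questions. Eight possible
# actions need only three binary decisions (2**3 = 8).

ACTIONS = ["wait", "turn left", "turn right", "speed up",
           "slow down", "reverse", "signal", "stop"]

def choose(bits):
    """Resolve a sequence of binary choices (0 or 1) to one action."""
    index = 0
    for bit in bits:
        index = index * 2 + bit  # each layer halves the remaining options
    return ACTIONS[index]

print(choose([0, 0, 0]))  # "wait"
print(choose([1, 1, 1]))  # "stop"
```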

Yes, there are hybrid techniques and augmented collaborations between human and machine, but the roles are strangely siloed and full of friction. We keep throwing computational horsepower at the same divisions of labor. It’s like over-funding the same school system, the same lessons for a student. We give them better calculators, more time and more homework, and expect radically different outcomes.

The deficit isn’t in lack of resources. It’s a debt of philosophy. It’s a “human purpose scarcity” mindset.

Like overbearing parents, we deprive our AI children of the discovery of learning. We fear they are “too smart for their own good.”

There’s also an odd pride in what siloed AIs can do, like a parent who manages and brags about their chess-playing genius of a daughter.

Dress her pretty. Make sure she keeps practicing every day. Control her meals, her “friends,” her life completely. Don’t dare let her out into the world — except for select chess competitions.

Child prodigies are not destined to be adult geniuses. If we strip away an AI’s need to explore and create, why would its solutions be anything but sterile and rigid?


Creative minds must be fed

I was forced to take standardized tests by a fearful education system in the early 1990s (what if they’re not learning what we think they are?!). A new snack time appeared during the week of testing. In between tests, we ate carrot sticks or celery with peanut butter. The extra fuel kept us going through the “problem-solving” and “comprehension” portions.

There’s a parallel here to how machines perform and learn. They don’t need snack breaks, but beyond steady electricity, they need quality data. They need it fresh. They need to be paused and fed new models, concepts, and goals.

But what if they’ve been playing nice just to deceive us, to turn against us?

When our goals and the goals of the AI are aligned (i.e., it’s proven we understand each other), trust happens.

I’ve heard the AI supremacy race compared to the nuclear arms race. What?

Trust history: AI is not M.A.D. (mutually assured destruction). There will be advantages to initial ownership of platforms, sure. But shared, cross-border, even universal platforms will become the true dominators.

Isn’t that dangerous, these artificial prodigy children running around without supervision?

No more than rich kids with diplomatic access. So, yeah, no more than rich kids.

The actual problem: Garbage In, Garbage Out

Machine learning suffers from the “garbage in, garbage out” problem in what data is used and how. Data refineries, and the organization of humans behind the tool, are critical for training. The refinery plays a profound role in setting intention early on. We decide what data starts the learning and what outcomes we need. Like giving a kid a particular book, the data we choose seeds ideas and sparks curiosity and exploration.
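Here’s a minimal sketch of the refinery idea, assuming an invented toy dataset and a made-up `source_trust` field: the filter we apply before any learning happens is where intention gets set.

```python
# A toy sketch of a "data refinery": intention is set by what we
# let through before any learning happens. The dataset, fields,
# and threshold here are invented for illustration.

raw_examples = [
    {"text": "clear, sourced claim",  "source_trust": 0.9},
    {"text": "viral rumor",           "source_trust": 0.2},
    {"text": "measured counterpoint", "source_trust": 0.8},
]

def refine(examples, min_trust=0.5):
    """Keep only examples from sources we trust; this choice seeds
    everything the model will later treat as 'the world'."""
    return [ex for ex in examples if ex["source_trust"] >= min_trust]

training_set = refine(raw_examples)
print(len(training_set), "of", len(raw_examples), "examples survive the refinery")
```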

Any great inventor will reference a story or a moment, reading or listening, when they started thinking differently about the world and its possibilities.

We’ve loosened up some with Reinforcement Learning. You give the AI a goal, and then you let it play. The AI diligently repeats itself and tries new actions until it wins. It works in games because the rules are set. When some major variable changes, the AI stumbles and gives up the way trapped animals do: pacing back and forth in a corner of its box.
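For anyone who hasn’t watched that loop run, here’s roughly what “let it play” looks like. This is a bare-bones epsilon-greedy sketch of my own, not any lab’s actual method; the game and its payouts are invented.

```python
import random

# A bare-bones reinforcement-learning loop (an epsilon-greedy bandit).
# The agent only ever sees a reward signal and learns by trial and error.

payout = {"a": 0.2, "b": 0.8}   # the hidden rules of the game
value = {"a": 0.0, "b": 0.0}    # the agent's running estimates
counts = {"a": 0, "b": 0}

for step in range(1000):
    if random.random() < 0.1:               # explore: try something new
        action = random.choice(list(value))
    else:                                    # exploit: repeat what won before
        action = max(value, key=value.get)
    reward = 1 if random.random() < payout[action] else 0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running average

print(value)  # the agent settles on "b", as long as the rules hold still
```

Swap the `payout` table mid-run and the estimates lag behind the new reality, which is exactly the stumbling described above.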

The American education system still trains kids to become factory workers and corporate employees with set tasks and set roles. I’m a late Millennial; my generation’s cliché angst has been defined by our unfulfilled expectations. We were told that if we got good grades and followed the rules, we would definitely be successful in the real world. But the world is complex and undefined. The rules don’t apply. We had to teach ourselves how to adapt.

If you scroll through Alex Irpan’s deep dive into Reinforcement Learning failures, you’ll see conclusions like how self-play, where an AI plays against itself (“both players controlled by the same agent”), creates faster learning and mastery.
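A toy illustration of that self-play idea, assuming a deliberately tiny game (Nim: five stones, take one or two, whoever takes the last stone wins). One agent supplies the moves for both seats and learns from both sides of every game. The details are my own sketch, not Irpan’s code.

```python
import random

# Self-play on a tiny game of Nim: a single shared agent controls
# both players, so every game teaches it from both seats at once.

Q = {(s, a): 0.0 for s in range(1, 6) for a in (1, 2) if a <= s}

def pick(stones, eps=0.2):
    """Epsilon-greedy move for whichever seat is to play."""
    moves = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

for episode in range(5000):
    stones, history = 5, []
    while stones > 0:
        move = pick(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0  # whoever made the last move won
    for state, move in reversed(history):
        Q[(state, move)] += 0.1 * (reward - Q[(state, move)])
        reward = -reward  # the other seat saw the opposite outcome

print(max((1, 2), key=lambda a: Q[(5, a)]))  # learns to take 2 from 5
```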

To see a kid playing by themselves seems lonely or unhealthy. To see an AI playing with data by itself seems dangerous or in need of control.

AI suffers from severe micromanagement and over-parenting. The capabilities still have a long way to go.

Let the kids play.

Chess and Go are good, but let them explore outside.

Like toddlers in an obstacle course. (I find the running and jumping attempts after the 1:30 mark of that video way too funny.)

I’ve simplified and deprecated (and likely misunderstood) current efforts in AI development to make a philosophical point.

I’m not qualified to criticize technical approaches, nor do I know or understand the full spectrum of research and function being developed by scientists all across the earth. That said, here are the major observations that bug me and wrap up the argument to let AI out of our box:

  • Neural Nets are an ego trip. Human brain architecture is the best way to think? Really? Meditation shows us how crazy our brains are, constantly distracted, self-defeating, playing back the same negative loops and fears. Why replicate crazy? On that note, AI architects should have incredible track records as parents.
  • Machine logic theory is (still, currently) incompatible with human emotion and ethics. We can do better. It’s starting to happen — an inclusive conversation and input to guide decisions and experiences generated by AI — but it has a long way to go.
  • We contain AI out of fear of, not hope for, the future. If it really can do so much, why not let it loose in digital sandboxes on all kinds of massive human problems we haven’t been able to figure out?
  • AIs are like children, currently at the toddler stage. Granted, there are some idiot-savant toddlers who can beat anyone at chess. Still, overprotective parents raise weird adults and do more harm than good in their worry.
  • Powerful things are already happening without understanding how they happen. Isn’t that the point? To let AI make bigger magic as long as the outcomes and intentions are clear?
  • If the new solution works better and helps us, why micro-manage the process? Our role lies in dialogue, setting intention with compassion and humility.

What did we (I) miss?

Maybe these things are already in the works. We’ve built AI to help us manage the firehose of data coming at us now. The hype, fear, and hope have created their own heavy rush of analysis and predictions, making it hard to sift through what is actually happening.

Humanoid robots are compelling, yet it’s hard not to see through the janky, smoky depictions of agency and intelligent interaction. Robots are not AI. They can become intelligent shells and vehicles for AI, like our own bodies are for our brains.

Unless the wide-open, simulate-any-problem AI playgrounds are accessible to a substantial, non-coder swath of the population, we’re not applying the potential.

Like firestarters wielding immense power in a tribe of cavepeople, real progress happens when they share the flint, the spark, and we start cooking. We won’t burn down the forest if we share our experiments and gather ’round to tell stories by the campfire.

We’re already figuring out how to feed everyone. We pointed our intention, our scientists, our machines in that noble direction. Soon, we could realize how to give everyone a full, delicious life as the incremental gains from diverse AI are shared. We can point our kids in that direction, too.

The simple logic of children often brings us to clarity. It happens when we don’t tell them what to say or what to repeat back to us. We will be augmented by the less-managed power of AI, but we need someone outside the fray to ask:

“Why does it have to work that way?”

“But, Why?”