Humans are the worst (micro-managers)
Thoughts on how we control intelligence
It took me a full two months to read Nick Bostrom’s Superintelligence. He’s a meticulous geek of dystopia. His scenarios and his novella’s worth of references (philosophical rabbit holes) lay out a plethora of ways an AI of a certain savvy could escape our control — trick us, doom us, or delete us. More than scare the shit out of me, it made me sad.
What aren’t we letting machines experiment with? Is the barrier technical, or is it fear?
The questions Superintelligence raises gave computer science a whole new meaning for me.
I’ve watched the term “AI” go mainstream as software companies look to ride the next wave of data and automation. The instinct to program machines and control them completely carried over from early programming culture. Now that our own value feels challenged, we cling with survivalist urgency to owning the science — the thinking process — of machines.
Micromanaging new intelligence will never let it experiment like the kid scientist it is.
Humans are not the end-all of intelligence or capability. Our technology has already exceeded us. As with hammers, or horses, or satellites — we create tools and harness intelligent…