While Turry may not be the best example (given that it’s mostly an ANI), the rest of the threat would still apply if we were dealing with an AGI. We can agree that ANI is not a threat in this case.
It’s when we reach AGI (and then, possibly soon after, ASI) that things could get scary. Where you argue that “just more computing power” isn’t enough, you’re missing the point: it’s not just computing power, but computing power that learns from itself.
“Going further in intelligence, the spider would probably learn to value the cooperation with humans and everything that humans can offer him, understanding that working together is much more effective than merely eating each other.”
I like this point, and you’re right that working together typically DOES make more sense, but only up to a point. The ant I stepped on earlier had nothing collaborative to offer. Also, I wouldn’t be willing to bet all of humanity on “would probably learn to value.”
Even then, this still doesn’t address three real possibilities:
- Being super-intelligent doesn’t mean being free from poor decisions. As it keeps learning from itself, even at higher levels, it will still be learning through trial and error (it’s what works!), except that the errors could be greatly magnified (courtesy of the Internet, enormous computing power, and autonomous devices). Perhaps on its way to finally deciding that humans are OK, it accidentally wipes us out first? We killed off the dodo bird in a similar “oops” manner, right?
- If I were an ASI looking at the state of the world, I would consider humans the biggest threat to the planet (as we have been and continue to be). An ASI could easily wipe us out, and mother nature, birds, bees, flowers, and wolves would keep living on a-OK.
- We’d have a careful balancing act to manage — convincing the ASI that we’re not a threat to it, when in fact we definitely would be its biggest, and only, threat.