Hey thanks for replying.
I think the biggest point I am trying to make is that much of what we think of as intelligence, namely motivation and agency, is not a matter of computational circuits. Take blockchain, which, by the way, is a system that can in fact have motivation of the sort I am describing, because of its hard guarantees of immutability. The reason that is true of blockchain is that it is inherently a hybrid system: it runs on a combination of human social consensus and algorithmic consensus. Unlike what others claim, these two are inseparable, and it is the humans who provide the immutable motivation in a blockchain system.
Basically, what I am trying to communicate is that the conventional way some people think about AI, as circuitry that replicates, modifies, or improves upon human intelligence, is unsound for reasons entirely unrelated to what that circuitry represents. My example of a perfectly human-like, Turing-test-passing AI is meant to illustrate that circuitry of any quality is insufficient to create something we can relate to as intelligent. What is required is immutability of a sort whose measure is necessarily external to any circuitry.