“I claim that neither agency nor motivation can be placed in the realm of computational logic, be it code or a natural neural network of a human brain”
So humans only have the illusion of agency? I accept this, but it doesn’t help your case that AI will always be distinct from human intelligence (HI?). To have a stance on the capabilities of AI, you must first have a stance on human intelligence. Are you a dualist, claiming we have some sort of soul? If human intelligence could be duplicated, and improved upon, by computational logic, then the physicalist position certainly gains strength.
On another note, an AI designed to act collectively may not share the same limitations as humans. You claim that nonverbal and unintentional emotional communication is the key to trusting others’ behaviour in cooperative endeavours. A robot’s trust mechanism could be far more effective. Robots might “prove” intention by sharing their code or a detailed record of their entire behavioural history; they could make blockchain contracts; perhaps robots could carry an “honesty” certification, so their surface-level emotions could be trusted in a way ours can’t.
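To make the “sharing their historical actions” idea concrete, here is a minimal, hypothetical sketch of a tamper-evident action log: each entry commits to the previous one via a hash chain, so a robot could publish the log (or just its final hash) and any other agent could verify that its reported history hasn’t been retroactively edited. The function names and log format are invented for illustration, not taken from any real system.

```python
import hashlib
import json

def append_action(log, action):
    """Append an action to a tamper-evident, hash-chained log."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    entry = {
        "action": action,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    log.append(entry)
    return log

def verify_log(log):
    """Recompute the chain; any retroactive edit breaks verification."""
    prev_hash = "genesis"
    for entry in log:
        payload = json.dumps(
            {"action": entry["action"], "prev": prev_hash}, sort_keys=True
        )
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_action(log, "shared resources with agent B")
append_action(log, "honoured contract with agent C")
print(verify_log(log))   # True

log[0]["action"] = "defected against agent B"  # tamper with history
print(verify_log(log))   # False
```

This is essentially the data structure underlying blockchain-style contracts: trust comes from verifiability of the record rather than from reading emotional cues.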
Perhaps robots need not have “control” over every aspect of their emotional experience. We could build a robot with multiple systems operating in one machine: one system reacts instantly and emotionally, while the other uses more logic and planning.
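The two-system idea can be sketched in a few lines. In this hypothetical toy, a fast “reflex” layer fires first and cannot be suppressed by the slower planner, mirroring emotional responses humans don’t fully control; all the stimuli and responses here are invented placeholders.

```python
def reflex_layer(stimulus):
    """Fast, 'emotional' reactions hard-wired outside the planner's control."""
    reflexes = {"sudden_threat": "flinch", "praise": "smile"}
    return reflexes.get(stimulus)

def deliberative_layer(stimulus):
    """Slower planning system; a real one would reason over goals (stubbed)."""
    return f"plan_response_to_{stimulus}"

def act(stimulus):
    # The reflex layer is consulted first and, if it fires,
    # the deliberative layer never gets a say.
    reaction = reflex_layer(stimulus)
    if reaction is not None:
        return reaction
    return deliberative_layer(stimulus)

print(act("sudden_threat"))  # flinch
print(act("negotiation"))    # plan_response_to_negotiation
```

Because the reflex layer’s outputs bypass deliberation, an observer could treat them as honest signals in exactly the sense the paragraph above suggests.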