Why has “autonomous” become the adjective so frequently used to describe these systems? It seems unhelpful given that, in many cases, we are striving for human-AI cooperation. Sure, there are some activities we might be comfortable handing over almost entirely to AI systems (perhaps self-driving vehicles), but in many instances AI will serve best as a tool for augmenting or improving human decision making and behavior (e.g., risk assessment tools used by judges, or recommendation engines for job recruiters). But achieving effective human-machine cooperation is more complicated than it sounds: there are myriad reasons why we humans might ignore, misunderstand, or misuse the insights that AI systems offer us. This is tied to the question of “interpretability” that was raised so frequently yesterday, and it is one of the areas we should be devoting substantial resources to investigating.

When we characterize these systems as “autonomous,” we de-emphasize their interactive nature. The label renders invisible the hands that build these systems, as well as the eyes that pore over the recommendations they generate.