Maybe we need to talk about the UI of AI? The successful interaction between a human and a machine depends on the human knowing about the machine, its functions, degrees of freedom and limitations.
With data being shoveled around in opaque ways, and sometimes even programmers not understanding why their own code produces a certain result, it is getting next to impossible for those using, interacting with or otherwise dealing with AI systems to do so effectively. The worst case, of course, is where there is no UI to allow for any interaction or monitoring at all.
Ask Boeing or Airbus and cockpit suppliers like Honeywell how long it took to get communication and interaction between machine and human to where it is now. And yet flaws in the design of that interaction, on the human side, the machine side, or between them, can still lead to deadly incidents.
So maybe we just toss away the term “AI” and scale back to talk about humans and machines and how to give both transparent understanding of each other wherever they interact?
And how about trading the marketing hype and the “impossible is nothing” sales mentality of those doing AI and Machine Learning for something more modest, more humble and, most of all, intellectually accessible to those making purchasing decisions and to those using and interacting with machines and their output (“behavior”) every day?
Makers and sellers of AI must possess the social competence to fully understand the complex environments and interactions their limited code will be subject to, and to cater and care for them in responsible ways. Let’s call it a display of natural intelligence, both emotional and rational in character.
For this time, AI may not fail because of limited skill in writing code. It may simply fail at the far more complex level of interaction between humans and the humans making machines human-friendly (rather than merely trying to fool their brains).