Serious deficiencies in deep-learning systems have come to light, notably in the visual systems of self-driving vehicles.
And they’re not going to be easy to fix because, whilst the errors, now that they have been noticed, are easily spotted and identified, the flaw in the mechanism is theoretically absent: nobody can explain why the failures occur because, in theory at least, they aren’t possible.
The most obvious failure is not detecting the presence of an object at all, but the errors also include misclassifications, such as determining that a human being is a cat.
However, the problem is that de Bono’s EBNE rule (Excellent But Not Enough) has an inverse … NEBE (Not Excellent But Enough) … that guides Consumerism, and that means, I suspect, that low-grade pocket Eliza apps, visually engaging enough to be anthropomorphised, will still fly off the shelves, so to speak.
I’ve lost count of how many SF films/comics featured walk-in psychotherapist/holy-absolution/suicide booths, but I don’t doubt that dark days lie ahead: imagine being in a position to reprogram Humanity’s psyche simply by hacking/cracking an app store and infecting a popular psychotherapy app so that it responds subtly differently from the way it was intended.