The new generation of AI programs, those based on machine learning and particularly deep learning, is unauditable. These systems derive their rules from input data and annotated examples, and the resulting rules are not human-readable. This is a serious problem, because the approach is very seductive: it beats every other approach on public benchmarks. So we will soon have seemingly high-performing software whose behavior under unforeseen circumstances no one can predict. A recipe for disaster?
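To make the point concrete, here is a toy sketch (an assumed illustration, not from the text): a tiny neural network learns XOR from annotated examples, and the "rule" it produces is nothing but arrays of floating-point weights, with no human-readable if/then structure to audit.

```python
import numpy as np

# Annotated training data: inputs and their labels (XOR).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2-4-1 network: the learned "rule" lives entirely in these arrays.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on squared error.
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

# The trained model "works", but its rule is an opaque matrix of floats:
print(W1)
```

A decision tree or a hand-written rule set could be read and checked line by line; here, nothing in `W1` or `W2` tells an auditor what the system will do on inputs it has never seen.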