Not that I’m disagreeing, but…
It strikes me that you are engaged in a somewhat paradoxical argument. You are decrying the “experts” overstepping their bounds with technical jargon that audiences do not understand but that “seems” right, yet you are making a subtle and quite technical argument yourself. It also struck me that you are leaving out a few steps, and I wonder if an example I have been thinking about for a while is appropriate for illustrating how “experts” can mislead (in two separate steps).
Google Translate, in some sense, is simultaneously an example of how “non-expertise” can beat experts and of how “experts” can mislead. In the first instance, it wound up outperforming the linguistic experts who wanted to systematically develop a translation system from first principles, because its developers recognized that languages are too complicated, with too many exceptions and “irrationalities” built in, to be reduced to a system of universally applicable abstract principles. They did recognize, though, that in most instances there are enough identifiable patterns in translations that can be recognized and repeated by feeding in large amounts of data. So a simpleminded approach to big data beat out complex theory. On the other hand, Google Translate does not do so well when the documents in question are subtle and imprecise: say, Pushkin’s poetry, which, to be fair, is difficult to translate even for humans who know both languages but lack poetic flair. So even if Google Translate does well for, say, 95% of the universe of possible translations (I am probably giving it way too much credit), it will perform miserably on the remaining 5% for good reasons (the inherent complexity and uncertainty of language in that subset), and the fact that it does so well on the 95% is no reason to trust its performance on the rest.
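The “pattern recognition without theory” point can be made concrete with a toy sketch. This is emphatically not how Google Translate actually works (real statistical and neural MT systems are vastly more elaborate); it is a deliberately simpleminded illustration, with a made-up three-entry phrase table, of how memorized phrase pairs can handle common text while failing silently on anything outside the patterns it has seen:

```python
# Toy sketch of pattern-based translation: memorize phrase pairs from
# "parallel data" and apply them with no grammatical theory at all.
# (Illustrative only; the phrase table below is hypothetical.)

PHRASE_TABLE = {
    "добрый день": "good afternoon",
    "спасибо": "thank you",
    "как дела": "how are you",
}

def translate(text: str) -> str:
    """Translate by longest-match phrase lookup; unseen phrases pass through."""
    words = text.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily try the longest phrase starting at position i.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])  # no pattern known: leave word untranslated
            i += 1
    return " ".join(out)

print(translate("добрый день спасибо"))  # common phrases: handled fine
print(translate("мороз и солнце"))       # a line of Pushkin: passes through untouched
```

The failure mode mirrors the 95%/5% split: the system gives no signal that the second input is beyond it, so a user who cannot read Russian has no way to tell a good translation from a non-translation.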
It struck me that the problem you are describing is predominantly the latter, with elements of the former mixed in. People who do not know about translation in general, let alone machine translation, don’t know how Google Translate does things, only that it translates the “95%” well, much the way we “know” that “experts” are generally “smart” even if we don’t know what exactly they do. Since all we know about Pushkin and UN documents is that they are all in Russian and we don’t know Russian, we expect that something that can (reportedly) translate Russian can translate the former as well as the latter; and, since we know neither Russian nor how translation works, we can’t evaluate how good the translations are other than by superficial appearances. The solution to this problem, of course, would not necessarily be to develop a theoretical solution based on linguistics, but to sign up a bilingual poet who knows both Russian and English and, more importantly, has actual poetic flair, which may not be so easily reduced to logical moving parts. This, in turn, requires accepting that we still need to leave room for the “irrational,” because some uncertainties are, at least for now, fundamental and irreducible (e.g., poetic language).
The problem seems twofold to me. As per the linguists vs. Google Translate, a neat, “rational,” theoretical solution does not exist for many problems, and a robust solution based on pattern recognition (i.e., heuristics) can easily outperform overtheorization that might “seem” smart. On the other hand, heuristics have limits, and those limits need to be recognized: some uncertainties simply exist and can’t be wiped away. I always thought this was the inherent theme behind most of your work: uncertainties inherently exist, and we delude ourselves into believing that they can be conquered by technology, only to have them crop up and surprise us.
Anyhow, I’m a big fan of your work generally and a big believer in propagating a greater appreciation of uncertainty in all aspects of life, coupled with reasonable responses thereto. Grateful that you are blogging your thoughts on Medium.