“…the learnings from bulk text feeds are as close as we can realistically get to how the world actually is.”

From “The most dangerous AI” by Mike Hearn.

This statement reflects a fundamental flaw in the article’s understanding of truth, the world, and AI’s place in it. The bulk text that current AI systems are trained on exhibits severe participation bias: it ignores the “silent majority” of people who may have no internet access at all, or who may not write in any of the languages being processed.

It is also useful to distinguish statements that are not in dispute from those that are, within the domain of each statement, or even to place each statement on a spectrum of how hotly it is disputed, so as to show what needs further study and what can safely be built upon. Restricting the universal set of opinions by domain can be seen as ignoring even a vocal majority, but it is nonetheless a crucial human ability, and thus a direction in which AI research could thrive. The “Bush is a war criminal” example belongs to the legal domain; a newspaper would source such an opinion from legal practitioners only. If we entrust an AI to write news articles and it predicts such a statement, the statement would be incorrect by journalistic standards.
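
To make the “spectrum of dispute” idea concrete, here is a minimal sketch of one way it could be scored. This is my own illustration, not anything from the article: the function name, the judgment labels, and the assumption that judgments are collected per statement are all invented. The idea is simply that a statement is disputed in proportion to the disagreement among judgments about it, measured as normalized entropy, so a unanimous statement scores 0 and an evenly split one scores 1.

```python
import math
from collections import Counter

def controversy_score(judgments: list[str]) -> float:
    """Normalized Shannon entropy of a list of judgments about one statement.

    0.0 -> unanimous (undisputed statement)
    1.0 -> judgments evenly split (maximally disputed)

    The labels ("agree"/"disagree", or finer-grained ones) and how the
    judges are sampled are assumptions of this sketch.
    """
    counts = Counter(judgments)
    n = len(judgments)
    k = len(counts)
    if n == 0 or k < 2:
        return 0.0  # no data, or no disagreement at all
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(k)  # normalize to [0, 1]

# Undisputed within its domain: every sampled judge agrees.
print(controversy_score(["agree"] * 20))                       # 0.0
# Hotly disputed: judgments split down the middle.
print(controversy_score(["agree"] * 10 + ["disagree"] * 10))   # 1.0
```

Restricting by domain, as argued above, would then amount to filtering the judgments to those from practitioners of the relevant field before scoring.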

What the two objections above have in common is the point that the bulk text on which AI systems are trained, however large, is still a sample. As statisticians have long known, a sample is prone to all kinds of bias, and analysts have numerous tools to reduce the effects of those biases, lest their models make wrong predictions. Current AI technologies are statistical models at heart. So far we are astonished by the accuracy of their predictions, but we cannot simply take what an AI predicts to be the truth. AI may be a flat mirror, but no flat mirror reflects a whole body at once; it shows only part of a surface, depending on lighting and position.
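
As a minimal illustration of the sampling point, with invented numbers and group names: suppose an offline majority is underrepresented among people who post text online. The naive estimate taken from the posted text is then badly skewed, and post-stratification, one of the standard statistical tools alluded to above, recovers the population figure by reweighting each group to its known share.

```python
import random

random.seed(0)

# Hypothetical population of two groups with known sizes (say, from a census):
# the "offline" group is 60% of the population, the "online" group 40%.
# Opinion A is held by 80% of the offline group but only 30% of the online
# group, so the true population rate of A is 0.6 * 0.8 + 0.4 * 0.3 = 0.60.
def make_person(group, p_a):
    return (group, "A" if random.random() < p_a else "B")

population = ([make_person("offline", 0.8) for _ in range(60_000)]
              + [make_person("online", 0.3) for _ in range(40_000)])

# Participation bias: offline people post 10% of the time, online people 80%.
posting_rate = {"offline": 0.1, "online": 0.8}
sample = [p for p in population if random.random() < posting_rate[p[0]]]

naive = sum(1 for _, op in sample if op == "A") / len(sample)
print(f"naive estimate of A: {naive:.2f}")  # ~0.38, far below the true 0.60

# Post-stratification: reweight each group back to its known population share.
pop_share = {"offline": 0.6, "online": 0.4}
n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in pop_share}
weight = {g: pop_share[g] / sample_share[g] for g in pop_share}
corrected = (sum(weight[grp] for grp, op in sample if op == "A")
             / sum(weight[grp] for grp, _ in sample))
print(f"post-stratified estimate of A: {corrected:.2f}")  # ~0.60
```

The correction only works because the group shares are known from outside the sample; for the silent majority described above, no such outside knowledge may exist, which is exactly the problem.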

The article notices the risk of AI being rejected as a liberal propaganda machine, but the opposite risk, that of AI being rejected as a conservative machine, is also present. The only way for an AI to be neutral is to make it refrain from expressing itself on controversial topics, which requires the AI to learn which topics are controversial. This is different from current products, which are customized to appease each user. It is not public trust that makes tech products popular; it is their lack of opinion. What the article describes is not trust. Since these systems take no stand to begin with, they are not neutral: they simply show a different stand to each user. An AI that is expected to give an answer is being compared to current systems that give no answer at all. People are not happy facing a mirror; they are happy facing appeasement. The article confuses itself at this point: does it want an AI that is a mirror, or an AI that is popular?
