Anything wanting to pass for a human would need to fail accordingly. A machine that does not make the same mistakes that people do will fail a test for humanness, and a machine that does will either be dumbing itself down or be as intelligent as a person.

Anton Loss
Jul 29, 2017 · 1 min read

Well, this is slightly illogical, since the human IS the judge. If humans are consistently making those mistakes, it's only because they are failing to identify them as mistakes. So they won't be able to tell whether a machine is making (or not making) those mistakes either. It's like saying that a colourblind person will only accept black-and-white pictures as "real".

