Anything wanting to pass for a human would need to fail accordingly. A machine that does not make the same mistakes that people do will fail a test for humanness, while a machine that does make them is either deliberately dumbing itself down or genuinely no more capable than a person.
Well, this is slightly illogical, since the human IS the judge. If humans consistently make those mistakes, it’s only because they fail to recognize them as mistakes in the first place. So they won’t be able to tell whether a machine is making (or avoiding) those same mistakes. It’s like saying that a colourblind person will only accept black-and-white pictures as “real”.