“Also that IQ tests and tests like the SAT enjoy significant predictive validity.”

Since I started learning machine learning, I’ve come to appreciate what a low bar that really is. Making a test with “predictive validity” for any of the usually touted outcomes, given a reasonable amount of data, is trivial. There’s a folk wisdom in IQ-fan circles that it’s impossible to build a better predictive model than IQ; that is utterly wrong. Add in economic status. Add in PE grades. Add in the multitude of tests and tasks the IQ fans have thrown out for not being “g-loaded” enough (which is just another way of saying they don’t support the one-factor model). Even without those, you can certainly train a better model if you don’t restrict yourself to early-20th-century methods.
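As a sketch of the “trivial” point: on purely synthetic data, where an outcome is generated from a test score plus other factors, an ordinary least-squares model that adds those factors beats the score alone on held-out data. Every coefficient and feature name below (`ses`, `grades`) is an invented assumption for illustration, not a claim about real datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented features: a single "test score" plus two extra predictors.
iq = rng.normal(100, 15, n)
ses = rng.normal(0, 1, n)      # assumed stand-in for economic status
grades = rng.normal(0, 1, n)   # assumed stand-in for school grades

# Synthetic outcome: depends on all three features, plus noise.
y = 0.4 * (iq - 100) / 15 + 0.3 * ses + 0.2 * grades + rng.normal(0, 1, n)

def r2_holdout(X, y, n_train):
    """Fit OLS on the first n_train rows, return R^2 on the rest."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - X[n_train:] @ beta
    return 1 - resid.var() / y[n_train:].var()

r2_iq = r2_holdout(iq.reshape(-1, 1), y, 4000)
r2_all = r2_holdout(np.column_stack([iq, ses, grades]), y, 4000)
print(f"IQ alone: R^2 = {r2_iq:.3f}; IQ + extras: R^2 = {r2_all:.3f}")
```

The single score has real predictive validity here, which is exactly the point: clearing that bar says nothing about whether a richer model predicts better.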

But there’s also unsupervised learning, where you try to learn a model that represents the variation in the data well. Such a model will rarely be optimal for any specific predictive task, but the hope is that it will be “good enough” for many different tasks and will give a good starting point for exploring the variation. IQ doesn’t do well there either, as you’d expect from a one-factor model.
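The one-factor point can be sketched the same way: if synthetic data is generated from several independent latent factors, the first principal component (the closest analogue here of a single general factor) captures only a fraction of the variance. The loadings and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Three independent latent "abilities", each driving three of nine subtests.
factors = rng.normal(size=(n, 3))
loadings = np.array([
    [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
])
X = factors @ loadings + 0.5 * rng.normal(size=(n, 9))

# Fraction of total variance explained by each principal component.
X = X - X.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / (s**2).sum()
print(f"one component: {explained[0]:.2f}; three: {explained[:3].sum():.2f}")
```

Under these assumptions one component explains well under half the variance while three components explain most of it, so a one-factor summary of this data discards most of its structure.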

The one thing IQ has going for it is lots of historical data. But even that should be treated with caution — it’s dubious to say that what Terman’s tests measured is the same as what today’s tests measure.