Why does so much educational software measure “number of questions answered”?
I’m ashamed to admit it, but we’ve done it before too.
It is a classic ‘quantity over quality’ vanity metric. “Your students have answered 5000 questions this week” sounds impressive… like students are doing some serious learning. But are they?
Judging student outcomes or even engagement by ‘questions answered’ makes as much sense as judging an essay favourably by its large word count.
Like word count, ‘number of questions answered’ is easy for computers to measure. It is certainly a lot easier to measure than what really counts — whether we’ve adequately prepared our students for the Real World™.
But if those questions were easy, if they were largely testing basic recall of facts, what have we really achieved?
Here’s what I think matters, and I hope you agree:
Fostering a generation of scientifically literate citizens, able to critically evaluate and analyse complex issues, and able to creatively solve tomorrow’s problems.
We want students to think, and sometimes that’s hard. Sometimes, there isn’t instant gratification. Sometimes, the questions we pose to students might require research, or extended contemplation of more than one side of an issue (shocking, I know).
This depth of thinking can’t, at least today, be measured well by computers. And it certainly can’t be reduced to a metric like “number of questions answered”.