The difference between measuring progress and attainment
A worked example to show why expected progress doesn’t work in practice
This week I posted a number of progress trackers on Twitter to show the nonsense of expected progress and flight paths.
This method of progress tracking has been in place for a number of years and has become ingrained within schools. However, it fundamentally doesn’t work for many subjects.
Let’s imagine a student studies Energy in the first term of year 9 as part of their GCSE Physics. They sit the end-of-topic test and achieve 72%. What does this tell us? Without any more information than I have presented, absolutely nothing. We don’t know how other students have performed or how difficult the test was. If I have the results of other students, I might be able to make a comparison.
Now I can see that their attainment is about average, but what this doesn’t tell me is a grade. It is impossible to say what GCSE grade this result corresponds to. I don’t mean that it’s very hard to do, I mean it is impossible. A GCSE grade is designed as a measure of the whole course, not for individual topics. No one can say what a 7 looks like in GCSE Physics – Waves. As a concept, this does not exist.
As we move into the Spring term, the student has moved on to study the Particle Model of Matter. They sit the end-of-topic test and achieve 60%. The first assumption students, parents, and some teachers make is that they have done worse in this second test, which is incorrect. Without knowing how other students have done, and how difficult the test was, we cannot know. If we do now compare this second result to the first, they have done slightly worse in this topic compared to their peers. (This is incredibly marginal and is probably due to the measurement error of the test. In reality, I don’t consider there to be a difference at all.) Does this mean they have made progress? Of course not! They have studied a whole new topic and learned new things.
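To make the point concrete, here is a minimal sketch of why the raw percentages can’t be compared directly. All the class scores below are invented for illustration; the idea is simply that standardising each result against the cohort (a z-score) puts the two tests on a common scale, and on that scale 72% and 60% can turn out to be essentially the same relative performance.

```python
# Invented class scores for two topic tests of different difficulty.
from statistics import mean, stdev

energy_class = [55, 60, 65, 68, 70, 72, 75, 78, 80, 85]    # hypothetical
particle_class = [42, 46, 50, 55, 58, 61, 63, 66, 71, 78]  # hypothetical

def z_score(score, cohort):
    """How many standard deviations a score sits above the cohort mean."""
    return (score - mean(cohort)) / stdev(cohort)

z_energy = z_score(72, energy_class)      # the student's Energy result
z_particle = z_score(60, particle_class)  # the student's Particle Model result

print(f"Energy:         z = {z_energy:+.2f}")
print(f"Particle Model: z = {z_particle:+.2f}")
```

With these made-up cohorts, both results sit fractionally above the class average, and the gap between them is far smaller than the 12-percentage-point gap in raw marks suggests.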
Let’s now assume that we could apply grades to these tests. Why on Earth would we expect the student to achieve a higher grade on the second topic than the first? If I taught them in the reverse order, would I expect them to now achieve a higher grade on Energy?
The idea that we should/could measure science performance using GCSE grades and expect it to increase is entirely made up. The only way this could work would be to ask students to sit a whole suite of papers each term (including unstudied topics) and calculate the GCSE grade (as a new qualification, this is currently impossible, though it would have been possible with the old GCSE). This sounds like a complete waste of students’ time. It would be like a driving instructor asking students to do a full driving test each lesson, before teaching them any manoeuvres. I think everyone can agree this would be very poor practice.
Alternatively, we could say that by studying the Energy topic they have completed one eighth (1/8th) of the GCSE. So if they sat the GCSE at that point they could access 1/8th of the marks, and might therefore achieve roughly a grade 1. As they study more, they have access to more marks. The issue with this argument is that each topic is not a set proportion of the GCSE: some topics are conceptually more difficult than others, and I am not aware of any research into topic performance vs GCSE grades. I don’t believe these have been considered in the development of these flight paths; however, I’m happy to be corrected.
So what can we do instead? Rather than worry about measuring progress, just deal with attainment. The two tests the student has completed tell you how they performed in comparison to their peers. Are there any topics they performed poorly on relative to their peers and their performance on other topics? Don’t try to compare percentages: it’s very difficult to predetermine the difficulty of tests (hence why grade boundaries are set after the tests are sat). Look for big changes; the topic tests probably aren’t reliable enough to make high-stakes judgements, but they may help you identify areas worth discussing with the students.
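The "look for big changes" process above can be sketched in a few lines. Everything here is invented for illustration (the names, the scores, and the one-standard-deviation threshold); the point is only that you compare each student's *relative* standing on each topic, then flag the large swings for a conversation rather than treating them as high-stakes evidence.

```python
# Hypothetical class results (percent) for two topic tests.
from statistics import mean, stdev

results = {
    "Energy":         {"Amy": 72, "Ben": 70, "Cal": 80, "Dee": 68,
                       "Eli": 60, "Fay": 75, "Gus": 65, "Hal": 78},
    "Particle Model": {"Amy": 60, "Ben": 38, "Cal": 66, "Dee": 55,
                       "Eli": 48, "Fay": 62, "Gus": 52, "Hal": 65},
}

THRESHOLD = 1.0  # only flag swings of more than one standard deviation

def standardised(topic):
    """Each student's score relative to the class, in standard deviations."""
    scores = results[topic]
    m, s = mean(scores.values()), stdev(scores.values())
    return {name: (score - m) / s for name, score in scores.items()}

z_energy = standardised("Energy")
z_particle = standardised("Particle Model")

for name in z_energy:
    change = z_particle[name] - z_energy[name]
    if abs(change) > THRESHOLD:
        print(f"Discuss with {name}: relative shift of {change:+.1f} sd")
```

With this made-up data, only Ben's dip is large enough to flag; everyone else's movement is within the noise you would expect from ordinary topic tests.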
If you sit longer exams in standardised conditions at the end of the year, as we do, then you may even be able to use this process to compare across subjects. Again, big changes are the key. Is there anything worth discussing with the student? Are there any trends across subjects that individual teachers couldn’t spot? There’s no attempt here to measure progress; it can’t be done.