Student Assessment

Alice Germain
Dr. Alice G. on Education
10 min read · Feb 7, 2020

Assessment is so central to education that it seems impossible to imagine a school without any form of it. Here, we need to distinguish between formative and summative assessment, since their purposes are completely different, as can be their effects on students’ motivation. While summative assessment evaluates students’ learning at the end of a teaching unit against some standard, formative assessment serves to promote pupils’ learning. Tests and exams belong to the first category, whereas the second can take many different forms and is usually embedded in the daily lesson routine.

Formative assessment seems rather natural to a teacher who is focused on pedagogy and their students’ learning. The difficulty lies, however, in assessing all students’ understanding collectively. For this purpose, I found mini-whiteboards very handy in maths lessons, once the students accepted that they had to write down their working and not only the final numerical answer. In that way, I could easily identify the common conceptual mistakes and respond to them. I often observed that students were engaged when we used mini-whiteboards, and also attentive to my explanations once not only I but also they realised that many of them made the same type of mistake. This is a great advantage, since students are otherwise very embarrassed when they make a mistake in front of the class: they don’t understand the great learning value of analysing why a certain mistake was made, not only for the one who made it but also for the rest of the class.

This is one of the core ideas behind the concept of growth mindset coined by C. Dweck[1], which states that intelligence is not fixed but can be developed through effort, and I am convinced that it is an essential message for students, both for their motivation and for the development of their potential. However, its corollary that mistakes are part of the learning process (effort means tackling hard tasks or topics, so mistakes are inevitable at first) stands in stark opposition to the sense of competition that prevails in our societies and is instilled from an early age by parents and teachers, deliberately or not. It is interesting to note that while making mistakes in maths appears to many to be a sign of a lack of intelligence, nobody would expect a child who is starting to learn an instrument to play it without making any mistakes. As for competition, it is often used as a way to motivate children, since many of them will want to win, which may incite them to make more effort (but does it really work? see Student Engagement).

Schools are evaluated on the basis of their students’ GCSE results. This leads to an obsession with GCSEs throughout the school management and, in turn, among most of the teachers. The very aim of teaching becomes the GCSE, which is also presented as the only reason why students should listen and focus during lessons. In this simplistic logic, the best way to motivate students is to constantly remind them about their test, as my mentor advised me to do. In this exam-centred system, teachers end up teaching to the test, which is narrow-minded and short-sighted teaching. If you listen to teachers or read what they write on forums or blogs on the internet, you will find that many of them suffer from being obliged to teach in that way. This is not what they signed up for, and it certainly contributes to the high teacher resignation rate.

During my PGCE, I had an interview at a sixth-form college. The college asked me to teach year 12 students wave interference; the topic, advanced examples of wave interference, presupposed that the students had already well understood the concepts of waves and wave interference. Since I did not know the students, I decided to check at the start of the lesson that they had clear ideas about waves and how two waves add up (or interfere). I quickly realised that three quarters of the students had not mastered the basics of waves, so I spent some time teaching them those basics. Obviously, I could not cover the topic I was supposed to teach. During the feedback, the physics teacher who had observed me said that I should not have wasted time teaching the basics but should have gone straight to the assigned topic. When I talked about this experience with my university tutor, he told me that one shouldn’t take feedback on interview lessons seriously, as interviewers use any excuse to justify their choice not to hire a candidate; indeed, they could just as well have criticised me for not teaching the students the fundamentals, had I made that choice, although it was obvious they had not mastered them. However, I was shocked that they could suggest, without batting an eyelid, that when you teach a sixth form you have to teach your students the answer to a certain type of question, even if they don’t understand the basics of the topic. I must confess that this interview was the last straw: it left me with a mixture of hopelessness and indignation that has kept me, to this day, from applying to schools again.

If too many tests are detrimental to the quality of teaching, the way some students’ skills are assessed is also problematic. As soon as tests move away from assessing pure knowledge, they are condemned to be flawed. Given the importance of tests in our education system, I believe this deserves to be discussed in more detail. Take, for instance, the 11+ exam, an entry test for independent or grammar schools. These tests are supposed to measure children’s level of ability, and many grammar schools claim that they are ‘tutor proof’.[2] Judging by the thriving business of private tutors and schools that exclusively teach to the 11+ exam, I seriously doubt it. Students who are privately tutored, or who attend these special out-of-school schools, learn how to deal with the few types of questions used in the 11+ tests and work through dozens of past papers on the computer so that, on exam day, they are extremely fast and not disorientated by the computer-based format. It is neither their academic ability nor their potential that is assessed by this type of test, but rather their ability to memorise how to solve certain types of problems, their speed, and above all their parents’ financial and logistical capacity to organise these extra-curricular activities.

Testing ability or level of understanding is certainly extremely difficult, as ideally one should also test how students deal with new situations or problems, but the possibilities for new types of problems within a given topic are obviously not infinite. Some students therefore develop a learning strategy that consists of memorising how to solve a problem of each type, given that the number of question types is limited. I have observed maths teachers explicitly fostering this strategy: for instance, they teach their students to recognise the type of question and then the procedure for solving each type, rather than have them really understand the underlying concept. I suspect this is not because they do not want their students to understand, but because it is seen as a more efficient way to obtain better exam results in a shorter time. This memorisation strategy also undermines those forms of assessment of understanding that consist in asking students to explain a phenomenon: students who are marvellous at learning an explanation by heart will do very well on such questions, regardless of their level of understanding. And this certainly happens in science exams, where students are asked more and more to explain concepts rather than to use them to solve a problem.

The relatively recent shift from a science education mainly focused on scientific concepts to a much broader education encompassing the “knowledge and skills that enable citizens to deal with issues and decisions where science has a role” (this is the second vision of scientific literacy as defined by D. A. Roberts[3]; see also The Aims of Science Education and the Science Curriculum) has proven particularly complex when it comes to assessing scientific literacy skills. Attempts have been made, with varying degrees of success, to do so by asking students to express and argue their opinion on a given real-world situation. Another area of this vision of scientific literacy is the nature of science; assessing students’ understanding of the processes of science seems so far to have consisted in asking them to explain some cases of scientific controversy. Interestingly, it has been observed that “the richer the conception of scientific literacy has become, the more uncertain have educators been to embrace the task of its assessment”.[4]

Finally, as a large part of science is experimental in nature (although I would challenge the idea that science is only practical), it seems reasonable that experimental skills should be assessed. Difficulties arise, however, when it comes to assessing the practical lab work of hundreds of students at the same time, as shown by the successive changes to the practical skills assessment system in recent years. It was, for instance, the object of a government consultation in 2014–2015.[5] At that time, students had “to carry out one or two investigations from a small number set by the Awarding Organisation under highly controlled conditions”.[6] There was general agreement that this assessment method would lead teachers to focus on only this narrow set of practicals, and that it should therefore be changed.

In the following assessment system, students were supposed to plan, carry out, and analyse an investigation of a given scientific question (e.g., how springs stretch). They would carry out their investigation and write their report during school time. I had the opportunity to observe some lessons dedicated to this controlled assessment (or coursework), and I understand why the assessment system changed yet again. On paper, it looks great: students would think of different methods to investigate a phenomenon, test these methods, then choose the best one and justify their choice; in brief, design their own investigation. But from a practical point of view, it is impossible to have 25–30 students thinking up and setting up at least two different measurement methods of their choice. Even if they were all quiet, imaginative, and motivated, it would be impossible, with only one or two adults in the classroom, to organise all this and provide the necessary equipment. In practice, teachers were obliged to ‘guide’ their students in such a way that all of them did the same investigation.

In the coursework I observed, students had to investigate how light is absorbed by paper by measuring the light transmitted through sheets of paper as a function of the number of sheets. They had to write a prediction of what they would observe. (I must say here that I have been surprised by the general habit, when students do a practical, of asking them to write a prediction of the outcome of their experiment. I understand that if you want them to verify a theory, this is what they need to think about. But it is not necessarily the way scientists work, and it seems to me just as valuable to have them carry out an experiment without any idea of its outcome, and have them think afterwards about possible explanations.) They all wrote the prediction suggested by the head of physics: “I predict that if I double the number of sheets, the light intensity will be halved.” This is a nice, quantitative prediction, but unfortunately it is not correct. Anyone who has studied science at university can easily predict that light transmission will follow an exponential law. In this case, there is a certain number of sheets for which the intensity of the transmitted light is half of that without any sheet; with twice that number of sheets, the transmitted intensity is halved again, hence a quarter of that without any sheet. So every time you add this particular number of sheets, the transmitted light is halved, but this is not true for an arbitrary number of sheets.
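To make this concrete, here is the calculation in compact form (a sketch, assuming each sheet transmits the same fixed fraction t of the light reaching it and neglecting reflections between sheets; the symbols t and n½ are my notation, not part of the coursework). The intensity after n sheets is

$$I(n) = I_0\, t^{\,n} = I_0 \left(\tfrac{1}{2}\right)^{n/n_{1/2}}, \qquad n_{1/2} = \frac{\ln 2}{\ln(1/t)}.$$

Halving thus happens per fixed increment of $n_{1/2}$ sheets, not per doubling: doubling the number of sheets gives $I(2n) = I_0\, t^{\,2n} = I(n)^2/I_0$, i.e., a quarter of the original intensity when $n = n_{1/2}$. The students’ prediction, $I(2n) = I(n)/2$ for every n, would instead require $I(n) \propto 1/n$, which no constant-fraction absorption model produces.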
Nevertheless, no teacher in the school noticed this mistake, and all GCSE students wrote this prediction in their report… This is only an anecdote, but it shows how a good idea on paper can lead to ludicrous outcomes where students do not necessarily think much about what they are doing. A change to the assessment system was therefore certainly necessary, but some raised concerns that removing the practical experiments from the GCSE science exam would mean fewer students studying science.[7] The argument put forward is that because practical skills are no longer assessed directly (which is not completely correct, as they are still assessed via specific questions on the written papers), schools may decide to drop practical work altogether, given the time pressure to cover the curriculum. In this debate, I completely agree that science practicals should be done at school, and ideally in the most intelligent way. But what I find worrying is that no one seems to denounce the fact that schools are driven by exams only…

[1] See e.g. Dweck, C. S., Mindsets and math/science achievement, 2008. Available at http://www.growthmindsetmaths.com/uploads/2/3/7/7/23776169/mindset_and_math_science_achievement_-_nov_2013.pdf

[2] See e.g. https://www.theguardian.com/education/2016/sep/12/tutor-11plus-test-grammar-schools-disadvantaged-pupils

[3] Roberts, D. A. (2007). LINNÉ SCIENTIFIC LITERACY SYMPOSIUM — Opening Remarks, In Linder, C., Östman, L., and Wickman, P.-O. (Eds.), Promoting Scientific Literacy: Science Education Research in Transaction, Linnaeus Tercentenary Symposium, Uppsala, Geotryckeriet.

[4] Orpwood, G. (2007). Assessing Scientific Literacy: Threats and Opportunities, p. 124. In Linder, C., Östman, L., and Wickman, P.-O. (Eds.), Promoting Scientific Literacy: Science Education Research in Transaction, Linnaeus Tercentenary Symposium, Uppsala, Geotryckeriet.

[5] https://www.gov.uk/government/consultations/assessing-practical-work-in-gcse-science

[6] http://www.gatsby.org.uk/uploads/education/reports/pdf/practical-science-policy-note.pdf, p. 1.

[7] https://www.theguardian.com/education/2015/jan/08/wellcome-trust-ofqual-lab-marks-gcse-exams-students


Maths content writer, qualified ‘Physics with Maths’ teacher, Ph.D. in Physics, mum of 2.