No Effect? Can computers be detrimental to achievement?
Recently, there have been a couple of research reports that have made headlines.
League tables:
Firstly, a report from the OECD discussing the social and economic benefits if every child reached a basic level of education. The whole report is well worth a read. Move past the headline-grabbing league tables, which are actually tucked away in the appendices and are not the basis of the report, and note, not least for UK readers, that the theory underpinning the research (endogenous growth theory) was itself headline-grabbing in a political context a few years ago. (Although it makes me feel really old: now I look, I see it was 1994! Maybe that’s why most of my Twitter followers seemed not to understand the reference to “it wasn’t Brown’s… It was Balls!”).
But as an indication of the potential value of education, especially to poorer countries, it is, in my opinion, really important.
Mobile Phones:
The second report concerned mobile phones. The headlines were clear, all along the lines of ‘Banning mobile phones increases results’. But when you actually read the report, that is not what it says. What it says is that for schools that introduced a mobile phone ban, results afterwards were better than results before. Indeed, the one school in the sample that had no ban achieved better results than all the other schools, even after their bans had been put in place. The report does not say which year that school’s results were taken from, so as far as we know, its results could have been better than the other schools’ before their bans and, over the same time period, could have increased even more! If the authors read this, they may want to let us know.
Other, much more erudite authors have commented on other aspects of the report elsewhere, with the difference between causality and correlation to the fore. See here and here.
No Effect? Can computers be detrimental to achievement?
So this brings me to the final report I want to mention. Although it was written by one of the authors of the OECD report discussed earlier, it has hardly come to the attention of headline writers and commentators anywhere. But the implications, for education and for the countries and schools spending billions on technology, are huge.
The researchers use 2011 TIMSS results to examine how computers are used in classroom maths and science lessons, prompted by the quite common research finding that use of computers has no effect. What they find is that it is the type of use that is important. They conclude that for maths and science:
“Our empirical results suggest that classroom computers are beneficial to student achievement when used to look up ideas and information but detrimental when used to practice skills and procedures”.
And, following a commentary on opportunity cost, they conclude that “Using computers for activities that do not improve student learning takes away time from activities that are potentially more effective to this end.”
So, giving students activities on the computer that just practise skills can be detrimental to achievement.
Think about that, and then think about what we have had available to work with (this report was based on data collected in 2011). Videos showing examples, computer-based widgets, or graphing packages: great, those seem to fall into the ‘looking up ideas and information’ bracket, in my opinion.
But then we get to setting practice and having it marked. OK, it’s nice and quick. We could get some nice red, amber, green tables. Some even add in automatic ‘levels’ based on the answers given.
But as maths teachers, don’t we want to see the working? Don’t we always want to check understanding and have students ‘organise their thinking’? And the best way to do this is to see the method the student has used. Could this lack of working, and the reliance on multiple-choice and single-answer questions, be why such use is detrimental?
In England, the inspection body OFSTED recently published a really good report about mathematics.
They said:
“concern emerged around the frequent use of online software which requires pupils to input answers only. Although teachers were able to keep track of classwork and homework completed and had information about stronger and weaker areas of pupils’ work, no attention was given to how well the work was set out, or whether correct methods and notation were used.” (para 90)
In pretty much every examiners’ report on the GCSE mathematics examinations, comment is made about how students lose marks for not showing their working out.
Multiple choice abounds, and a huge industry (at more cost?) exists to make sure the tests are ‘fair’.
So on the one hand, teachers ask students to show their working. Then, when sending students away to practise on computers, they ignore that advice.
Isn’t there a problem here? Do we need intervention classes, D/C borderline classes and intensive revision days because we are actually encouraging students NOT to show their working?
Maybe.
As a maths teacher, I never used computer software for practice. I never promoted the use of computer software for practice. And only now, as the technology is finally able to do the step-by-step marking required to enable students to show their understanding through their work, can I justify recommending computers for practising mathematics.
It doesn’t matter how ‘adaptive’ programmes are. It doesn’t matter how many points students get, or whether they shoot the monster, see the alien shatter, drag and drop, sort, pick a, b, c or d, or watch the flowers grow. If they have not understood the steps needed, and the teacher does not know why they got something wrong, then in my opinion the problems will remain.
Research:
And with regard to research: read the reports, not the reports about the reports, such as those in the press and this blog. Look at the sample size and read carefully (not that I want to teach you to suck eggs).
The final report I want to mention, though not in great detail, is supported by very well known mathematics educators, names you would recognise, but it used just two classes in a single, selective and very high-performing school for its data, plus quotes from one teacher. I have other criticisms of the methodology, but in addition, the advisory board were given shares in the company whose solution was being ‘researched’ in the report, which is evidently standard practice. Maybe someone more experienced in the research world than I am can confirm whether giving shares to a report’s advisory board members is standard practice. I find it quite amazing to hear; maybe I am just naïve.
But the report becomes much less useful once you know these facts.
And my opinion of those educators has diminished as well.