Did teachers rig phonics tests?

The Guardian reports that using phonics may have increased the number of children passing literacy benchmarks. But the most interesting thing in the article is the suggestion that, in previous years, when the Department for Education (DfE) published the pass mark, teachers may have rigged the tests. From the article:

The 2014 phonics check was the first in which the DfE refused to publish the pass mark before the test was administered. Interestingly, the data analysis shows that the clear spike in marks at precisely the pass mark in 2012 and 2013 has disappeared in 2014 – suggesting that some teachers had been tempted to rig the check in a pupil’s favour.

Schools and teachers are judged on how many pupils pass the tests, so there is a massive incentive to cheat. It is interesting that the DfE has stopped publishing the pass mark, but how about getting rid of the pass mark altogether? What does it actually achieve? In 2014, 74 per cent passed. All that tells us is that 74 per cent were over an arbitrary mark and the rest under it. It says little about the distribution of abilities, about the improvement in teaching, or about the relative merits of phonics over other methods.

Consider an extreme example. Set the pass mark at 60 per cent, and take two groups of 100 pupils each.

  • Group A: half get 61 per cent and the other half get 1 per cent – so 50 per cent pass
  • Group B: half get 100 per cent and the other half get 59 per cent – again, 50 per cent pass

Both groups have exactly the same pass rate, but their abilities differ markedly: Group A's mean mark is 31 per cent, while Group B's is 79.5 per cent.
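The arithmetic is easy to check with a short script. This is just a sketch of the hypothetical groups above; the marks and group names are the invented ones from the example:

```python
# Two hypothetical groups of 100 pupils each; the pass mark is 60 per cent.
PASS_MARK = 60
group_a = [61] * 50 + [1] * 50    # half just over the mark, half far below it
group_b = [100] * 50 + [59] * 50  # half at the top, half just under the mark

def pass_rate(marks, pass_mark):
    """Fraction of pupils scoring at or above the pass mark."""
    return sum(m >= pass_mark for m in marks) / len(marks)

def mean(marks):
    """Mean mark for the group."""
    return sum(marks) / len(marks)

for name, group in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: pass rate {pass_rate(group, PASS_MARK):.0%}, "
          f"mean mark {mean(group):.1f}%")
```

Both groups report a 50% pass rate, yet the mean marks (31.0% versus 79.5%) show how much information the single pass/fail figure throws away.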

If I were a teacher and wanted to optimise my pass rate, I would concentrate my efforts not on the most able (they are bound to pass), nor on the least able (they have little chance of passing), but on those just below the pass mark, to try to push them over it. I'd like to think that I wouldn't behave that way, but it seems many teachers were doing exactly that (or something worse). The worst part is that it isn't the teachers' fault: it is the system that pressures them into this kind of behaviour, perhaps even subconsciously.

The real issue is: how does knowing the pass mark actually help? It doesn't. Testing is, of course, not all bad. Knowing an individual child's ability at a skill is essential for seeing where that child needs help or stretching. But the pass rate for a class, school or nation of children tells us nothing useful for improving teaching methods, materials and skills. The distribution of marks may help with that; a binary pass/fail count does not.
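To illustrate what a distribution offers over a single pass rate, here is a sketch using Python's standard `statistics` module on an invented class of 20 marks (the marks are purely illustrative):

```python
import statistics

# Hypothetical marks for a class of 20 pupils (invented for illustration).
marks = [12, 25, 31, 40, 44, 50, 55, 58, 59, 60,
         61, 62, 64, 70, 75, 80, 85, 90, 95, 98]

PASS_MARK = 60
pass_rate = sum(m >= PASS_MARK for m in marks) / len(marks)
print(f"Pass rate: {pass_rate:.0%}")  # one number, and not a very informative one

# The distribution tells a richer story than the binary pass/fail count:
# where the class is clustered, how spread out it is, where the tails are.
q1, median, q3 = statistics.quantiles(marks, n=4)
print(f"Mean {statistics.mean(marks):.1f}, median {median}, "
      f"interquartile range {q1}-{q3}, "
      f"standard deviation {statistics.pstdev(marks):.1f}")
```

The pass rate collapses all of this into a single figure; the summary statistics (and, better still, a histogram of the marks) are what could actually inform decisions about teaching.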

In general, if a target or measure is only going to be used to rank, judge or find fault and it doesn’t actually help with improvement, then it needs to be junked and the whole system of measurement re-examined.

In this case, the DfE's decision not to publish the pass mark in 2014 looks like an admission of exactly that.


