Twenty-eight years ago, I passed my driving test. I recall little about the experience but can safely assume that I was asked to perform a hill start, reverse around a corner, carry out an emergency stop and complete a few other well-rehearsed manoeuvres. All in one session, with no feedback until the end.
Had my instructor (Big Norman, to his friends) been asked to simply inform the examiner, with evidence, that I was capable of all of the above and more, it would not have been difficult (assuming he had a dash cam in his 1994 Ford Escort, which is unlikely).
In practice, I had indeed demonstrated all of those abilities, whilst also making numerous mistakes in my pursuit of perfection. Statistically, it would have taken me a couple of attempts to demonstrate the pass standard, and on another day I might have erred, as I so often had in practice. This is the difference between the 2021 system of assessment and that of 2019 and before. And now certain schools are, as predicted, being accused of “gaming the system”. In other words, Big Norman is being accused of cheating or even lying.
“Candidates were not rank-ordered and schools were not working to the same criteria.”
In the spring of 2021, we were asked to design an assessment system for our schools based on set criteria, within which different options could be taken. Class work could be used, as could a school’s own exams or past papers from the exam boards. Re-takes were an option and the assessments could be taken at different times. Once designed, the assessment strategies had to be approved. And they were. So: no standardisation between different centres, and significantly more opportunities for students to demonstrate their ability.
Furthermore, the quota of grades for each centre which the algorithm of 2020 had been designed to deliver (based on past performance) had been scrapped and replaced with nothing. Candidates were not rank-ordered and schools were not working to the same criteria. It was all about producing the evidence that a candidate was able to perform to a certain level. That is what all schools had to do.
Remove standardisation. Remove random failure or having a bad day. Significantly increase the number of opportunities to demonstrate ability in smaller assessments. Assess using papers that students have already had access to (past papers are hardly top secret) and then return to the driving test analogy. How many of you reading this, who failed a driving test, feel pretty certain that you would have passed first time if the criteria had been as described above?
“The dramatic rise in A*s in some schools may look very strange but small cohorts will inevitably produce more stark statistics.”
There is a broader issue at stake when responding to the press coverage of independent schools “gaming” the system. Who were we grading? Individual students or whole cohorts? There is a difference, when you think about it. Reduce it to a small ‘A’ Level class of 10 mathematicians, where three might be nailed-on certainties to get an A* and two could be borderline: capable of producing the goods some of the time, and therefore with a chance of delivering in an exam.
If those two students demonstrate, in short assessments using past papers that they have already practised when working independently, that they can deliver an A* performance, should the school deny them that right in order to keep its A* percentage closer to 30 per cent than to 50 per cent? The latter represents a massive jump, but that is not gaming the system. Rather, it is following the system that had been designed.
Easy headlines, drawing easy conclusions that blame an easy target, are not new in journalism, and we know that few will leap to the defence of independent or selective schools. But that is how it is.
“Perhaps it would have been fairer to have told deserving students they wouldn’t get an A* because it would be too statistically awkward?”
Schools with a selective intake, smaller cohorts, the resources to continue teaching and learning throughout a pandemic, and fewer absences (among staff and students) were at a massive advantage. Blend that with a system designed to enable success and to remove the risk of failure, and it seems to me that the press ought to be targeting their fury at the people who designed the system and failed to address inequality, rather than at the schools that were always going to do well in that system.
The statistics do look rather startling, with some schools’ A* rates leaping dramatically between 2019 and 2021, but small numbers behave differently to large ones. It doesn’t make as theatrical a headline, but the multiplier between the ‘A’ Level A* rate in 2019 and in 2021 is 2.78 for state schools, whereas for independent schools it is 2.45. I’m no statistician, but that suggests that high-performing students in state schools saw their grades rise more significantly.
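To make the arithmetic explicit: the “multiplier” here is simply the 2021 rate divided by the 2019 rate. As a sketch using only the figures quoted above (the underlying percentages are not given in this piece):

\[
\text{multiplier} = \frac{\text{A* rate in 2021}}{\text{A* rate in 2019}}, \qquad 2.78 \ (\text{state}) > 2.45 \ (\text{independent})
\]

A larger multiplier means a larger proportional rise, whatever the starting rate, which is the sense in which state-school grades rose “more significantly”.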
“The slightest analysis of the process would conclude that it was structurally designed to enable students to succeed.”
“Grade inflation” was inevitable in 2021. The slightest analysis of the process would conclude that it was structurally designed to enable students to succeed. Schools that followed the process to the letter, as we all did, would have predicted a significant rise in top grades, which we all did.
The problem was the system, not schools exploiting the system. The dramatic rise in A*s in some schools may look very strange, but small cohorts will inevitably produce starker statistics. The national picture shows that the rise was across the board and that the multiplier between 2019 and 2021 was marginally higher in the state sector.
Perhaps it would have been fairer to have told students that, despite demonstrating (in line with the criteria) the ability to gain an A*, they weren’t going to get one because it would be too statistically awkward? Depends on how you define “fair”, I suppose.