Progress 8 using 2017 grade conversions

Sean Harford wrote an interesting blog post earlier this month on the OfSTED blog attempting to reassure us that OfSTED inspectors will

not request predictions for cohorts about to take examinations

but they

will just ask how schools have assessed whether their pupils are making the kind of progress they should in their studies and if they are not, what their teachers have been doing to support them to better achievement.

So… they won’t ask for predictions, just how much progress pupils are making.

Anyway, I’ve given up trying to understand exactly what OfSTED want – I think it depends on the type of inspector you receive and the direction of the wind on the particular day of the inspection.

One thing they are clear on is that we shouldn’t be trying to ‘estimate’ progress 8 for pupils and subjects.  I do sort of understand why they say this – the actual measures can only be accurately produced AFTER that particular year group have sat the exams, because the attainment 8 estimates used to produce a progress 8 measure are only finalised AFTER the exams have been sat, marked and graded.  However, this is the measure we are going to be measured against, and whilst we can still look (at least for the current year 8 upwards) at levels of progress, some idea of a progress 8 figure would surely be useful to highlight whether particular groups are not performing in the way we would hope – in other words, not making the amount of progress we would like them to – so that we could target some intervention?

Last year a number of schools were hit when the actual P8 measures came out: they’d been using the previous year’s A8 estimate data to ‘estimate’ that particular cohort, and when the ‘real’ A8 figures came out there were differences, meaning a number of schools who had been confidently predicting decent P8 figures realised things weren’t as good as they had originally thought.  As a rule, similarly to grade boundaries on exam papers, these figures don’t alter much year on year and have usually been good enough to give reasonable predictions of performance.  (Of course mathematics and English teachers have a double whammy this year because we haven’t really a clue WHAT a 4, 5 or 9 will actually ‘look like’.)

This year, however, is the first of two years where we have an amalgam of numerical and letter grades (just English and mathematics on numbers this year, and most subjects apart from legacy qualifications in 2018).  If you didn’t know already, in previous years (2016 being the last) G to A* was given a points score of 1 to 8 – nice and easy – to allow numerical calculations about performance and hence P8.  This means that in 2016 a C grade was worth 5 points.

When P8 is calculated it is measured against an Attainment 8 estimate.  This is derived from a KS2 fine point score (based on mathematics and English/reading at the end of KS2); these figures are available from the DfE.  The pupils sit their exams and a collection of 8 qualifications, separated into 3 baskets, is used to derive an ACTUAL Attainment 8.  Basket 1 is mathematics and English (both double counted – though from 2017 onwards English will only be double counted IF a student sits BOTH language and literature).  Basket 2 is the EBacc basket, containing the best 3 qualifications from the sciences (including computer science), humanities (history and geography) and MFL.  Finally, basket 3 is the open basket, where the best 3 ‘other’ qualifications are counted (this could include one or more of those which could have been counted in the first 2 baskets).  These grades are converted to numbers and added together to give an Attainment 8 figure.  The difference between this and the A8 estimate is then found and divided by 10 (because although only 8 qualifications are used, the double counting of English and mathematics gives 10 scoring slots) to give a progress 8 figure for that pupil.  Positive means they’ve done better than expected, negative worse.
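To make the mechanics concrete, here is a minimal sketch of that arithmetic in Python.  The basket structure and the 2016 point scores are as described above; the function names and the example pupil are my own, purely illustrative choices, not an official DfE calculation.

```python
# A minimal sketch of the A8/P8 arithmetic, using the 2016 legacy points (G = 1 ... A* = 8).
LEGACY_POINTS_2016 = {"G": 1, "F": 2, "E": 3, "D": 4, "C": 5, "B": 6, "A": 7, "A*": 8}

def attainment8(maths, english, ebacc_best3, open_best3, points=LEGACY_POINTS_2016):
    """Sum the 10 scoring slots: maths and English double counted,
    plus the best 3 EBacc and best 3 open qualifications."""
    return (2 * points[maths]                      # basket 1: mathematics, double counted
            + 2 * points[english]                  # basket 1: English, double counted
            + sum(points[g] for g in ebacc_best3)  # basket 2: best 3 EBacc subjects
            + sum(points[g] for g in open_best3))  # basket 3: best 3 'other' qualifications

def progress8(actual_a8, a8_estimate):
    """P8 = (actual A8 - estimated A8) / 10 slots."""
    return (actual_a8 - a8_estimate) / 10

# e.g. a pupil with B in maths, C in English, B/C/C in EBacc and C/C/D in open:
# 2*6 + 2*5 + (6+5+5) + (5+5+4) = 52
print(attainment8("B", "C", ["B", "C", "C"], ["C", "C", "D"]))   # 52
```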

Now, in 2016, let us take an average pupil who gained a fine point score of 4.5 at KS2.  This gives them an A8 estimate of 47.85.  Let us now imagine they got a C in every qualification they sat.  Using 5 points for a C – the figure used in 2016 and prior years – this gives an actual A8 of 50.

This gives a difference of +2.15, which when divided by 10 gives +0.215 – on average about a fifth of a grade better than predicted.  Quite healthy, and if your school’s P8 figure was this you’d be fairly happy, I’d say!
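As a quick sanity check of that arithmetic, here is a throwaway snippet – the numbers are those from the example above, the variable names are my own:

```python
# 2016 worked example: KS2 fine point score 4.5 -> A8 estimate 47.85, C in everything.
a8_estimate = 47.85        # 2016 estimate for a KS2 fine point score of 4.5
actual_a8 = 10 * 5         # ten scoring slots, all at C = 5 on the 2016 scale
p8 = (actual_a8 - a8_estimate) / 10
print(actual_a8, round(p8, 3))   # 50 0.215
```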

Now let us suppose you are trying to calculate a similar ‘estimate’ for a current pupil who will be taking examinations in 2017.  To make an estimate you’d have to use last year’s A8 estimate figures which, remember, are derived from the fact that a C = 5.  In 2017 English and mathematics will use grades 1 to 9, and we are told the old grade C will be pegged at the new 4.  This means the same student used in the 2016 example would gain 4 points for English and for mathematics (double counted – assuming they’ve been entered for both language and literature).  All other subjects keep their letter grades.  The conversions that will be used are:

G = 1, F = 1.5, E = 2, D = 3, C = 4, B = 5.5, A = 7, A* = 8.5

So if they repeated their performance and got Cs across the board, their A8 figure in 2017 (the same performance as in 2016) would be 40.  Measured against the previous cohort’s estimate this would give a difference of -7.85, which gives a P8 of -0.785.  Enough, if repeated, to get the phone call from OfSTED in September.
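Re-running the same all-C pupil through the 2017 conversions shows the swing.  Again, this is only a sketch – the 47.85 estimate is still the 2016 figure, which is exactly the mismatch being described:

```python
# The same all-C pupil re-scored using the 2017 conversions above.
LEGACY_POINTS_2017 = {"G": 1, "F": 1.5, "E": 2, "D": 3, "C": 4, "B": 5.5, "A": 7, "A*": 8.5}

maths_points = 4     # old C pegged at the new grade 4 on the 9-1 scale
english_points = 4   # double counted, assuming both language and literature are sat

actual_a8_2017 = (2 * maths_points
                  + 2 * english_points
                  + 3 * LEGACY_POINTS_2017["C"]   # EBacc basket, all Cs
                  + 3 * LEGACY_POINTS_2017["C"])  # open basket, all Cs
print(actual_a8_2017)                             # 40
print(round((actual_a8_2017 - 47.85) / 10, 3))    # -0.785  (vs +0.215 on the 2016 scale)
```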

This highlights the big danger of using last year’s A8 estimates to estimate progress with your current cohorts.  Not only are you not comparing the same pupils (though in reality the year-on-year differences aren’t massive), you are not using the same grading structures.

Nowhere can you find the national data set for 2016 measured using the 2017 grade conversions, to give at least a possible model for truly measuring how much progress pupils are making.  However – and I’m not going to explain HOW I got this – I have a set of figures based on the national data set from 2016 which gives a reasonable measure of the fine point scores and what the A8 estimates WOULD have been using the C = 4, A* = 8.5 scale.  I’ve then, using the same proportions used in 2016, broken this figure down into the English, mathematics, EBacc and Open A8 estimates (a rough sketch of that split follows the link below).  This can be downloaded by clicking this link.

Attainment 8 Estimates using National Data Set
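For anyone who wants to see roughly how that breakdown works, here is a sketch of a proportional split.  The 2/2/3/3 slot shares below are placeholder weights of my own; the actual 2016 proportions used in the spreadsheet differ slightly:

```python
# Rough sketch: split an overall A8 estimate into English, mathematics, EBacc
# and open components. The slot shares (2+2+3+3 of 10) are placeholder weights;
# the real 2016 proportions used in the spreadsheet differ slightly.
SLOT_SHARE = {"english": 0.2, "mathematics": 0.2, "ebacc": 0.3, "open": 0.3}

def split_estimate(total_a8_estimate):
    """Break an overall A8 estimate into the four component estimates."""
    return {basket: total_a8_estimate * share for basket, share in SLOT_SHARE.items()}

# Prints the four component estimates for an overall estimate of 47.85
# (roughly 9.57 / 9.57 / 14.36 / 14.36).
print(split_estimate(47.85))
```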

Please remember, however, that I take absolutely NO responsibility for the total reliability of this.  In the spirit of openness, and the fact that we are all battling the same unbending system, I just thought I would share what I am using to make comparisons in my school.  All I know is that this data is based on 2016 exam performance measured using the 2017 grade conversions.  You are still not comparing like with like in terms of students, and no account is taken of the effect of the increased difficulty of the mathematics and English examinations – or any knock-on effects this may have on setting the C boundary in other subjects.  Use it at your peril!

We are looking at using this data to give us some indication of progress across the different cohorts on a subject-by-subject basis.  English and mathematics have their own specific A8 estimates; for all other subjects we are using the EBacc measure for science, humanities and MFL, and the open measure for everything else.  Below is a mock-up of what the mathematics picture might look like using this data, and after that a sketch of how the subject-level comparison could work.
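As a rough sketch of how that subject-by-subject comparison could work (the subjects, grades and estimate values here are illustrative, not taken from our tracking system or the spreadsheet):

```python
# Illustrative sketch: compare a pupil's current working grade against the
# per-slot share of the relevant basket estimate. Positive = working above
# the estimate, negative = below. Subjects, grades and estimates are made up.
LEGACY_POINTS_2017 = {"G": 1, "F": 1.5, "E": 2, "D": 3, "C": 4, "B": 5.5, "A": 7, "A*": 8.5}

# Which A8 estimate each (non-English/maths) subject is compared against.
BASKET_FOR_SUBJECT = {"biology": "ebacc", "history": "ebacc", "french": "ebacc",
                      "art": "open", "drama": "open"}

component_estimates = {"ebacc": 14.36, "open": 14.36}   # each basket covers 3 slots

def subject_indicator(subject, working_grade):
    """Difference between the working grade (in points) and the per-slot estimate."""
    per_slot_estimate = component_estimates[BASKET_FOR_SUBJECT[subject]] / 3
    return LEGACY_POINTS_2017[working_grade] - per_slot_estimate

print(round(subject_indicator("history", "B"), 2))   # 5.5 - 4.79 = +0.71
```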

Would love to hear what others are doing and happy to discuss our process further.


Mr Chadburn
