2010 Grading list

General discussions about grading.
Matthew Turner
Posts: 3340
Joined: Fri May 16, 2008 11:54 am

Re: 2010 Grading list

Post by Matthew Turner » Sat Jul 17, 2010 1:35 pm

Sean,
Some people thought that junior grades were inflated last year; the fact that they have on average gone up this year in no way tests whether that hypothesis is true or false. You are assuming that this year's grade is an accurate reflection of a player's performance. However, given that it is the integrity of the grading system which is in question, this seems a rather silly assumption.

Roger de Coverly
Posts: 19349
Joined: Tue Apr 15, 2008 2:51 pm

Re: 2010 Grading list

Post by Roger de Coverly » Sat Jul 17, 2010 1:44 pm

Sean Hewitt wrote:

2. Grading deflation does not appear to have occurred this year. Adults saw a small decrease in their grades, whilst overall the list saw a small increase.
Indeed. On the players in both lists including juniors (using your values), it's a change (inflation) of about 0.3.

(edit) Having thought about this one, it no longer matters what the junior grades are - because they won't be used in any future calculations. So the key point is a modest deflation in the adult grades of 0.3. The analyses seem to suggest it's occurring "above the mean" as well.

I do however recall a very long debate last year in which all the means (average grades) of the published grading lists were checked back to 1993 without disclosing any particularly strong deflationary or inflationary trend. Certainly not one which would justify reducing the gap between the "average" player and the "average" grandmaster by 25 points or more.

Someone commented on that thread that the mean wasn't that good a measure because of the possible biased impact of new players.

As regards the importance of individual grades, it is important though to get the top 100 or 200 correct.

I thought the "evidence" for the revaluation was not the movement of the means (which had been monitored every year and showed nothing conclusive) but the "Grist" graphs which showed underperformance against the straight line assumption. Have these been repeated?

Whether those involved with junior training will think it correct or desirable for (presumably) an improving player to plummet by 30 points remains to be seen.
Last edited by Roger de Coverly on Sat Jul 17, 2010 2:39 pm, edited 1 time in total.

Roger de Coverly
Posts: 19349
Joined: Tue Apr 15, 2008 2:51 pm

Re: 2010 Grading list

Post by Roger de Coverly » Sat Jul 17, 2010 1:52 pm

Matthew Turner wrote: Some people thought that junior grades were inflated last year; the fact that they have on average gone up this year in no way tests whether that hypothesis is true or false. You are assuming that this year's grade is an accurate reflection of a player's performance. However, given that it is the integrity of the grading system which is in question, this seems a rather silly assumption.

When comparing junior grades, I think we need to know whether we are comparing

(a) last year's grade
(b) last year's grade plus last year's junior weighting
(c) this year's grade
(d) this year's grade plus this year's junior weighting

Sean Hewitt

Re: 2010 Grading list

Post by Sean Hewitt » Sat Jul 17, 2010 2:08 pm

Roger de Coverly wrote: I do however recall a very long debate last year in which all the means (average grades) of the published grading lists were checked back to 1993 without disclosing any particularly strong deflationary or inflationary trend. Certainly not one which would justify reducing the gap between the "average" player and the "average" grandmaster by 25 points or more.

Someone commented on that thread that the mean wasn't that good a measure because of the possible biased impact of new players.
Just to correct your selective memory (must be tired after that large portion of pie :D ), the analysis that I have posted today does not include any new players for exactly this reason. The previous debate was around the use of mean averages for the entire list (including new players) which is indeed meaningless for the reasons that you state.

Ian Thompson
Posts: 2719
Joined: Wed Jul 02, 2008 4:31 pm
Location: Awbridge, Hampshire

Re: 2010 Grading list

Post by Ian Thompson » Sat Jul 17, 2010 2:22 pm

Matthew Turner wrote:Sean,
Some people thought that junior grades were inflated last year; the fact that they have on average gone up this year in no way tests whether that hypothesis is true or false. You are assuming that this year's grade is an accurate reflection of a player's performance. However, given that it is the integrity of the grading system which is in question, this seems a rather silly assumption.
A statistic that might help with this is to calculate actual performance in the last year (POINTS/GAMES in the grading list) and compare it with both last year's published grade (GRADE1) and this year's published grade (GRADE). I would have thought that if the differences between junior players' performances and new published grades were generally small, after allowing for junior increments, you'd have more confidence in the grading system than if they were often large. I'd tell you the answer myself, except that my copy of the provisional grading list came with a statement that it was to be treated as confidential until published by the ECF.
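The check proposed here could be sketched roughly as follows. The column names (POINTS, GAMES, GRADE1, GRADE) are taken from the post; the two records and the junior increment value are invented purely for illustration, not real grading data.

```python
# Hypothetical sketch of the proposed comparison. Column names follow
# the post (POINTS, GAMES, GRADE1, GRADE); the records and the junior
# increment value below are invented for illustration only.

JUNIOR_INCREMENT = 10  # assumed value, purely illustrative

juniors = [
    # (name, points, games, last_year_grade, this_year_grade)
    ("Junior A", 2400, 20, 115, 120),
    ("Junior B", 1100, 10, 100, 98),
]

for name, points, games, grade1, grade in juniors:
    performance = points / games                  # actual performance this year
    diff_old = performance - (grade1 + JUNIOR_INCREMENT)
    diff_new = performance - grade
    print(f"{name}: performance {performance:.0f}, "
          f"vs old grade+increment {diff_old:+.0f}, vs new grade {diff_new:+.0f}")
```

A small per-player difference under both comparisons would, on Ian's argument, support confidence in the system; large differences would not.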

Sean Hewitt

Re: 2010 Grading list

Post by Sean Hewitt » Sat Jul 17, 2010 2:42 pm

Ian Thompson wrote:
Matthew Turner wrote:Sean,
Some people thought that junior grades were inflated last year; the fact that they have on average gone up this year in no way tests whether that hypothesis is true or false. You are assuming that this year's grade is an accurate reflection of a player's performance. However, given that it is the integrity of the grading system which is in question, this seems a rather silly assumption.
A statistic that might help with this is to calculate actual performance in the last year (POINTS/GAMES in the grading list) and compare it with both last year's published grade (GRADE1) and this year's published grade (GRADE). I would have thought that if the differences between junior players' performances and new published grades were generally small, after allowing for junior increments, you'd have more confidence in the grading system than if they were often large. I'd tell you the answer myself, except that my copy of the provisional grading list came with a statement that it was to be treated as confidential until published by the ECF.
I don't think that stat tells us anything other than that, if you play fewer than 30 games, the fewer games you play the less accurate your grade will be. That was the case both pre and post the grading fix.

BTW - My grading list asked me to keep the grades confidential and I've done so.

Ian Thompson
Posts: 2719
Joined: Wed Jul 02, 2008 4:31 pm
Location: Awbridge, Hampshire

Re: 2010 Grading list

Post by Ian Thompson » Sat Jul 17, 2010 3:01 pm

Sean Hewitt wrote:
Ian Thompson wrote:A statistic that might help with this is to calculate actual performance in the last year (POINTS/GAMES in the grading list) and compare it with both last year's published grade (GRADE1) and this year's published grade (GRADE). I would have thought that if the differences between junior players' performances and new published grades were generally small, after allowing for junior increments, you'd have more confidence in the grading system than if they were often large. I'd tell you the answer myself, except that my copy of the provisional grading list came with a statement that it was to be treated as confidential until published by the ECF.
I don't think that stat tells us anything other than that, if you play fewer than 30 games, the fewer games you play the less accurate your grade will be. That was the case both pre and post the grading fix.
Two common uses of the grading list are:
1. To determine eligibility for grading restricted events - organisers use GRADE for this
2. To calculate the performance of opponents - the ECF uses a junior's POINTS/GAMES for this

If there is a big difference between GRADE and POINTS/GAMES then I think something is wrong with the grading system, as no more than one of them can be an accurate estimate of a junior's actual playing strength.

Roger de Coverly
Posts: 19349
Joined: Tue Apr 15, 2008 2:51 pm

Re: 2010 Grading list

Post by Roger de Coverly » Sat Jul 17, 2010 3:16 pm

Sean Hewitt wrote: Just to correct your selective memory (must be tired after that large portion of pie :D ), the analysis that I have posted today does not include any new players for exactly this reason. The previous debate was around the use of mean averages for the entire list (including new players) which is indeed meaningless for the reasons that you state.

In practice the use of the entire list is usually a reasonable proxy for the stayers and was the measure used by the BCF/ECF graders for many years to monitor the grading list.

If it had been my decision, I would have told those who wanted to rebuild the grading system to do something useful with it, like convert to Elo, on-line reporting or more frequent lists. The Grist graphs were interesting but not supported by other measures such as the averages from historic lists.

For what it's worth, the same analysis on the 2008 list shows the adult stayers dropping from a mean of 116.6 to 116.0 (7984 players). This is an analysis just done, not one that the advocates of the deflation theory ever presented. Also, it's mostly the higher rated players showing the falls, with the average drop at least a point all the way down to 110. A-rated players did relatively better, with an average loss of just 0.1 overall.

Sean Hewitt

Re: 2010 Grading list

Post by Sean Hewitt » Sat Jul 17, 2010 3:25 pm

Ian Thompson wrote: If there is a big difference between GRADE and POINTS/GAMES then I think something is wrong with the grading system, as no more than one of them can be an accurate estimate of a junior's actual playing strength.
I see what you're saying. The fact that higher graded players play more than lower graded players was identified as an issue, although I didn't think that there was much that could be done about it (apart from reducing the number of games required for a full grade from 30, and that has its own problems!). It seems to simply be a feature of the Clarke system.

At the individual player level, this is simply a function of the number of games played in the current season, e.g.

Take a player who plays 30 games, and accumulates 3000 points. He gets a grade of 100.

The following season he plays 5 games, and accumulates 350 points. He would get a grade of 95.

The difference between GRADE and POINTS/GAME is 25 [undoubtedly a big number]. The more games a player plays, the smaller this number is (and above 30 it's less than 0.5). If this is a problem, then the whole Clarke method must go out of the window.
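The worked example above can be sketched as follows. The padding rule is an assumption on my part (a short season topped up to 30 games at the previous grade, which reproduces the numbers in the post); the function name and signature are mine, not the ECF's.

```python
# A minimal sketch of the worked example above. Assumption: under the
# Clarke method, a season of fewer than 30 games is topped up with
# carried-over games scored at the previous grade. The function name
# and signature are illustrative, not the ECF's.

def clarke_grade(points, games, previous_grade, window=30):
    """Grade = points per game over the last `window` games, padding a
    short season with games scored at the previous grade."""
    if games >= window:
        return points / games
    carried = window - games                  # games borrowed from past seasons
    return (points + carried * previous_grade) / window

season1 = clarke_grade(3000, 30, previous_grade=0)      # 3000/30 = 100
season2 = clarke_grade(350, 5, previous_grade=season1)  # (350 + 25*100)/30 = 95
performance = 350 / 5                                   # 70
gap = season2 - performance                             # the 25-point difference
```

The gap shrinks as the number of games in the season approaches 30, matching Sean's observation.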

The whole grading list has a mean average of 133.3 and a median of 133. POINTS/GAME for the whole list is 139.8 - a difference of 6.6, including junior supplements. I don't know if this is big, small or about right. Either way, I'm not sure that much could be done to change it, even if desirable.

Sean Hewitt

Re: 2010 Grading list

Post by Sean Hewitt » Sat Jul 17, 2010 3:38 pm

Roger de Coverly wrote: In practice the use of the entire list is usually a reasonable proxy for the stayers and was the measure used by the BCF/ECF graders for many years to monitor the grading list.
No wonder the monitoring of the grading list was so poor and was allowed to get into such a mess if this is all they did.
Roger de Coverly wrote:If it had been my decision, I would have told those who wanted to rebuild the grading system to do something useful with it, like convert to Elo, on-line reporting or more frequent lists.
I agree entirely. But Council would never have gone for Elo, sadly, and had already rejected more frequent lists.
Roger de Coverly wrote:For what it's worth, the same analysis on the 2008 list shows the adult stayers dropping from a mean of 116.6 to 116.0 (7984 players). This an analysis just done, not one that the advocates of the deflation theory ever presented.
Thanks for confirming that the rate of adult decay has halved as a result of the grading fix. Excellent news.
Roger de Coverly wrote:Also it's mostly the higher rated players showing the falls with the average drop at least a point all the way down to 110. A-rated players did relatively better with an average loss of just 0.1 overall.
Of course! Both the "Grist" graphs and my own analysis showed that higher graded players were unable to achieve the results required against lower graded opposition to maintain their grades. The fact that these higher graded players have not seen their grades go down at the same rate this year shows that the grading fix has gone some way to correcting this problem. I expect the Grist graphs will show that the performance of players this year is more closely predicted by their grades than in previous years.

Roger de Coverly
Posts: 19349
Joined: Tue Apr 15, 2008 2:51 pm

Re: 2010 Grading list

Post by Roger de Coverly » Sat Jul 17, 2010 4:10 pm

Sean Hewitt wrote: Of course! Both the "Grist" graphs and my own analysis showed that higher graded players were unable to achieve the results required against lower graded opposition to maintain their grades.
The practical point though was that if you looked at the time series of grades for the players who had been in the system for a long time, then their personal series showed little signs of deflation. If the "point a year" theory was correct, then adult players in the 170s in the 1970s would have been in the 130s by now.

I think there's a hidden issue here - that under-performance as measured by the Grist graphs may not in fact have any long term effect on the published grading numbers. Simple enough: if you score 58% when you should score 60%, and 42% when you should score 40%, then provided you are "in the middle", your grading number isn't going anywhere. Say you play half your games against players 10 points above you, scoring 42%, and the other half against players 10 points below you, scoring 58%; on average you've scored 50% against a field equal to your grade, so your number doesn't change. Your games will however show up as anomalies on the Grist graph.
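Roger's point can be checked with a small calculation, using the standard ECF per-game rule (you score your opponent's grade, plus 50 for a win or minus 50 for a loss, draws counting the opponent's grade exactly). The 150 grade below is a hypothetical example; scores are treated as win fractions for simplicity.

```python
# Sketch of the cancellation described above, using the standard ECF
# per-game scoring rule. The player's grade of 150 is hypothetical.

def per_game_points(opponent_grade, score):
    """Average grading points per game at a given score rate:
    opponent's grade + 50 for a win, - 50 for a loss."""
    return opponent_grade + 50 * (2 * score - 1)

grade = 150
vs_lower = per_game_points(grade - 10, 0.58)    # 140 + 8, i.e. about 148
vs_higher = per_game_points(grade + 10, 0.42)   # 160 - 8, i.e. about 152
new_grade = (vs_lower + vs_higher) / 2          # averages back to 150
```

So the player under-performs the straight-line expectation in both halves of his schedule (58% instead of 60%, 42% instead of 40%), yet his grade is unchanged - exactly the anomaly that would show on a Grist graph without moving the published numbers.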

Ian Thompson
Posts: 2719
Joined: Wed Jul 02, 2008 4:31 pm
Location: Awbridge, Hampshire

Re: 2010 Grading list

Post by Ian Thompson » Sat Jul 17, 2010 4:56 pm

Sean Hewitt wrote:The difference between GRADE and POINTS/GAME is 25 [undoubtedly a big number]. The more games a player plays, the less this number is (and above 30 it's less than 0.5). If this is a problem, then the whole Clarke method must go out of the window.
I don't think the Clarke method is the problem. The problem is that for some purposes only the last year's results are taken into account for a junior, and for other purposes up to 3 years worth of results are taken into account. This leads to anomalous situations, e.g.

1. Junior A's published grade has gone up by 1 point in the new 2010 grading list, but his performance in the last year was 28 points less than his new grade, and it's that lower number that is used to calculate his opponents' performances.
2. Junior B's published grade has gone down by 2 points in the new 2010 grading list, but his performance in the last year was 51 points more than his new grade, and it's that higher number that is used to calculate his opponents' performances.

My argument is that if I can find anomalies in the grading system, then the system may be flawed. To prove the system is sound it's not sufficient to show that, on average, the differences even out.

Roger de Coverly
Posts: 19349
Joined: Tue Apr 15, 2008 2:51 pm

Re: 2010 Grading list

Post by Roger de Coverly » Sat Jul 17, 2010 5:25 pm

Ian Thompson wrote:1. Junior A's published grade has gone up by 1 point in the new 2010 grading list, but his performance in the last year was 28 points less than his new grade, and it's that lower number that is used to calculate his opponents' performances.
Expressed another way, you can look up a junior's 2009 grade and next week you may be able to look up their 2010 grade. Neither is much help if you want to audit your personal grading calculation, since you will have to wait for the direct member's print in order to find out what they counted as when you played them. I know it's the same as for ungraded players but their new grade (if they have one) is usually a proxy for the grade they were deemed to have when you played them.

The general feeling last year was that the adult list was more or less in the right order as was the junior list. There were doubts about whether the two lists merged correctly though. As it was almost September last year before there was a final list, everyone wanted to get on with the new season, so the issue died.

Sean Hewitt

Re: 2010 Grading list

Post by Sean Hewitt » Sat Jul 17, 2010 6:29 pm

Ian Thompson wrote:The problem is that for some purposes only the last year's results are taken into account for a junior, and for other purposes up to 3 years worth of results are taken into account. This leads to anomalous situations, e.g.

1. Junior A's published grade has gone up by 1 point in the new 2010 grading list, but his performance in the last year was 28 points less than his new grade, and it's that lower number that is used to calculate his opponents' performances.
2. Junior B's published grade has gone down by 2 points in the new 2010 grading list, but his performance in the last year was 51 points more than his new grade, and it's that higher number that is used to calculate his opponents' performances.
I don't know what grade the junior would be counted as in the situations you describe. But if it is as you suggest, these scenarios could only occur where the player concerned has played only a small number of games, and the more games a player plays the less likely this discrepancy would be to occur.

A quick look shows that 170 juniors have a performance at least 10 points smaller or larger than their grade (after deducting the junior increment). These juniors played 1700 games out of the total of 205,594; just 0.8% of the total of graded games played last year.
Ian Thompson wrote:My argument is that if I can find anomalies in the grading system, then the system may be flawed.
An anomaly neither proves nor disproves anything. Anomalies have to have a material effect on the overall product for the system to be flawed.
Ian Thompson wrote:To prove the system is sound it's not sufficient to show that, on average, the differences even out.
Well, I agree but that's hardly new. I think that I have said exactly that for the last 4 years. Sadly, not everyone appears to agree.

isaac wallis
Posts: 36
Joined: Sat Jul 25, 2009 7:32 pm

Re: 2010 Grading list

Post by isaac wallis » Sat Jul 17, 2010 9:21 pm

Personally I'm most impressed with the adult who's gone up 54 points. That can't happen very often. It's not Jonathan Hawkins is it? :lol:
