I've struggled to find players that I've played during the last 15 years who have shown "unexplained" reductions in their grades. Mine is almost unchanged, and I'm just as many points below Mark Hebden (or Peter Sowray, for that matter) as I ever was. So can anyone else come up with examples? Is there a particular area where it's more prevalent, or is my memory going?
There's some sort of problem at the sub-100 level, where you seem to get pockets of (adult) players stuck with really low grades. Howard Grist referred to this at his own club. What he didn't indicate was whether they really are that bad, or whether they are all improving but only ever play against the same really low-graded players. Whilst we have some confidence that a 190 player is much better (9:1 in expected score terms) than a 150 player, who is in turn correspondingly better than a 110 player, it's difficult to have quite the same trust in the grades of 90 v 50 v 10. At that level it surely cannot be that rare to improve by 50 points or more in a year. Over time this starts to cause problems further up the system, when the 75 player (who should be 100) plays against the real 100 player.
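To make the 9:1 figure concrete: under the old linear ECF model (with the 40-point rule capping any grade difference at 40), a 40-point gap predicts a 90% score whatever the absolute level, which is precisely the assumption that looks doubtful at the bottom of the list. A rough sketch (my own illustration, not anyone's official code):

```python
def ecf_expected_score(own_grade: float, opp_grade: float) -> float:
    """Expected score per game under the old linear ECF model.

    The grade difference is capped at 40 points (the '40-point rule'),
    so the prediction never exceeds 90% however large the real gap is.
    """
    diff = max(-40.0, min(40.0, own_grade - opp_grade))
    return 0.5 + diff / 100.0

# The model predicts the same 9:1 odds (90%) for every 40-point gap,
# whether it's 190 v 150 or 50 v 10 - which is exactly the equivalence
# I'm doubting at the bottom of the scale.
for own, opp in [(190, 150), (150, 110), (90, 50), (50, 10)]:
    print(f"{own} v {opp}: expected score {ecf_expected_score(own, opp):.0%}")
```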
A change in recent years is the switch to using an estimation program for new players rather than the grader's guesstimate. By its nature, an estimation program assumes constant strength over the measuring period, which is a suspect assumption for a rapidly improving player. Graders usually did their work at the end of a season, so they may have been more influenced by recent results.
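I don't know the internals of the actual estimation program, but the general idea is presumably something like the sketch below: iterate a performance-grade calculation over the whole period's results until it settles, which is where the constant-strength assumption gets baked in. The function and the sample season are my own invention for illustration:

```python
def estimate_grade(results, initial=100.0, iterations=50):
    """Iteratively estimate a single grade from a season's results.

    results: list of (opponent_grade, score) with score in {0, 0.5, 1}.
    Each pass recomputes the performance grade treating every game as
    played at one fixed strength - the constant-strength assumption.
    """
    grade = initial
    for _ in range(iterations):
        per_game = []
        for opp, score in results:
            # Apply the 40-point rule relative to the current estimate.
            capped_opp = max(grade - 40.0, min(grade + 40.0, opp))
            per_game.append(capped_opp + 100.0 * (score - 0.5))
        grade = sum(per_game) / len(per_game)
    return grade

# An improving junior: lost early games, won later ones. The single
# estimate (100 here) lands between their start and end strength,
# under-stating what they had become by the season's close.
season = [(100, 0), (100, 0), (100, 0.5), (100, 1), (100, 1)]
print(estimate_grade(season))
```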
For no particular reason other than that it might work empirically, I'd suggest that all new players be given a minimum grade (for the purposes of their opponents' calculations) of, say, 50. I think this would inject points at the foot of the system. It might not be as inflationary as it sounds, because some of the new players (juniors) probably only play for a couple of years.
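Mechanically, the suggestion amounts to applying a floor when opponents' grades enter the calculation. A sketch of what I have in mind, ignoring the 40-point rule for brevity (the floor of 50 is just the figure suggested above):

```python
def grade_with_floor(results, floor=50.0):
    """Season grade where each opponent counts as at least `floor`.

    results: list of (opponent_grade, score). New or barely-graded
    opponents enter the sum at the floor rather than at their
    (possibly tiny) estimated grade, injecting points at the bottom.
    """
    per_game = [max(opp, floor) + 100.0 * (score - 0.5)
                for opp, score in results]
    return sum(per_game) / len(per_game)

# Beating three opponents graded 10, 20 and 60: with the floor the
# wins over the weakest two are worth 100 points each instead of
# 60 and 70.
print(grade_with_floor([(10, 1), (20, 1), (60, 1)]))            # 103.33...
print(grade_with_floor([(10, 1), (20, 1), (60, 1)], floor=0))   # 80.0
```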
I've never been completely clear what the Hewitt investigations consisted of. It looks as if the estimation program was run with no pre-existing grades, and then the results from this program were compared against the actual published grades (and didn't fit). As well as this, shouldn't there have been an analysis which looked at the result of every game and grouped them both by the grade difference and by the absolute value of the players' grades? You would then have a statement that 125 players scored as well against 100 players as 200 players did against 175 players (or not). Perhaps the win/draw/loss ratios would be different. Maybe there has been such an analysis, but I don't think it's been published.
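The sort of analysis I mean could be sketched like this: bucket every game by the stronger player's grade band and by the grade difference, then compare the stronger player's scoring rate across bands at the same difference (one could equally tally win/draw/loss counts per cell). The band and bucket widths here are arbitrary choices of mine:

```python
from collections import defaultdict

def score_by_band_and_diff(games, band_width=25, diff_width=25):
    """Group games by (stronger player's grade band, grade difference)
    and report the stronger player's average score in each cell.

    games: list of (grade_a, grade_b, score_a), score_a in {0, 0.5, 1}.
    If 125s really do score as well against 100s as 200s do against
    175s, cells with the same difference should agree across bands.
    """
    cells = defaultdict(list)
    for ga, gb, sa in games:
        # Orient each game from the stronger player's side.
        hi, lo, s = (ga, gb, sa) if ga >= gb else (gb, ga, 1.0 - sa)
        band = (hi // band_width) * band_width
        diff = ((hi - lo) // diff_width) * diff_width
        cells[(band, diff)].append(s)
    return {k: sum(v) / len(v) for k, v in sorted(cells.items())}

# Hypothetical games: (grade_a, grade_b, score_a).
games = [(125, 100, 1), (125, 100, 0.5), (200, 175, 1), (200, 175, 1)]
for (band, diff), avg in score_by_band_and_diff(games).items():
    print(f"band {band}+, diff {diff}-{diff + 24}: avg score {avg:.2f}")
```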