GRADING ANOMALIES

General discussions about ratings.
User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Mon Jun 25, 2007 12:01 pm

My proposal for a new ECF grading system...

Rules...

(Corrected Grading System) CGS's rule reads...

Rule CGS: For a win you score your opponent's grade plus 50; for a draw, your opponent's grade; and for a loss, your opponent's grade minus 50. Note that, if your opponent's grade differs from yours by more than 50 (not 40) points, it is taken to be exactly 50 (not 40) points above (or below) yours. At the end of the season an average of points-per-game is taken, and that is your new grade.

(Amended Grading System) AGS's rule reads...

Rule AGS: For a win you score average grade plus 25; for a draw, average grade; and for a loss, average grade minus 25. Note that, if your opponent's grade differs from yours by more than 50 points, it is taken to be exactly 50 points above (or below) yours. Average grade is half of the sum of your and your opponent's grade. At the end of the season an average of points-per-game is taken, and that is your new grade.
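The two rules can be sketched in code (an illustrative Python sketch; the function names are my own, and a win/draw/loss is encoded as 1/0.5/0):

```python
def clamp_opponent(own, opp, cap=50):
    """Treat an opponent graded more than `cap` points away as exactly
    `cap` points above (or below) one's own grade."""
    return max(own - cap, min(own + cap, opp))

def cgs_game_points(own, opp, result):
    """Rule CGS: opponent's (clamped) grade +50 for a win, +0 for a draw, -50 for a loss."""
    opp = clamp_opponent(own, opp)
    return opp + 50 * (2 * result - 1)

def ags_game_points(own, opp, result):
    """Rule AGS: average of the two (clamped) grades +25 win, +0 draw, -25 loss."""
    opp = clamp_opponent(own, opp)
    avg = (own + opp) / 2
    return avg + 25 * (2 * result - 1)
```

For example, a 100-graded player beating a 120-graded opponent scores 170 under CGS and 135 under AGS; against a 200-graded opponent the 200 is first clamped to 150.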

Grade calculation steps...

Step 1: Calculate new grades for juniors. For this calculation use CGS's rule if a junior plays a non-junior and AGS's rule if a junior plays a junior. These grades will be the new junior grades (and may be corrected in step 4 below).

Step 2: Calculate average junior grades. This grade is calculated as an average (arithmetic mean) of the old grade in the previous season and the new grade calculated in step 1 (i.e. old grade plus new grade, divided by two). This calculation is done only for juniors.

Step 3: Calculate new grades for non-juniors. For this calculation use AGS's rule. The junior grades for this calculation are taken to be the average grades calculated in step 2. These grades will be the new non-junior grades (and may be corrected in step 4 below).

Step 4: Correct the calculated grades of less active (junior and non-junior) players. For players (both junior and non-junior) who played less than 30 games in the season the new grade is taken to be an average (arithmetic mean) of the old grade (in previous season) and a new grade calculated in step 1 (juniors) or step 3 (non-juniors) (i.e. old grade plus new grade divided by two).

Note: if a player does not have a grade in a previous season the grade is estimated.

Note: only games played in the current season should be used for grade calculation, and no games from previous seasons should be taken into account.
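The steps above could be sketched roughly as follows (Python; a simplified illustration of the steps as I read them, not a reference implementation, and the helper names are mine):

```python
def season_average(game_points):
    """End-of-season average of points-per-game (Rules CGS/AGS)."""
    return sum(game_points) / len(game_points)

def blend_with_old(old_grade, new_grade):
    """Arithmetic mean of old and new grade (as in Steps 2 and 4)."""
    return (old_grade + new_grade) / 2

def final_grade(old_grade, game_points, min_games=30):
    """Step 4: players with fewer than `min_games` games get the blended grade."""
    new = season_average(game_points)
    if len(game_points) < min_games:
        new = blend_with_old(old_grade, new)
    return new
```

For instance, a player graded 100 who plays only two games, scoring 170 and 70 (one win and one loss against 120-graded opposition under CGS), averages 120 but, having played fewer than 30 games, ends on (100 + 120) / 2 = 110.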

Facts...

The above proposed grading system should...

1. make juniors' grades change more rapidly than non-juniors' grades,

2. address the problem of grade deflation caused by rapidly improving juniors (non-juniors would not lose so many points by losing to rapidly improving juniors),

3. make the grade of more active players change more rapidly than the grade of less active players (the performance of a more active player should be statistically more significant than that of a less active player),

4. impose a limit of 50 grading points' difference beyond which a stronger player gains no more points even if he or she wins (addressing a problem of possible inflation of stronger players' grades and possible deflation of weaker players' grades, which I think might be present in the current grading system, GS),

5. be mathematically sound (the total grade is not preserved, but this was done deliberately in order to address the grade deflation problem caused by rapidly improving juniors and to account for the statistical significance of players' performance in relation to the number of games played in the season; separately, CGS and AGS are each mathematically sound and preserve the total grade).

Please...

I kindly ask the relevant people to consider adopting and implementing the grading system described above. Thanks a lot.

My idea was to improve the grading system using a sound mathematical basis while at the same time keeping the calculation as simple as possible and as similar as possible to the present one.

The targeted season for the system implementation could be 2007/2008.

Please do consider it.

Thanks a lot.
Robert Jurjevic
Vafra

Howard Grist
Posts: 84
Joined: Tue Dec 12, 2006 1:14 pm
Location: Southend-on-Sea

Post by Howard Grist » Mon Jun 25, 2007 2:16 pm

Robert,

I am responsible for the ECF central grading program which performs the grading calculations.
Rule CGS: For a win you score your opponent's grade plus 50; for a draw, your opponent's grade; and for a loss, your opponent's grade minus 50. Note that, if your opponent's grade differs from yours by more than 50 (not 40) points, it is taken to be exactly 50 (not 40) points above (or below) yours. At the end of the season an average of points-per-game is taken, and that is your new grade.
As I previously mentioned, I believe this is too generous to the weaker player, as the stronger player only scores about 98% in this situation. That being said, I believe, on the basis of some (but not much) analysis, that the current 40-point rule is too generous to the stronger player. So there could be a change here.
Rule AGS: For a win you score average grade plus 25; for a draw, average grade; and for a loss, average grade minus 25. Note that, if your opponent's grade differs from yours by more than 50 points, it is taken to be exactly 50 points above (or below) yours. Average grade is half of the sum of your and your opponent's grade. At the end of the season an average of points-per-game is taken, and that is your new grade.
The AGS simply halves the grading change that you would receive under the current system. If you have a group of 30 players with an average grade of 120, and a player graded 100 comes along and scores 50% against them, then he will have a grade of 110 next season. If a player graded 140 comes along and scores 50% against this same group of players, he will have a grade of 130. Surely the available evidence tells you that these two players are of the same playing strength?
Step 4: Correct the calculated grades of less active (junior and non-junior) players. For players (both junior and non-junior) who played less than 30 games in the season the new grade is taken to be an average (arithmetic mean) of the old grade (in previous season) and a new grade calculated in step 1 (juniors) or step 3 (non-juniors) (i.e. old grade plus new grade divided by two).
This new rule is very odd and will lead to grades changing by a quarter of the current amount for the 85% in the grading list who don't play 30+ games per year. You now have the 100-graded player in the above scenario playing 29 games against 120-strength players, scoring 50%, and coming away with a grade of 105.
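These figures are easy to reproduce with a quick sketch (Python, treating a 50% score as all draws under Robert's AGS rule; the helper is illustrative):

```python
def ags_draw_points(own, opp):
    # Under AGS a draw scores the average of the two grades
    # (the grades here are within 50 points, so no clamping is needed).
    return (own + opp) / 2

print(ags_draw_points(100, 120))  # 110.0 -> the 100-graded player moves to 110
print(ags_draw_points(140, 120))  # 130.0 -> the 140-graded player moves to 130

# With Step 4 (fewer than 30 games), the 100-graded player's 110 is
# further blended with the old grade:
print((100 + 110) / 2)  # 105.0
```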
Facts...

The above proposed grading system should...

2. address the problem of grade deflation caused by rapidly improving juniors (non-juniors would not lose so many points by losing to rapidly improving juniors),
You state this as a fact, but I don't see any proof in this thread. Have you done any analysis on whether these rules would produce a sensible grading system? The only analysis you have quoted is on two players playing each other in a match, which simply doesn't happen in this country, and is also the most inappropriate example to test a statistical theory which implicitly relies on large samples of data.

Howard

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Mon Jun 25, 2007 4:00 pm

Dear Howard,
Howard wrote:I am responsible for the ECF central grading program which performs the grading calculations. ...
Thanks for your reply.

If you think that my proposed grading system is no better than, or (as I understand from your reply) even worse than, the present grading system, fair enough. Surely, if that were the case, ECF should stick to the present grading system (GS).

May I ask whether you think that, in order to be accepted, the proposed grading system would need a small modification (such as, say, omitting Step 4), or whether you think that there is no justification for accepting the new system? Thanks.
Howard wrote:As I previously mentioned, I believe this is too generous to the weaker player as the stronger player only scores about 98% in this situation. That being said I believe, on the basis of some (but not much) analysis that the current 40 point rule is too generous to the stronger player. So there could be a change here.
This is only a matter of the definition of grade; the limit can be 50 (CGS, AGS), near 120 (ÉGS), exactly 120 (NGS) or infinity (GS). I see no justification for its being infinity (one of my previous posts details the discussion on this matter).
Howard wrote:The AGS simply halves the grading change that you would receive under the current system. If you have a group of 30 players with an average grade of 120 and a player graded 100 comes along, scores 50% against them then he will have a grade of 110 next season. If a player graded 140 comes along and scores 50% against this same group of players he will have a grade of 130. Surely the available evidence tells you that these two players are the same playing strength?
I think that I've elaborated why AGS is better than CGS or GS. In a nutshell, AGS uses 'k=1/2' and CGS 'k=1' ('k=1/2' is better). Or in other words, if a player does not perform as expected, according to AGS that is because of a change in the chess ability of both the player and his or her opponents, while according to CGS (and GS) it is solely due to a change in the chess ability of the player (which makes no sense). (One can use CGS in the junior case if the change in a non-junior's chess ability can be neglected in comparison to the junior's; this is to address the junior grade deflation problem.)
Howard wrote:This new rule is very odd and will lead to grades changing by a quarter of the current amount for the 85% in the grading list that don't play 30+ games per year. You now have the 100 player in the above scenario playing 29 games against 120 strength players acoring 50% and coming away with a grade of 105.
Okay, you may be right here. I wanted to address the problem of player activity and consequently the statistical significance of players' performance (in a season), but a better approach might be to take at least 30 games into the calculation and, if necessary, to include games from the previous season or seasons.
Howard wrote:You state this as a fact, but I don't see any proof in this thread. Have you done any analysis on whether these rules would produce a sensible grading system? The only analysis you have quoted is on two players playing each other in a match, which simply doesn't happen in this country, and is also the most inappropriate example to test a statistical theory which implicitly relies on large samples of data.
Junior grades (when playing non-juniors) would be calculated using CGS ('k=1' means faster grade change) and non-junior grades (when playing juniors and non-juniors) would be calculated using AGS ('k=1/2' means slower grade change). This would make a grading system which does not conserve the total grade, but it should address (to some extent) the junior problem.

I've done all the mathematical proofs, and AGS would work (and it is better than CGS and GS). I did not prove mathematically the proposed deviations (i.e. the mixture using CGS and AGS), but they should work and, most importantly, address the current problem of grade deflation caused by rapidly improving juniors. Please note that if the system works for two-player matches it should work for one player playing a number of opponents in a season; the two-player match examples were used only to make the points more obvious.
Robert Jurjevic
Vafra

Howard Grist
Posts: 84
Joined: Tue Dec 12, 2006 1:14 pm
Location: Southend-on-Sea

Post by Howard Grist » Tue Jun 26, 2007 12:31 am

Robert,
Robert Jurjevic wrote: May I ask if you think that in order to be accepted the proposed grading system would need a small modification (as for example say omitting Step 4) or you rather think that there is no justification for accepting the new system? Thanks.
I believe there is no justification for accepting your new system. The main requirement of a rating system is to place players in order of ability, but a good rating system can also be used to predict results - e.g. if in a match the two teams have a grading difference of 20 points per board, how many points would you expect the stronger team to score?

I believe that the current ECF system does a good job of placing players in order of ability, but it's not doing quite so well when used as a result predictor - in the example quoted you might expect the stronger team to score 70%, but looking at actual results they score more like 65%, and this is why the whole 'Grading Anomalies' issue has been raised. As grades change more slowly under your suggested system, I think it would do less well on both of these measures.
Robert Jurjevic wrote: I think that I've elaborated why AGS is better than CGS or GS. In a nut shell AGS uses 'k=1/2' and CGS 'k=1' ('k=1/2 is better'). Or in other words, if a player does not perform as expected, according to AGS that is because of the change in chess ability of both the player and his or her opponents, and according to CGS (and GS) it is solely due to the change in chess ability of the player (which makes no sense).
This is where the statistics comes in. If two players play a long match which is drawn all you can reasonably say is that they are of comparable strength. If a player plays a large sample of opponents and scores 50% against them, then you can say, with a reasonable degree of confidence, that the difference between the player's expected score and his actual score is solely down to change in the ability of the player - the large sample effectively removes the change in abilities of the opponents. This is why you shouldn't base a grading system on two-player matches.

Howard

Paul Dargan
Posts: 526
Joined: Sun May 13, 2007 11:23 pm

Post by Paul Dargan » Tue Jun 26, 2007 8:38 am

Howard,

Thanks for getting involved in the thread.

Could you look at my earlier posts and let us know whether any work has been done on expected scores for different grading gaps - and indeed whether results depend solely on the gap between grades or also on the absolute strength of the players involved, i.e. do 120s score the same against 100s as 220s do against 200s?

As I said previously - the place to start is with some analysis of the existing (large) pool of data.

Surely grade deflation is caused by people leaving the system with more points than they joined it with (i.e. people stop playing at ~130 having started at 90, taking the points with them).

Regards,

Paul

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Tue Jun 26, 2007 9:21 am

Howard wrote:I believe there is no justification for accepting your new system.
I have asked Dave Welch (who took an interest in this debate) to have a look at the proposals (I understand he is a mathematician). I have spoken with two of my colleagues from my local chess club (one of them is a mathematician) and both understood the ideas behind the proposed change (it should be clear that AGS is better than CGS and GS, and as good as ÉGS or NGS; it would be nice if we could also address the junior problem and possibly the problem of player activity).
Howard wrote:This is why you shouldn't base a grading system on two-player matches.
Who says that I did?
Howard wrote:I believe that the current ECF system does a good job in placing players in order of ability but it's not doing quite so well when used as a result predictor.
If all players always performed as expected (assuming that their results and grades are statistically significant), that would mean that their relative chess abilities do not change (as simple as that).
I think that no one should assume that players must perform in real life as expected (according to the grading system and its grade definition), and even less so that, if they don't, this is because the grade was 'wrongly' defined.
Robert Jurjevic
Vafra

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Wed Jun 27, 2007 11:57 am

I was wrong...

I was wrong when saying that...

I understand that some people are trying to find 'flaws' in correct grading systems by performing statistical experiments with models of real players and real games, or by analysing how real players perform in real games. In my opinion this is impossible!

Thanks to Professor Mark E Glickman (the inventor of Glicko rating system), who was kind enough to exchange a couple of e-mails with me, I have realized that...

...statistical analysis for rating systems makes sense if one can find two grading systems G1 and G2 such that abs(p - q) = 0 for G1 and abs(p - q) /= 0 for G2, where p is expected and q actual performance.

Or in general, if one calculates abs(p - q) on a large sample, the grading system with the smaller abs(p - q) fits the actual data better and should be regarded as 'better', as that is how the system behaves.

It looks like my mistake was in the assumption that p = f(d) can be chosen arbitrarily and that it is independent of the actual data. But now it looks obvious to me that, according to...

s1 = r1 + k*(q - p);
s2 = r2 + k*((100-q) - (100-p)) = r2 - k*(q - p);

...one can find a grading system where s1 = r1 and s2 = r2 and a grading system where s1 /= r1 and s2 /= r2, and that q defines (if the sample is big enough) how p should be defined, as for a really large sample it should be that s1 = r1 and s2 = r2.
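One property of the pair of update formulas quoted above is that the total grade is conserved for any k, which is easy to verify numerically (Python; the example numbers are arbitrary):

```python
k = 0.5                  # ÉGS-style update factor
r1, r2 = 130.0, 110.0    # old grades of the two players
p = 60.0                 # expected performance of player 1 (0-100 scale)
q = 66.0                 # actual performance of player 1

s1 = r1 + k * (q - p)    # new grade of player 1
s2 = r2 - k * (q - p)    # new grade of player 2

assert s1 + s2 == r1 + r2  # total grade conserved, whatever k is
print(s1, s2)  # 133.0 107.0
```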

Top list...

So, as p = f(d) is not an arbitrary choice, the order of the mentioned grading systems from best to worst is as follows...

1. ÉGS (the best p = f(d); k=1/2)
2. NGS (good approximation of ÉGS's p = f(d); k=1/2)
3. AGS (fair approximation of ÉGS's p = f(d); k=1/2)
4. CGS (fair approximation of ÉGS's p = f(d); k=1)
5. GS (poor approximation of ÉGS's p = f(d); k=1)

Élo grading system...

Maybe ECF could adopt a national Élo rating system... if grading is done once a year, care should be taken over the factor k, I think.

Glicko grading systems...

Maybe ECF could adopt Glicko rating system... details on the link below...

http://math.bu.edu/people/mg/glicko/gli ... licko.html

...or Glicko 2 rating system... details on the link below...

http://math.bu.edu/people/mg/glicko/gli ... ample.html

Glicko 2 appears to be the best rating (grading) system around...
Robert Jurjevic
Vafra

Sean Hewitt

Post by Sean Hewitt » Wed Jun 27, 2007 1:06 pm

Robert,

Sometimes you have to accept what is doable politically, regardless of what is most accurate. Leaving aside for the moment the debate about the most accurate rating system, what you have to understand is that there is reluctance amongst a significant number (and I can't quantify how many there are) to change from the ECF grading system to something else. They would probably put up with the ECF system being fixed.

If the current system were to be discarded, the vast majority would only want to see us change to ELO, as it is what almost everyone else uses. The reluctant might also put up with this change, but it can't be sensible to change to another stand-alone system.

To use an analogy: we have a broken Betamax video recorder (aka the ECF grading system). We can fix it, which may or may not be economical, with no guarantee that it won't break again in the future. Or we can go out and buy a new VHS machine (the ELO system). We should not invent a new format!!!

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Wed Jun 27, 2007 2:22 pm

Dear Sean,
Sean Hewitt wrote:Sometimes you have to accept what is doable politically, regardless of what is most accurate. Leaving aside for the moment the debate about the most accurate rating system what you have to understand that there is a reluctance amongst a significant number (and I can't quantify how many there are) to change from the ECF grading system to something else. They would probably put up with the ECF system being fixed.
Right, I see.
Sean Hewitt wrote:If the current system were to be discarded, the vast majority would only want to see us change to ELO as it is what almost everyone else uses. The reluctant might also put up with this change, but it can't be sensible to change to another stand alone system.
Right, then switching to Élo could be the easiest route (if the current system were to be discarded), although the superiority of the Glicko-2 system makes it tempting. Glicko-2 addresses two things which Élo does not: 1. the statistical significance of one's grade (if one does not play often, his or her grade is not trusted as much as the grade of one who plays a lot); 2. rapidly changing players (it keeps track of and takes into account how rapidly one's grade changes, which would resolve the grade deflation problem caused by rapidly improving juniors, and any other problem which may arise from players whose chess abilities change rapidly).
Robert Jurjevic
Vafra

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Wed Jun 27, 2007 2:25 pm

Glicko 2 system...

Here is what Professor Mark E Glickman has written in an e-mail to me...

The Glicko and Glicko-2 systems are in the public domain, so they are free to use. If someone wants my help/time in implementing them, then I would ask to be paid.

I don't know if you're aware, but the Australian Chess Federation has adopted the Glicko-2 system for rating tournaments in Australia. Last I heard they've been having a very positive experience with it. Glicko-2 is probably better to use than Glicko because it handles rapidly improving players in a much more principled manner.

A benefit of the Glicko system is that the term corresponding to k in the Elo system accounts for the precision resulting from the number of games played. Glicko would work if calculated only once a year, though it is more accurate if calculated more frequently to account for players whose abilities are changing more quickly.


When I asked what about adjusting Glicko and Glicko 2 for grade calculation once a year (rather than after every tournament or other chess event) Professor Mark E Glickman had replied...

The problem isn't an issue of making a numerical correction, but recognizing that players' abilities may be changing over time in a way that the underlying model doesn't recognize. In other words, the Glicko system assumes that a player's ability may be changing, but in a smooth way. If a player rapidly improves over the course of a year, and ratings are only calculated once a year, then earlier poor results and the recent good results will cancel each other out, in effect, which is not what you want.

The Glicko links...

http://math.bu.edu/people/mg/glicko/gli ... licko.html

http://math.bu.edu/people/mg/glicko/gli ... ample.html

If a computer program calculates the grades, why not (ignoring calculation complexity) go for the best, Glicko 2?

P.S. Glicko and Glicko 2 use the same relation between expected performance and grade difference as Élo. The difference between Glicko 2 and Glicko, as I understand it, is that Glicko 2 addresses the problem of rapidly changing players, while Glicko doesn't.
Robert Jurjevic
Vafra

Sean Hewitt

Post by Sean Hewitt » Thu Jun 28, 2007 12:45 pm

I agree that Glicko is a superior system to ELO. But when the rest of the world uses ELO, the only real course of action for us (if we ditch the ECF system) is to use ELO.

Unless the rest of the world changes to Glicko too!

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Thu Jun 28, 2007 3:43 pm

Sean Hewitt wrote:I agree that Glicko is a superior system to ELO. But when the rest of the world uses ELO, the only real course of action for us (if we ditch the ECF system) is to use ELO. Unless the rest of the world changes to Glicko too!
Why would England not be at the cutting edge?

If people do not want to break up with the tradition...

...I could suggest ÉGS2, which would be an extension of ÉGS (the Élo grading system scaled to ECF grades, with 'k=1/2') with a variable 'k', in order to address the problems of player activity (the grades of inactive players are less reliable and should therefore change more rapidly than the grades of active players) and rapidly changing players (the grades of rapidly changing players should change more rapidly than those of other players, so that their grades match their real chess abilities more closely).

...or alternatively ECF could simply adopt AGS (which would be better than the present system, GS, or CGS) and, as the only change would be to the rule for how the grades are calculated, switching to the new system should be straightforward.
Robert Jurjevic
Vafra

User avatar
Greg Breed
Posts: 723
Joined: Thu Apr 05, 2007 8:30 am
Location: Aylesbury, Bucks, UK

Post by Greg Breed » Thu Jun 28, 2007 4:28 pm

I wouldn't want to do the internal club gradings using the Glicko(-2) system. The beauty of the current ECF system is that it is simple enough for someone with only basic maths ability (like me) to work out performances. I've tried Robert's system of using the average between players with +/- 25 and I managed to work that out in Excel too.

Still, I would think that the best solution would be to use the system best suited to the purpose. I don't understand the Glicko system, but if it calculates players' grades most accurately then surely it should be chosen.
Robert Jurjevic wrote:Why would England not be at the cutting edge?
Sean: If everyone drove a Lada but you could have a Porsche for the same amount you'd go with the Porsche... wouldn't you? Wouldn't everyone?
It seems to me that the meaning of the saying "If you can't beat 'em, join 'em" is cropping up here... but perhaps we can "beat 'em", then they would join us... and the Aussies :D
Hatch End A Captain (Hillingdon League)
Controller (Hillingdon League)

Sean Hewitt

Post by Sean Hewitt » Fri Jun 29, 2007 9:56 am

Greg Breed wrote:Sean: If everyone drove a Lada but you could have a Porsche for the same amount you'd go with the Porsche... wouldn't you? Wouldn't everyone?
It seems to me that the meaning of the saying "If you can't beat 'em, join 'em" is cropping up here... but perhaps we can "beat 'em", then they would join us... and the Aussies :D
This has nothing to do with which system is best. I think what I'm trying to say (perhaps badly) is that politically there is no chance, in my opinion, of getting the majority to be in favour of a change to a system that no-one else uses and which the majority will not understand.

Any change may be tricky, but a change to ELO is sellable, in my opinion. Either that, or fix the current ECF system. And for that, the ECF would have to admit that it is broken!

User avatar
Robert Jurjevic
Posts: 207
Joined: Wed May 16, 2007 1:31 pm
Location: Surrey

Post by Robert Jurjevic » Fri Jun 29, 2007 11:23 am

Sean Hewitt wrote:Any change may be tricky, but change to ELO is sellable in my opinion. Either that, or fix the current ECF system.
Yes, it looks like it would be easier to convince people to switch to Élo rather than to Glicko-2 or even ÉGS (which is the Élo system scaled to ECF grades, with 'k=1/2').

In order to keep the present ECF grade scale, I...

suggested scaling grades so that a difference of 25 (not 200) grading points in chess would mean that the stronger player has an expected score of approximately 75%.

...while originally Élo...

suggested scaling grades so that a difference of 200 (not 25) grading points in chess would mean that the stronger player has an expected score of approximately 75%.

...the 25-point scaling gives you ÉGS, and the 200-point scaling (i.e. 'g=400' in the formulas) gives you an Élo English (National) Rating, which we could call ÉER.

The original Élo (i.e. ÉER) formulas are...

Code: Select all

d = r1 - r2;
g = 400; 
p = 1/(1 + 10^(-d/g)); 
s1 = r1 + k*(q - p); 
s2 = r2 + k*((1-q) - (1-p)) = r2 - k*(q - p); 

...where 'r1' is rating of player 1, 'r2' rating of player 2, 'd' the rating difference, 'g' a constant equal to '400', 'q' is actual and 'p' expected performance of player 1 (in the range between 0 and 1), 's1' a new rating of player 1, 's2' a new rating of player 2...

...while ÉGS formulas are...

Code: Select all

d = r1 - r2;
g = (25*Log[10])/Log[3]; 
p = 100/(1 + 10^(-d/g)); 
s1 = r1 + k*(q - p); 
s2 = r2 + k*((100-q) - (100-p)) = r2 - k*(q - p); 

...where 'r1' is grade of player 1, 'r2' grade of player 2, 'd' the grade difference, 'g' a constant equal to '(25*Log[10])/Log[3]', 'q' is actual and 'p' expected performance of player 1 (in the range between 0 and 100), 's1' a new grade of player 1, 's2' a new grade of player 2...
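Both expected-score curves are straightforward to evaluate (a Python translation of the two formulas above; the function names are mine):

```python
import math

def eer_expected(d, g=400):
    """ÉER expected score (0-1) for grade difference d, as in standard Élo."""
    return 1 / (1 + 10 ** (-d / g))

def egs_expected(d):
    """ÉGS expected score (0-100); g is chosen so that a 25-point
    difference gives the stronger player exactly 75%."""
    g = 25 * math.log(10) / math.log(3)
    return 100 / (1 + 10 ** (-d / g))

print(round(eer_expected(200), 3))  # 0.76 for a 200-point Élo gap
print(round(egs_expected(25), 6))   # 75.0 for a 25-point ÉGS gap
```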

FIDE uses...

'k=16'

...for masters and...

'k=32'

...for weaker players, and this has been designed for grading 'm' times per season (after each tournament).

If ECF wants to use ÉER and grade only once a year, FIDE's 'k' factors would have to be multiplied by 'm'; assuming that 'm=3' (i.e. the 30 games which ECF players play on average in one season roughly equal three 9-game tournaments), ÉER's 'k' factors would be...

'k=48'

...for masters and...

'k=96'

...for weaker players.

Of course, grading once a year would not be as good as, say, 5 times a year, as the grades would obviously not reflect the true chess abilities of the players as closely, especially for those players whose chess ability changes rapidly.

We can see that ÉGS's 'k=1/2' corresponds to FIDE's 'k=32' for weaker players...

Code: Select all

ClearAll[kf, qf, q, p, k, m]
Solve[{kf*(qf - pf)*200/25 == k*m*(q - p), qf - pf == (q - p)/100}, {m}]
{{m -> (2*kf)/(25*k)}}
...which means that, according to ÉGS, 'm' is taken to be approximately equal to 5, i.e. it is assumed that an average ECF player plays approximately 45 (5*9) games per season (FIDE would rate approximately 5 times per year)... but as 30 games per season for an average ECF player looks more realistic, 'm=3' and the factors...

'k=48'

...for masters and...

'k=96'

...for weaker players...

...look more plausible.
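Plugging numbers into the Solve result quoted above gives the figure claimed (a quick numerical check, not a re-derivation):

```python
kf = 32    # FIDE k-factor for weaker players
k = 0.5    # ÉGS k-factor
m = (2 * kf) / (25 * k)   # the relation returned by Solve above
print(m)  # 5.12, i.e. roughly 5 grading events per season
```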

Of course, even with ÉER (especially if grading is done once a year) ECF would have a problem with grade deflation caused by rapidly improving juniors, and a problem of how much one's grade should be trusted (the grades of all players should not be trusted equally).
Robert Jurjevic
Vafra