Stewart Reuben wrote:My reason for thinking only one rating list a year leads to deflation is because it has led to deflation.
That is clearly one of your less convincing explanations!
Stewart Reuben wrote:Of course the bonus should be individualised to each person, not just juniors. A player has gone up 20 points in the year, give him a bigger bonus than one who has only gone up 10.
Some sort of predictor/corrector approach seems OK to me as long as it is well defined, uses the last 3 or 4 base grades, is kept under review, and applies less fiercely in the case of an underactive player. We don't want any sort of grader discretion; arbiter discretion causes enough problems as it is.
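A predictor/corrector bonus of that kind could be sketched as follows. To be clear, the function, its constants (cap, activity threshold, underactive scaling) and the rounding are all illustrative assumptions of mine, not an ECF rule:

```python
def trend_bonus(base_grades, max_bonus=30, underactive_factor=0.5,
                min_games=18, games_played=30):
    """Illustrative predictor/corrector bonus from the last 3 or 4
    published base grades. The bonus equals the average year-on-year
    rise, never negative, capped at max_bonus, and scaled down for an
    underactive player. All constants are assumptions for this sketch."""
    if len(base_grades) < 2:
        return 0
    recent = base_grades[-4:]                       # last 3 or 4 base grades
    rises = [b - a for a, b in zip(recent, recent[1:])]
    avg_rise = sum(rises) / len(rises)
    bonus = max(0.0, min(avg_rise, max_bonus))      # no negative bonus; capped
    if games_played < min_games:                    # apply less fiercely if underactive
        bonus *= underactive_factor
    return round(bonus)
```

So a player whose base grades ran 100, 110, 130 would get a bigger bonus (15) than one whose grades ran 100, 110 (10), which is the "bigger rise, bigger bonus" idea made explicit, with no grader discretion involved.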
Stewart Reuben wrote:But if you had, say, quarterly grading lists and continued to roll over the last 30 games, then the problem would be far less acute. In addition it would help popularise chess and lead to more activity, as happened internationally, in the US, and in London when they had monthly lists.
Players may like to see more frequent grading lists for other reasons, but they won't necessarily fix or ease deflation.
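The two calculation styles being contrasted can be sketched minimally; both are averages of the same per-game performance figures, differing only in which games they include (this is a simplified sketch, ignoring the ECF's handling of players with few games):

```python
def period_average(performances):
    """Annual-style grade: the average of all game performances
    recorded in the grading period."""
    return sum(performances) / len(performances)

def rolling_average(performances, window=30):
    """Rolling-style grade: the average of the most recent `window`
    game performances, recomputable at each publication (e.g. quarterly)."""
    recent = performances[-window:]
    return sum(recent) / len(recent)
```

The point of the surrounding argument is that publishing `rolling_average` four times a year does not change the underlying arithmetic that drives deflation; it only changes when the numbers are crystallised into the published list.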
In addition to looking at the first layer of games, i.e. the junior's immediate opponents, you need to consider the medium- and longer-term effect on those opponents, and on the opponents of those players in turn. Your previous comments suggest that you think a player would reach his true level quicker and so cause less deflation. In reality, what happens during quarter 1 is that any increase/decrease is crystallised not only into the junior's grade but also into his opponents' grades. If the first-quarter opponents were previously accurately graded, they will become undergraded for quarter 2 and cause similar deflation to what the junior would have caused under annual grading. Many more undergraded players are then carried into the second quarter, causing further deflation.
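The mechanism by which an undergraded junior drags down an accurately graded opponent can be made concrete with the ECF-style per-game formula (opponent's published grade plus 50 for a win, minus 50 for a loss, unchanged for a draw, with the grade difference capped at 40 points). The function below is my sketch of that rule, not official ECF code:

```python
def game_grade(opponent_grade, result, own_grade):
    """ECF-style per-game performance: the opponent's published grade
    +50 for a win, -50 for a loss, unchanged for a draw. The opponent's
    grade is first capped to within 40 points of the player's own
    published grade, as in the ECF's 40-point rule."""
    capped = max(own_grade - 40, min(own_grade + 40, opponent_grade))
    return capped + {'win': 50, 'draw': 0, 'loss': -50}[result]
```

A 150-graded opponent who loses to a junior published at 100 records `game_grade(100, 'loss', 150) == 60`, far below the opponent's true strength even if the junior is really playing at 150. That single figure, averaged into the opponent's next grade, is exactly the undergrading that then propagates to the next layer of opponents in the following quarter.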
Whether the long-term effect is deflationary, inflationary or neutral depends on the pattern of play and how active the players are. It can go either way. The last thing that is needed is a major change like junior increments, which simply does not do what it set out to do.
Stewart Reuben wrote:What I wrote about Sonas was ambiguous. I meant that Sonas said a linear relationship, as opposed to using a Gaussian distribution, was statistically more valid. He did not comment on whether the averaging system over the past year used in England is better or worse than the rolling average (in a certain sense) used by FIDE.
I realised you were referring to the metric function which calculates expected percentage scores. If I remember correctly, what I read in Sonas's report was that he used approximately 260,000 historic games and recalculated ratings under various assumptions for the metric expectation function and the k-factors. In his view, a linear metric combined with the FIDE method of k-factors gave the closest fit to results. It would be wrong to use this statement as the basis for saying a linear metric is the best to use with the ECF averaging method, as both you and others on this forum have implied.
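For readers unfamiliar with the terms, the metric (expectation) functions being compared, and the k-factor update they feed, can be sketched as follows. The logistic and normal (Gaussian) curves are the standard Elo forms; the 1/850 slope on the linear metric is an illustrative figure of mine, not a value taken from Sonas's report:

```python
import math

def logistic_expectation(d):
    """Standard Elo logistic expected score for a rating difference d."""
    return 1 / (1 + 10 ** (-d / 400))

def gaussian_expectation(d, spread=200 * math.sqrt(2)):
    """Expectation under Elo's original normal model: each player's
    performance has standard deviation 200, so the difference of two
    performances has spread 200*sqrt(2). FIDE's tables approximate
    a curve of this shape."""
    return 0.5 * (1 + math.erf(d / (spread * math.sqrt(2))))

def linear_expectation(d, slope=1 / 850):
    """Linear metric of the kind Sonas tested, clamped to [0, 1].
    The 1/850 slope is an illustrative assumption."""
    return min(1.0, max(0.0, 0.5 + slope * d))

def elo_update(rating, expected, score, k=20):
    """FIDE-style k-factor update: move the rating by k times the
    difference between the actual score and the expected score."""
    return rating + k * (score - expected)
```

Sonas's claim, as reported above, is that recalculating historic ratings with `linear_expectation` inside `elo_update` predicted results better than the Gaussian form; nothing in that comparison involves the ECF's averaging method, which uses no expectation function at all.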
The applicability of Sonas's conclusions to ECF games can be questioned on statistical grounds, because his data relates primarily to high-level tournaments and has a higher proportion of all-play-alls than the corresponding ECF games would. Assumptions of randomness are also questionable: FIDE pairing methods in large Swisses affect randomness, whereas many ECF games originate from league matches. Also questionable is the assumption that predicting results more closely shows the system is working better, and hence that the ranking aspects of grading lists will be more accurate.
Maybe I should have put this in the grading part of the forum.