What are the new conversion formulae?

General discussions about ratings.
Sean Hewitt

Re: What are the new conversion formulae?

Post by Sean Hewitt » Fri Sep 11, 2009 7:58 pm

Roger de Coverly wrote: I'm unconvinced there was much. They drew a graph, noted that it didn't produce the results they expected and that if they took 80% of the original values, the fit would be better. Realising that everyone would laugh if they made Mark and Keith 190 players, they added back about 40 to 50 points.
That's rather disingenuous, as you well know. There was a lot of modelling done by me, and then by others. The graphs came after the modelling, to represent what had been found.

Given the amount of time I spent on it personally, I wish I had taken your approach and just drawn the graph! :lol:

Roger de Coverly
Posts: 21315
Joined: Tue Apr 15, 2008 2:51 pm

Re: What are the new conversion formulae?

Post by Roger de Coverly » Fri Sep 11, 2009 8:08 pm

Sean Hewitt wrote:There was alot of modelling done by me, and then by others
If I understood it correctly, your modelling was to run the new player recursion routine against a season's data and compare the results to the actual grades. This confirmed the non-linear effect seen in Howard Grist's s-shaped graphs. Was there anything more, particularly on the dynamics over time? The Scots, after all, found much the same non-linear effect and concluded their system didn't need changing.

Sean Hewitt

Re: What are the new conversion formulae?

Post by Sean Hewitt » Fri Sep 11, 2009 8:53 pm

Roger de Coverly wrote: If I understood it correctly, your modelling was to run the new player recursion routine against a season's data and compare the results to the actual grades.
Your understanding is not correct. I don't have access to the new player recursion, so wasn't able to use that.

I took all results for a season and worked out a ranking list for all players, irrespective of ECF grade, based purely on those results and the ECF grading formula. I also established how many points below the top player's performance every other player's performance was.

One point I made was that it was irrelevant (the exception being negative grades) whether the top players were graded 250, 550 or 2550. It was the differential in grades that was important.

We now have differentials in adult grades which are broadly in line with results - and that's something which must be sensible and right.

Brian Valentine
Posts: 577
Joined: Fri Apr 03, 2009 1:30 pm

Re: What are the new conversion formulae?

Post by Brian Valentine » Fri Sep 11, 2009 9:02 pm

Sean,
This is interesting. Are you suggesting the work you did is indeed the mysterious "mathematical modelling" referred to in my quote?

The modelling, in my reading, is what explains the results you demonstrated. I think Roger and I are concerned that this stretch may have always been there (more or less). The "mathematical modelling" quote implies there is a theory behind the effect increasing. I have trouble replicating this effect.

Roger de Coverly
Posts: 21315
Joined: Tue Apr 15, 2008 2:51 pm

Re: What are the new conversion formulae?

Post by Roger de Coverly » Fri Sep 11, 2009 9:34 pm

Sean Hewitt wrote:I took all results for a season and worked out a ranking list for all players, irrespective of ECF grade, based purely on those results and the ECF grading formula. I also established how many points below the top player's performance every other player's performance was.
That's surely an equivalent process to that used for new player estimation. As Brian suggests, there are a number of possible methods of processing a season of results which should converge to the same end list. At least one concern with using just one year's data is that it leaves out the lag/stabilisation effect of the inclusion of past years' results in the published list. Objectively, it's also a list that you should compare not just to the begin-season grades but also to the end-season grades.

So you have (begin season list) which has (actual results) applied to it to produce (end season list). The ranking approach is taking function(actual results). Obviously you can compare the output of this function to both (begin season list) and (end season list) but you wouldn't expect them to be identical.

The difference from the published list would also contain lag and player-inconsistency effects. As far as I recall you got a tighter range between top, middle and bottom than the published list. That's also indicative of non-linearity - 140 scores 60% against 130, who scores 60% against 120, who scores 60% against 110, but 140 doesn't score 80% against 110. An effect seen in both the Grist data and the Scottish data.
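The chained-percentage point can be made concrete with a toy comparison. Under a purely linear model, each grade point is worth one percentage point of expected score, so three 60% steps chain straight to 80%. An Elo-style logistic curve fitted to give the same 60% at a 10-point gap does not chain that way. (The spread constant below is fitted purely for illustration; it is not an ECF or FIDE figure.)

```python
import math

# Linear model: expected score (%) = 50 + grade difference,
# clamped to [0, 100].  Three 10-point gaps of 60% each imply
# a flat 80% for the full 30-point gap.
def linear_expected(diff):
    return min(max(50 + diff, 0), 100)

# Logistic (Elo-style) model; the spread is chosen so that a
# 10-point gap also gives exactly 60% - an illustrative constant only.
SCALE = 10 / math.log10(1.5)  # about 56.8

def logistic_expected(diff):
    return 100 / (1 + 10 ** (-diff / SCALE))

# 140 vs 110 (diff = 30): the linear model says 80%, the logistic
# model only about 77% - the shortfall grows with the gap, which is
# the non-linearity seen in the Grist and Scottish data.
```

The point is that both models agree at small grade differences and diverge at large ones, so any test based only on near-neighbour results would miss the effect.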

Sean Hewitt

Re: What are the new conversion formulae?

Post by Sean Hewitt » Sat Sep 12, 2009 8:47 am

Brian Valentine wrote:Sean,
This is interesting. Are you suggesting the work you did is indeed the mysterious "mathematical modelling" referred to in my quote?

The modelling, in my reading, is what explains the results you demonstrated. I think Roger and I are concerned that this stretch may have always been there (more or less). The "mathematical modelling" quote implies there is a theory behind the effect increasing. I have trouble replicating this effect.
Hi Brian,

No, my work is not the mathematical modelling referred to previously. That was done by the grading team after I had identified the problem. I was not involved (primarily because the grading team did not want to believe my findings).

Essentially what happened was that I was approached in 2006 to see if I could do some work on the grading system, as some believed they had observed that the grading system had "gone wrong" and was no longer working as it should. I asked for and received all of the individual result data from the previous season.

I then extracted games where the following applied:-

1. The player had a published SP grade in 2005
2. Only SP games played between 1/6/05 and 31/5/06 against players in the sample group would count
3. The player had a minimum of 5 games in the sample
4. The player's games yielded a score between 15% and 85%
5. Steps 3 and 4 were repeated until all criteria were met

The sample produced 6302 players who played 61375 games between them.
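The repeated-pruning step (removing one player can push others below the game-count or score-band thresholds, so the criteria have to be re-applied until nothing changes) can be sketched as follows. The `(player_a, player_b, score_a)` tuple representation is my own assumption, not Sean's actual data format.

```python
def filter_sample(games, min_games=5, lo=0.15, hi=0.85):
    """Repeatedly drop players failing the game-count or score-band
    criteria until the sample is stable.  games: (a, b, score_a) tuples
    with score_a in {1, 0.5, 0} from player a's point of view."""
    games = list(games)
    while True:
        stats = {}  # player -> [games played, points scored]
        for a, b, score_a in games:
            pa = stats.setdefault(a, [0, 0.0]); pa[0] += 1; pa[1] += score_a
            pb = stats.setdefault(b, [0, 0.0]); pb[0] += 1; pb[1] += 1 - score_a
        keep = {p for p, (n, pts) in stats.items()
                if n >= min_games and lo <= pts / n <= hi}
        pruned = [g for g in games if g[0] in keep and g[1] in keep]
        if len(pruned) == len(games):   # stable - every criterion holds
            return games
        games = pruned
```

Each pass only ever shrinks the sample, so the loop is guaranteed to terminate.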

I then used an iterative process to work out grades for these players based only on games within the sample group. The calculation process followed the ECF system with two exceptions. Firstly, juniors did not receive a junior supplement. Secondly, the 40 point rule was not applied (as it is not appropriate for this kind of iteration).
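That iteration can be sketched like this, using the core ECF rule that each game contributes the opponent's grade plus 50 for a win, minus 50 for a loss (unchanged for a draw), with a player's grade being the mean of those per-game performances. As in the post, the 40 point rule and junior supplements are omitted; the data layout and starting value are my own assumptions.

```python
def iterate_grades(games, n_iter=200):
    """Recompute grades from one season's results alone, ECF-style.

    games: (player_a, player_b, score_a) tuples, score_a in {1, 0.5, 0}.
    The starting value only fixes the overall level of the list; the
    grade *differences* converge to the same values regardless.
    """
    players = {p for a, b, _ in games for p in (a, b)}
    grades = {p: 100.0 for p in players}  # arbitrary starting level
    for _ in range(n_iter):
        perfs = {p: [] for p in players}
        for a, b, score_a in games:
            # +50 for a win, -50 for a loss, +0 for a draw
            perfs[a].append(grades[b] + (score_a - 0.5) * 100)
            perfs[b].append(grades[a] + (0.5 - score_a) * 100)
        grades = {p: sum(v) / len(v) for p, v in perfs.items()}
    return grades
```

On a tiny example (A beats B, B beats C, A draws C) this converges to grades about 116.7, 100 and 83.3, preserving the starting mean.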

The results were stark. There was stretching, or deflation, which could be measured by the approximation Perf = Grade * 0.7752 + 48.7816. Secondly, when looking at the number of games played, it didn't matter whether a player played 5, 10, 15, 20, 25 or 30 games - the stretch was the same in all cases.

I then noticed (purely by accident, really) that if you applied the above formula to all grades uniformly and then looked at FIDE ratings, the old formula of ECF * 8 + 600 = FIDE worked well for players graded above 130, although it probably needed to be + 700, which at the time I took to be inflation in the FIDE rating system (also bear in mind that my grading solution produced new grades that were lower than the ECF's new grades).
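The two quoted formulas are easy to restate directly (the coefficients are Sean's, reproduced verbatim; the crossover observation is simple arithmetic on his fit):

```python
def performance_from_grade(grade):
    """Sean's fitted line: observed performance from published grade.
    The fit crosses perf == grade at exactly 217, so below that point
    measured performances sit above the published grades (the range of
    performances is tighter than the range of grades)."""
    return grade * 0.7752 + 48.7816

def fide_from_ecf(ecf, offset=600):
    """Old rule-of-thumb ECF -> FIDE conversion; Sean suggests the
    offset probably needed to be 700 by this time."""
    return ecf * 8 + offset
```

For example, a 130 player converts to 1640 under the old offset, or 1740 with the suggested +700.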

Image

I identified two causes of this stretch. Firstly, ungraded players, whose performance in a small number of games is not considered statistically reliable enough to warrant publishing a grade, yet whose unreliable grading performance is used to calculate the grades of others! I therefore suggested that games against ungraded opponents should not be graded. To which one member of the grading team replied "We can't do that. How would we charge game fee?!"

The second cause was juniors. Of 927 juniors with a standard play grade in the 2006 list, 265 increased their grade (compared to 2005) by more than 10 points and 100 increased their grade by 20 points or more. These juniors tended to be more active than average, multiplying the effect of their under-gradedness. I therefore looked at the junior supplement for each age and found it to be totally inadequate. Indeed, I could find no correlation between age and rate of improvement at all (as others far better versed in junior chess have subsequently observed). I hypothesised at the time that "This could be due to rising numbers of junior only tournaments where the output is not verified by the participation of established, statically graded players. It seems to me that junior players are likely to be having a serious deflationary effect upon the list as a whole and the best solution appears to be to treat them as new players each year, as we are doing now with players with negative grades." I must admit that I had forgotten that I said this!!

I did understand at the time that a small minority of players like to calculate their grades on an ongoing basis. I therefore suggested, to minimise the disruption to them, that the above should apply only to improving juniors, and suggested (arbitrarily) that only juniors increasing in strength by 10 points have this treatment applied to them.

As I say, the grading team simply did not want to believe that there was any basis in what I said, and published a statement on the ECF website to that effect. I understand that they went off modelling themselves for 18 months to disprove what I had said. I obviously don't know what they did, as I wasn't involved at all, but I believe they used 5 years' data (before then, individual results were not reported) to try to disprove what I had found, only to find instead that their results mirrored my own.

Hope that history helps.
Roger de Coverly wrote:
Sean Hewitt wrote:I took all results for a season and worked out a ranking list for all players, irrespective of ECF grade, based purely on those results and the ECF grading formula. I also established how many points below the top player's performance every other player's performance was.
That's surely an equivalent process to that used for new player estimation. As Brian suggests, there are a number of possible methods of processing a season of results which should converge to the same end list. At least one concern with using just one year's data is that it leaves out the lag/stabilisation effect of the inclusion of past years' results in the published list. Objectively, it's also a list that you should compare not just to the begin-season grades but also to the end-season grades.
It's equivalent in that they should produce the same result if the same methodology is used. But I excluded high- and low-scoring players from my iteration (because it was obvious that they would infect the results) whereas it seems the ECF did not. And I still don't know precisely how they calculate initial estimates for ungraded players!
Roger de Coverly wrote:So you have (begin season list) which has (actual results) applied to it to produce (end season list). The ranking approach is taking function(actual results). Obviously you can compare the output of this function to both (begin season list) and (end season list) but you wouldn't expect them to be identical.
I agree that you wouldn't expect the lists to look the same. But more importantly, you shouldn't expect, in your wildest dreams, that the vast majority in the list would decline in grade - and that was what was actually observed.
Roger de Coverly wrote: The difference to the published list would also contain lag and player inconsistency effects. As far as I recall you got a tighter range between top, middle and bottom than the published list. That's also indicative of non-linearity - 140 scores 60 % against 130 who scores 60% against 120 who scores 60% against 110, but 140 doesn't score 80% against 110. An effect seen in both the Grist data and the Scottish data.
Not something I looked at Roger. I would have done if I had been asked to try to come up with a fix though!

E Michael White
Posts: 1420
Joined: Fri Jun 01, 2007 6:31 pm

Re: What are the new conversion formulae?

Post by E Michael White » Sat Sep 12, 2009 9:09 am

Sean Hewitt wrote: .... you shouldn't expect in your wildest dreams that the vast majority in the list would decline in grade .....
ECF grading has never featured in my wildest dreams!

There are very interesting points in your survey Sean.

Roger de Coverly
Posts: 21315
Joined: Tue Apr 15, 2008 2:51 pm

Re: What are the new conversion formulae?

Post by Roger de Coverly » Sat Sep 12, 2009 10:20 am

Sean Hewitt wrote:I then extracted games where the following applied:-

1. The player had a published SP grade in 2005
2. Only SP games played between 1/6/05 and 31/5/06 against players in the sample group would count
3. The player had a minimum of 5 games in the sample
4. The player's games yielded a score between 15% and 85%
5. Steps 3 and 4 were repeated until all criteria were met

The sample produced 6302 players who played 61375 games between them.
It's been my experience, and probably Brian's as well, that when you build a model of a complex system containing unknown interactions, what you leave out can have as much impact on the results as what you put in - not least when you get an unexpected result. 6302/61375 should be compared to the grading list totals of usually around 11500/100000. So it's based on only about 60% of the available data, and for one year only at that. Also missing are the impacts of the 40 point rule, junior increments and prior-year results, all of which could have, and probably have, modified the published grades.
Sean Hewitt wrote:And I still dont know precisely how they calculate initial estimates for ungraded players!
We know the general outline: they first calculate an estimate based on games against graded players and then loop until convergence. Their problem seemed to be how to ensure sensible convergence for players with very high or very low percentage scores, when the 40 point rule can imply a range of possible solutions.
Sean Hewitt wrote: It seems to me that junior players are likely to be having a serious deflationary effect upon the list as a whole
The problem with this assertion is that it's difficult to see strong evidence of it in recent published grades. Back in the sixties, it was spotted that the number of players above, say, 200 was decreasing year by year without it being apparent that they were getting weaker. So a blanket junior adjustment of 5, later 10, was introduced to counter this. In those days a very high percentage of the active and top players were under 25, of course. Fast forward to the late eighties and the ability to perform database analysis on the published grades: the investigations suggested that the junior increment was now too high and was having a (modest) inflationary effect, as observed from the mean and median in consecutive lists. This gave rise to the age-related scale. The then BCF board even discussed, and rejected, taking a few points off everyone's grade.

Sean Hewitt

Re: What are the new conversion formulae?

Post by Sean Hewitt » Sat Sep 12, 2009 12:33 pm

Roger de Coverly wrote:
It seems to me that junior players are likely to be having a serious deflationary effect upon the list as a whole
The problem with this assertion was that it's difficult to see strong evidence of this in recent published grades.
Roger, I think this is the evidence that the small age-related adjustments of 4-10 points were flawed, thus creating hundreds of very active, under-rated juniors spreading pain and misery throughout the grading system :-
Of 927 juniors with a standard play grade in the 2006 list, 265 increased their grade (compared to 2005) by more than 10 points and 100 increased their grade by 20 points or more
Or, comparing 2006 grades to my 2006 "Sean" grades, we see that adult grades increase in a predictable way but junior grades (with the ECF junior increment included) are all over the place, some juniors being vastly over-graded, some vastly under-graded. This is true irrespective of age or grade. The more active players are under-graded whilst the less active players are over-graded by the then ECF system.

Image

Roger de Coverly
Posts: 21315
Joined: Tue Apr 15, 2008 2:51 pm

Re: What are the new conversion formulae?

Post by Roger de Coverly » Sat Sep 12, 2009 12:56 pm

Sean Hewitt wrote:Roger, I think this is the evidence that the small age-related adjustments of 4-10 points were flawed, thus creating hundreds of very active, under-rated juniors spreading pain and misery throughout the grading system :-
No one seems to have measured how much or how little effect this is causing. A simple way of measuring would be to remove the junior increment entirely. Recalculate to end season 1 using actual grades and results. Players who did not play any juniors would be unaffected. Recalculate to end season 2 using actual results and the modified end season 1 grade. Players who didn't play juniors would be affected only if their opponents had faced juniors. Repeat for season 3. You then have a straight comparison between "live" grades and "no increment" grades which gives you a measure of the effect of junior increments. You might find that the existing increment is worth a point a year at the 120 mark but no impact at the 220 mark. You might find limited effects on adults in general but disproportionate impacts on juniors.
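Roger's season-by-season experiment can be sketched with a toy grading pass. The single-pass recalculation below is a deliberately simplified stand-in for the full ECF calculation (no 40 point rule, no prior-year averaging), and all names and data shapes are illustrative assumptions; the point is only to show how an increment applied to juniors propagates to adults in later seasons.

```python
def season_recalc(grades, results, juniors, increment=0):
    """One toy season: a player's new grade is the mean of
    (opponent's current grade +/- 50) over their games, plus the
    junior increment where it applies."""
    perfs = {}
    for a, b, score_a in results:
        perfs.setdefault(a, []).append(grades[b] + (score_a - 0.5) * 100)
        perfs.setdefault(b, []).append(grades[a] + (0.5 - score_a) * 100)
    new = dict(grades)
    for p, v in perfs.items():
        new[p] = sum(v) / len(v) + (increment if p in juniors else 0)
    return new

def increment_effect(initial, seasons, juniors, increment=10):
    """Rerun the same seasons with and without the junior increment
    and report the per-player difference - Roger's proposed measure."""
    with_inc, without = dict(initial), dict(initial)
    for results in seasons:
        with_inc = season_recalc(with_inc, results, juniors, increment)
        without = season_recalc(without, results, juniors, 0)
    return {p: with_inc[p] - without[p] for p in initial}
```

With one junior J drawing against one adult A in two successive seasons, A picks up the full 10 points in season two despite never receiving the increment directly, which is exactly the knock-on effect Roger describes.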

Sean Hewitt wrote:
Or compare 2006 grades to my 2006 "Sean" grades, we see that adult grades increase in a predictable way but junior grades (with the ECF junior increment included) are all over the place.
Expressed another way, this says that whilst the performances of adults are predictable, the performances of juniors are not. Which pushes the "estimate of an unknown random variable" underpinning of rating systems to its limit.

E Michael White
Posts: 1420
Joined: Fri Jun 01, 2007 6:31 pm

Re: What are the new conversion formulae?

Post by E Michael White » Sat Sep 12, 2009 2:08 pm

Roger de Coverly wrote:We know the general outline that they first calculate an estimate based on games against graded players and then loop until convergence
Roger, the initial estimate does not have any effect, as I showed previously. The grading team might just as well save time and money, estimate 0 or 200, and the iterative approach will give the same results. If you start with 0 the grades inflate with successive (groups of) iterations, or if you start with 200 they deflate, until they reach the convergence values. These values are identical to those from solving the linear equations by other methods, for example matrix inversion.
I pointed this out to Howard Grist and the grading team about 2 years ago, but they were not interested. The amusing point is that inverting matrices for the numbers involved, which is now practical with a decent matrix processor, probably involves the computer doing a behind-the-scenes iterative routine to find the inverse matrix. So the two approaches are even more similar than might be thought, but the ECF version is more costly, at least in terms of development time.
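The linear-equations view can be sketched directly. The grading equations (each grade equals the mean of opponent's grade ± 50) form a singular system - shifting every grade by a constant gives another solution - so one grade must be pinned to fix the level, which is the role the iteration's starting values play. This is a sketch under those assumptions, using a plain Gaussian elimination rather than a matrix library.

```python
def solve_grades(games, anchor, anchor_grade=100.0):
    """Solve the grading equations directly instead of iterating.
    games: (a, b, score_a) tuples; 'anchor' is the player whose grade
    is pinned to anchor_grade to make the singular system solvable."""
    players = sorted({p for a, b, _ in games for p in (a, b)})
    idx = {p: i for i, p in enumerate(players)}
    n = len(players)
    M = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for a, b, s in games:
        i, j = idx[a], idx[b]
        # (diag(games played) - adjacency) g = score adjustments
        M[i][i] += 1; M[j][j] += 1
        M[i][j] -= 1; M[j][i] -= 1
        rhs[i] += (s - 0.5) * 100
        rhs[j] += (0.5 - s) * 100
    k = idx[anchor]                     # pin the anchor's grade
    M[k] = [0.0] * n; M[k][k] = 1.0; rhs[k] = anchor_grade
    return dict(zip(players, _gauss(M, rhs)))

def _gauss(M, b):
    """Plain Gaussian elimination with partial pivoting (stdlib only)."""
    n = len(b)
    M = [row[:] for row in M]; b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

On the same toy data, the direct solve and the iteration agree on every grade difference; only the overall level depends on the anchor choice, mirroring Michael's point about the starting estimate.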
Roger de Coverly wrote:Back in the sixties, it was spotted that the number of players above, say 200, was decreasing year by year without it being apparent that they were getting weaker. So a blanket junior adjustment of 5 later 10 was introduced to counter this. In those days a very high percent of the active and top players were under 25 of course. Fast forward to the late eighties and the ability to perform database analysis on the published grades, the investigations suggested that the junior increment was now too high and was having a (modest) inflationary effect as observed from the mean and median in consecutive lists. This gave rise to the age-related scale. The then BCF board even discussed and rejected taking a few points off everyone's grade.
You need to consider population changes as a result of WW2. There was a drop in births during the years 1939-1945, so those who might have been emerging as good 25-year-olds in the mid-1960s were few and far between. In 1960-65 there was a higher proportion aged over 60, and at that time fewer than 1,000 players were graded over 140.

I have always been sceptical about the suggestion that top players' grades were inflated in the 1980s, and put it down to the BCF officials at the time being unable to accept that the new generation of young players were more professional and simply better. In 1989, for example, if I remember correctly, Short was no. 2 and Speelman no. 5 in the FIDE world rankings, and UK GMs were appearing like rabbits conjured from hats. I expect age-related increments came about through too many insurance-related people being involved in the grading process. The first thing an insurance person does when faced with difficulties is to introduce age-related loadings to up car, house or life insurance. A better approach would have been to attempt activity-related increases during the first few years a player plays, until they reach 160.
Last edited by E Michael White on Sat Sep 12, 2009 7:01 pm, edited 1 time in total.

Roger de Coverly
Posts: 21315
Joined: Tue Apr 15, 2008 2:51 pm

Re: What are the new conversion formulae?

Post by Roger de Coverly » Sat Sep 12, 2009 3:04 pm

E Michael White wrote: I have always been sceptical about the suggestion that top players grades were inflated in the 1980s
I believe they plotted the distribution of grades and got a normal-shaped curve. They noticed that the mean and median seemed to be shifting upwards and changed the +10 age-related scale to an age-dependent one. So it wasn't the top-rated players that were the issue. Even back then, the international rating was the more important one for the titled players.

Nigel Short was 255 in 1988, Jon Speelman 256, John Nunn 252, Murray Chandler 252, Mark Hebden 235, Mickey Adams 240 (age 16!), Keith Arkell 230, Julian Hodgson 235, Matthew Sadler 221 (age 14!), Glenn Flear 236. Junior increment was added after publication in those days.
E Michael White wrote:A better approach would have been to attempt activity related increases during the first few years a player plays until they reach 160.
There are lots of changes that could be made to the process of new player estimation and to the allowance for expected lag. I don't think what they have come out with is anywhere near a correct, or even sensible, solution.

Brian Valentine
Posts: 577
Joined: Fri Apr 03, 2009 1:30 pm

Re: What are the new conversion formulae?

Post by Brian Valentine » Sat Sep 12, 2009 3:15 pm

Sean,
Thank you for the historical record. It gives me more to think about!

Brian Valentine
Posts: 577
Joined: Fri Apr 03, 2009 1:30 pm

Re: What are the new conversion formulae?

Post by Brian Valentine » Sat Sep 12, 2009 3:22 pm

Michael,
A couple of remarks from one of those insurance chappies.
I'm pleased to see that the matrix solution might be possible; it would certainly be sounder than the iterative solution. If so, the grading team should consider using it (although they really ought to be changing the approach for juniors in particular).

I think the point of the first estimate of the grade used in the iterative approach is that the iterative approach is not guaranteed a unique solution. A reasonable first guess should get the method to the most sensible solution. I put an oblique comment on this point in my paper on the matrix method.

E Michael White
Posts: 1420
Joined: Fri Jun 01, 2007 6:31 pm

Re: What are the new conversion formulae?

Post by E Michael White » Sat Sep 12, 2009 4:17 pm

Roger de Coverly wrote:I believe they plotted the distribution of grades and got a normal-shaped curve. They noticed that the mean and median seemed to be shifting upwards and changed the +10 age related scale to an age dependent one. So it wasn't the top rated players there were the issue. Even back then, the international rating was the more important one for the titled players.
Well if they expected the mean and median to stay the same that would be silly. They should have been testing to see if they had increased enough to compensate for players getting stronger.
Roger de Coverly wrote:Nigel Short was 255 in 1988, Jon Speelman 256, John Nunn 252, Murray Chandler 252, Mark Hebden 235, Mickey Adams 240 (age 16!), Keith Arkell 230, Julian Hodgson 235, Matthew Sadler 221 (age 14!), Glenn Flear 236. Junior increment was added after publication in those days.
I can't see anyone sensible arguing against those grades. Allowing for age, the higher-rated had exceptional talent. I guess the graders compared players with Penrose, who at about age 30 was around 240, and didn't realise how much stronger the next cohort were.

In the U18 event at the 1969 British, 87 players took part. With that level of interest and opportunity to play, strong players were certain to emerge in the 70s and 80s. I guess the grading team back then missed a trick or two.