Nick Grey wrote:
    Grey, Nicholas D
    1793
    99074 in the top 100,000
    1256 in ENG list.

By comparison, FIDE's Elo method has you at 1745, for a world rank of 72568 and an English rank of 1042.
MartinCarpenter wrote:
    If the Universal people are doing a total recalculation based purely on the raw results of the games it could well produce a non-trivial improvement in terms of accuracy.

Bad news for some players though - a quick perusal of ENG titled players shows a few very large drops, e.g.

Code: Select all
    Levitt, JP    2404 FIDE, 2143 URS
    Martin, AD    2364 FIDE, 2194 URS
    Wicker, KJ    2275 FIDE, 1762 URS
    Kenworthy, G  2249 FIDE, 1633 URS
    Bennett, GH   2320 FIDE, 1439 URS
Ian Thompson wrote:
    According to URS Bennett played 7 games in December; FIDE has no record of this. I thought he stopped playing about 20 years ago.

It looks as if they have some problems with player identification. I don't think Kevin Wicker has played in recent times either.
Ian Thompson wrote:
    Bad news for some players though - a quick perusal of ENG titled players shows a few very large drops...

I note IM Mark Condie languishing in 181,103rd place with a rating of 1532.
Alistair Campbell wrote:
    I'm intrigued at the thought process behind this calculation.

For all the prospect of an exciting new rating system, they haven't done basic plausibility tests. With the exception of those awarded titles as prizes, there should not be titled players with low ratings.
Roger de Coverly wrote:
    The lowest GM is Serbian Stanimir Nikolic at 2013, which as he was born in 1935 has an air of plausibility.

Plausibility? You do talk utter rubbish sometimes, Roger. Just because a GM is 81 years old doesn't mean he or she has gone ga-ga. He was still playing rated games just over two years ago, with those games appearing in the November 2014 FIDE rating list. Rated at 2243, so no, not at all plausible. More likely he is just inactive or retired. The URS system clearly doesn't cope well with that.
Roger de Coverly wrote:
    they haven't done basic plausibility tests.

They haven't done what they said they would do in their FAQs:

http://universalrating.com/faqs.php wrote:
    Players who have not played for more than a year will still have their rating recalculated as new data comes in, but are considered by the system to be Inactive. Inactive players are removed from all published lists on the URS™ website.

Doing what they said they would do would deal with long-inactive players. As they haven't done that, the obvious questions are how they have calculated a rating for anyone who has played 0 games in the last 6 years, and why all such players don't have the same (arbitrary) rating, if they are to have a rating at all.
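The FAQ rule, as quoted, amounts to: keep recalculating everyone, but flag anyone without a game in the past year as Inactive and drop them from published lists. A minimal sketch of that filter (the player records and field names here are invented for illustration; URS's actual data model is not published):

```python
from datetime import date, timedelta

# Hypothetical player records purely for illustration.
players = [
    {"name": "Active, A", "rating": 2400, "last_game": date(2017, 1, 10)},
    {"name": "Dormant, D", "rating": 2243, "last_game": date(2014, 11, 1)},
]

def published_list(players, today):
    """Per the quoted FAQ: players more than a year without a game keep
    a (recalculated) rating but are removed from all published lists."""
    cutoff = today - timedelta(days=365)
    return [p for p in players if p["last_game"] >= cutoff]

print([p["name"] for p in published_list(players, date(2017, 2, 1))])
```

On that reading, a player last active in 2014 should simply be missing from the lists, not shown with a recalculated low rating.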
Ian Thompson wrote:
    As they haven't done that, the obvious questions are how have they calculated a rating for anyone who has played 0 games in the last 6 years, and why don't all such players have the same (arbitrary) rating, if they are to have a rating at all?

My theory would be that the data has become contaminated in some manner, either through the identity codes being scrambled or through some test data remaining in the live system. Once FIDE and the ECF started to publish detailed results on their respective websites, it greatly increased the credibility of the calculations.
E Michael White wrote:
    I suspect all this is a lot of nonsense, but which are the best links to read for the basis and final ratings?

There's a website containing both the data and what little they are prepared to reveal about the calculation methods and how they should be interpreted.
Elo's original formulation, as I understand it, was that a game of chess between two equally rated players could have any of three results, although he never came up with a model for the draw percentage. He went on to observe that for players with different ratings the results would be tilted towards the higher-rated player, from which he built the "actual - expected" model for modifying ratings. He added that a purely random rating system would predict the winner of a decisive game only 50 percent of the time, while a perfectly accurate system would predict the winner of a decisive game 100 percent of the time. His goal for URS is "75-80 percent", which according to him is better than current prediction systems achieve.
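For reference, the standard Elo "actual - expected" update can be sketched as below. This is the textbook Elo formula, not the (undisclosed) URS method, and the K-factor of 20 is an arbitrary choice for the example:

```python
def expected_score(r_a, r_b):
    # Elo's logistic expectation: A's expected score against B,
    # scaled so a 400-point gap gives roughly a 10:1 odds ratio.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=20):
    # Shift A's rating by K * (actual - expected); score_a is 1, 0.5 or 0.
    return r_a + k * (score_a - expected_score(r_a, r_b))

# Two equally rated players: expectation is exactly 0.5,
# so a win moves the winner up by K/2 points.
print(expected_score(1800, 1800))   # 0.5
print(update(1800, 1800, 1.0))      # 1810.0
```

Note that the expected score folds wins and draws together (a draw counts as half a win), which is precisely why Elo never needed a separate model for the draw percentage.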
Ian Thompson wrote:
    Roger de Coverly wrote:
        There's an underlying historic premise in rating systems that two players of the same rating should have a 50-50 chance. In that context I really don't see what the Rgap and Bgap are trying to achieve.

    Are they not, effectively, giving you Rapid and Blitz ratings? Take Adams and Short, for example. Their ratings and gaps are:

    Code: Select all
        PlayerName       URating  RGap  BGap
        Adams, Michael      2730    29    78
        Short, Nigel D      2672    54   112

    So Adams is 58 points better at classical time limits, 83 points better at rapidplay and 92 points better at blitz. If the method had been disclosed, you could then calculate Adams' expected score against Short at each of these time limits.

I don't understand this point. Not only are you assuming that the Gap is a positive difference, it does not appear to consider Adams being inactive in Rapid and Blitz. In Rapid, the current FIDE rating list has Adams at 2741(i) and Short at 2744, so the gaps are somewhat bizarre. I know it has various weightings allowed for past performances and ratings at the time, but you are almost suggesting that the expected score is similar to a rapid match of Short's 2658 vs Adams's 2741.
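For what the arithmetic is worth: if one assumes the gaps are simply subtracted from the universal rating to give rapid and blitz ratings, and that an Elo-style logistic expectation applies (neither of which URS has confirmed), the numbers in the table work out as follows:

```python
def expected_score(r_a, r_b):
    # Standard Elo logistic expectation; URS's actual formula is undisclosed.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

adams = {"urating": 2730, "rgap": 29, "bgap": 78}
short = {"urating": 2672, "rgap": 54, "bgap": 112}

for label, a, s in [
    ("classical", adams["urating"],                 short["urating"]),
    ("rapid",     adams["urating"] - adams["rgap"], short["urating"] - short["rgap"]),
    ("blitz",     adams["urating"] - adams["bgap"], short["urating"] - short["bgap"]),
]:
    # Gaps of 58, 83 and 92 points reproduce the figures quoted above.
    print(f"{label}: Adams {a} vs Short {s}, gap {a - s}, "
          f"expected score {expected_score(a, s):.2f}")
```

Under those assumptions Adams' expected score per game would be roughly 0.58 at classical, 0.62 at rapid and 0.63 at blitz; but as the thread notes, without the method being disclosed this is guesswork.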