Mike Gunn wrote: ↑Sat May 30, 2020 7:27 pm
Now I must admit that although I have downloaded Ken Regan’s papers I haven’t (yet) got round to reading them, so I’m not really in the area of being able to do a proper critique of his method, but when I heard him going on about Z values in Tuesday’s webinar I realised that it is all based on the area under the tail of a normal (or similar statistical) distribution. These calculations are all very well when it comes to truly random physical events (coin tossing etc) but perhaps not to events based on human decision taking.
I've reproduced just the last paragraph of Mike's earlier post, although the whole is worth a read. Ken Regan's presentation on Friday mentioned a game between Carlsen and Anand; I'm mainly addressing here those who didn't take part in the webinar. Some way through this game, Anand decided that the correct plan was to advance his a-pawn, which he did on moves n, n+1 and n+2. It was actually the wrong plan, as a result of which all three moves were identified as blunders.
A normal-distribution calculation of this kind relies on successive events being independent, which should certainly be the case if one is tossing a coin. In the Carlsen-Anand game, successive events plainly weren't independent - Anand's thinking at move n governed his decision at move n+1, and one or both of these governed his decision at move n+2. Logic, rather than statistics, suggests that - all other factors being equal - someone who makes three independent blunders during a game is playing worse than someone who makes one false assessment which results in three closely related blunders. Ken Regan's solution was to attach a lower statistical weighting, determined empirically, to Anand's blunders at moves n+1 and n+2.
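To make the weighting idea concrete, here is a toy sketch of my own - the numbers and the weight of 0.4 are invented for illustration, and this is not Regan's actual model, only the general shape of the adjustment as I understand it:

```python
from math import sqrt

def z_score(observed_blunders, expected, variance):
    """Standard Z value: how far the observed blunder count sits
    out in the tail of the (approximately normal) expected distribution."""
    return (observed_blunders - expected) / sqrt(variance)

# Hypothetical figures: a player expected to blunder 2.0 times per game
# (variance 1.8) actually blunders 5 times.
expected, variance = 2.0, 1.8

# Treating all five blunders as independent events:
z_independent = z_score(5, expected, variance)

# Down-weighting two of them (weight 0.4 each, an invented figure)
# because they follow from the same wrong plan as an earlier move -
# the illustrative analogue of an empirically determined weighting:
weight = 0.4
effective_blunders = 3 + 2 * weight
z_weighted = z_score(effective_blunders, expected, variance)

print(round(z_independent, 2), round(z_weighted, 2))  # 2.24 1.34
```

The down-weighted Z value is noticeably smaller, so the same game looks correspondingly less suspicious once the related blunders are treated as one underlying mistake.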
The Carlsen-Anand game was a particularly obvious example of a very common phenomenon with which most players will be familiar - that of following a previous line of thought when fresh thinking was required - but identifying such occurrences from a bare scoresheet strikes me as very difficult indeed. Where one can't identify them, one can't compensate for them. That implies some deviation from a normal distribution - I can't say how much, and my totally unscientific guess is 'not very much' - but it's typical of situations where the human factor complicates statistically based decision-making.
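To put a rough number on that deviation, here is a toy simulation of my own (the probabilities and game lengths are invented, and again this has nothing to do with Regan's real data): two models with the same average of three blunders per game, one where blunders are independent and one where each wrong plan produces three related blunders in a clump.

```python
import random
random.seed(1)

N = 20000  # simulated games per model

# Independent model: 30 decisions, each an unrelated blunder
# with probability 0.1 (mean 3 blunders per game).
indep = [sum(random.random() < 0.1 for _ in range(30)) for _ in range(N)]

# Clumped model: 10 "plans", each wrong with probability 0.1, and a
# wrong plan produces three related blunders (same mean of 3 per game).
clump = [3 * sum(random.random() < 0.1 for _ in range(10)) for _ in range(N)]

# Same average, but the clumped counts land in the extreme tail
# (7+ blunders) far more often:
tail_i = sum(c >= 7 for c in indep) / N
tail_c = sum(c >= 7 for c in clump) / N
print(tail_i, tail_c)  # the clumped tail is noticeably fatter
```

The point of the sketch is that clumping inflates the variance without changing the mean, so an extreme blunder count is less damning evidence than an independence-based tail calculation would suggest - which is exactly why some compensating weight for related blunders is needed.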