Maxiton said: But if his math is mistaken and some of his assumptions misplaced, how does that invalidate Vayer's essential thesis that statistical analysis can be applied to athletic performance to reveal that which remains opaque to conventional drug testing: continued cheating in sport?

I didn't say a statistical analysis approach is not useful; rather, I'm making two points:
(i) Why do the analysis on possibly (or probably) flawed calculated numbers, when doing it on the actual climbing-time data removes the vast bulk of the errors and assumptions? The Top 200 ADH times from that Finnish forum post are much more insightful, if you ask me (and even with those, some of the times are debated).
(ii) How does a statistical analysis of climbing times (or W/kg estimates derived from them) provide any insight into the vast bulk of riders who are not contesting the climb, but merely finishing the stage?
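Point (i) can be illustrated with a minimal sketch. Every number below is made up purely for illustration; the point is that raw climbing times can be summarised and compared directly, with no physics model and no W/kg conversion at all.

```python
# Minimal sketch (hypothetical numbers): comparing actual climbing times
# directly, with no W/kg conversion and hence no physics assumptions.
# Times are Alpe d'Huez ascents in minutes, grouped by era.

alpe_dhuez_times_min = {
    "1990s": [36.7, 37.3, 37.6, 38.1, 38.4],  # made-up illustrative values
    "2010s": [39.0, 39.4, 39.9, 40.2, 40.8],  # made-up illustrative values
}

def mean(xs):
    return sum(xs) / len(xs)

for era, times in alpe_dhuez_times_min.items():
    print(f"{era}: best {min(times):.1f} min, top-5 mean {mean(times):.1f} min")
```

A shift of a minute or two in the top-times distribution between eras already tells the story, and nothing in the comparison depends on assumed weights, wind or road surface.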
Maxiton said: And furthermore, how do his presumed errors indicate that his sole motive is to make money?
It doesn't; publishing does, though you're right that the authors rarely make much from such exercises.
Maxiton said: Those who rock the boat are usually confined to the lower decks.

Well, I can't subscribe to your entire theory: when there's a choice between a conspiracy and a screw-up, 99% of the time it's a screw-up. But you'll get no argument from me on this latter point; I have seen it first hand in a number of countries and sports.
Maxiton said: Statistical analysis is the least assailable method of finding patterns of cheating in sport.

Statistical analysis of, say, baseball, cricket, basketball, football or track-and-field athletes is not the same, because for the most part all players in those sports are seeking to perform at a high standard all the time (cheating for match fixing aside).
In cycling, though, only those for whom a top finish atop a mountain matters will put in a best effort on a race-deciding climb. Such an approach therefore ignores the majority and hence, as a diagnostic for tracking down dopers, is not all that helpful. We are already onto the leading contenders, so what extra insight does it add?
Maxiton said: Where the numbers are wrong, plug in more accurate numbers for an improved model and better results. When used in conjunction with a serious testing regime, Vayer's method (also long endorsed by Greg Lemond) could help lead to a truly healthy sport.

Firstly, I'm arguing that one should simply look at the numbers we know to be correct (i.e. the actual climbing times), or at least those most likely to be least inaccurate, and not introduce unnecessary error by making W/kg calculations.
Secondly, while the maths could be improved, the data required to improve the precision of the estimates does not exist. The biggest flaw in presenting these W/kg values is presenting them without error bars reflecting the unknowns and assumptions made, because once you add those error bars you realise the futility of the exercise.
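The error-bar point can be made concrete with a quick Monte Carlo sketch. Every input value and spread below is an assumption chosen for illustration, and the model is gravity-only: rolling resistance, drafting and wind are ignored, so the true uncertainty is wider still.

```python
# Minimal sketch (all inputs assumed/hypothetical): propagate uncertainty
# in a gravity-only climbing-power estimate,
#   P = m_total * g * height_gain / time,   W/kg = P / m_rider,
# via Monte Carlo, to see how wide the error bars on W/kg really are.
import random

random.seed(1)
g = 9.81  # gravitational acceleration, m/s^2

def wkg_sample():
    m_rider = random.gauss(67.0, 2.0)        # rider mass known only to ~2 kg
    m_kit   = random.gauss(8.0, 1.0)         # bike + clothing + bottles, kg
    height  = random.gauss(1090.0, 20.0)     # vertical gain of the climb, m
    t       = random.gauss(39.5 * 60, 15.0)  # climbing time, s (timing error)
    p_grav  = (m_rider + m_kit) * g * height / t  # gravity-only power, W
    return p_grav / m_rider

samples = sorted(wkg_sample() for _ in range(20000))
lo, mid, hi = samples[500], samples[10000], samples[19500]  # ~95% interval
print(f"W/kg estimate: {mid:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```

With these (assumed) input uncertainties alone, the 95% interval already spans several tenths of a W/kg, which is comparable to the differences such analyses claim to detect between riders.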
Maxiton said: It seems to me that this or that error in math is a straw man here, and that only those who have a vested interest in maintaining the status quo could object to Vayer on that basis.
So if I like my maths and data to be correct, I support the status quo? Or doping? What nonsense.
If you want to prove something, or to change people's understanding, it helps to have your maths right. Why undermine your argument with sloppy maths, especially when the maths isn't even required to make your point? Just use the climbing times; they are more than sufficient for the task.
I would argue that introducing a large degree of uncertainty, through loosely known input assumptions and shaky maths, does more damage to "the cause" by providing wiggle room for those who seek it; and who could blame them for walking through that particular door?