
Power Data Estimates for the climbing stages

Alex Simmons/RST said:
I didn't say that, you did.

Even Velo says wind matters quite a bit, easily 5-10%. Yes the other factors less so and the estimates are less sensitive to those factors. But not "almost irrelevant".

These things matter even more so when inspecting one data point in isolation from all the others. Another point Velo makes in the links you provide.

And Velo also says the modelling to use is the Martin et al model, which is what I use - except I don't apply climb averages, rather I segment climbs to deal with variable gradients and wind vectors.

So thanks for the links, they nicely reinforce my point.
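The segmented approach described above can be sketched roughly as follows. This is a generic Martin et al.-style power balance, not Alex's actual code, and every parameter value (mass, CdA, Crr, air density, drivetrain efficiency) is an illustrative assumption:

```python
import math

def segment_power(speed, gradient, wind_along, mass=70.0,
                  cda=0.32, crr=0.004, rho=1.0, dt_eff=0.976):
    """Estimate rider power on one climb segment (Martin et al.-style balance).

    speed      : ground speed in m/s
    gradient   : rise/run as a fraction (e.g. 0.08 for 8%)
    wind_along : wind component along the direction of travel in m/s
                 (positive = tailwind, reducing relative air speed)
    All parameter values here are illustrative assumptions.
    """
    g = 9.81
    air_speed = speed - wind_along                       # relative air speed
    p_aero = 0.5 * rho * cda * air_speed**2 * speed      # aerodynamic drag
    p_roll = crr * mass * g * speed                      # rolling resistance
    p_grav = mass * g * speed * gradient / math.sqrt(1 + gradient**2)  # climbing
    return (p_aero + p_roll + p_grav) / dt_eff           # drivetrain losses

# Averaging over segments with different gradients and wind vectors,
# rather than applying a single climb average:
segments = [  # (speed m/s, gradient, wind-along m/s, duration s) - made up
    (6.0, 0.08, 2.0, 300),   # tailwind section
    (5.5, 0.10, -2.0, 300),  # headwind section after a switchback
]
total_work = sum(segment_power(v, gr, w) * t for v, gr, w, t in segments)
avg_power = total_work / sum(t for *_, t in segments)
```

The point of segmenting is visible in the numbers: the headwind section needs noticeably more power at a lower speed than the tailwind section, which a climb-average calculation smears out.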
What mountain is in a straight line to the top again?

Don't the corners on a mountain make a headwind a tailwind, and the reverse?
 
Fearless Greg Lemond said:
What mountain is in a straight line to the top again?

Don't the corners on a mountain make a headwind a tailwind, and the reverse?

Here you go - for the flat earthers - not you

On a typical climb with a "perfect" tail wind, only about a third of the movement is going to be in the same direction as the wind, cutting the effect down by two thirds. A perfect cross wind, on the other hand, effectively means the rider is alternating between head wind and tail wind sections. Remembering the math above, the net effect is that a perfect cross wind will slow a rider down more than it helps them, again resulting in an underestimate.
 
dearwiggo.blogspot.com.au
Alex Simmons/RST said:
And Velo also says the modelling to use is the Martin et al model, which is what I use - except I don't apply climb averages, rather I segment climbs to deal with variable gradients and wind vectors.

So thanks for the links, they nicely reinforce my point.

lollercopters.

Even Velo says wind matters quite a bit, easily 5-10%

Good god man. Too much time watching Fox News and not enough time increasing your copy and paste buffer!!

Now it is true strong consistent head or tail wind can throw an estimate off by 5-10% or more. But as we can see from the 250 climbs above, this scenario looks like a rare event in Pro-tour races.

Here's the take away point:
Statistically the estimate <snip> “very likely” to be within +/- 2.7% (95% confidence interval). For perspective, the “gold standard” SRM power-meters are only reported to be accurate to within +/- 1-2% themselves.

The complete and utter irony that you would boast of doing segment by segment analysis as if you have instantaneous wind measurement - and it's at the level of the rider, inside the crowd and the buses and the motorcycles and other riders so getting a true reading. Don't tell me you're making ejumacated guesses as to the difference from where the wind was measured - you know 2+ km away, at 10m asl, then extrapolating it down into the crowd and what not? tsk tsk. Full disclosure please - or is this precious IP that cannot be released for fear someone will copy it?

And then post a graph that, whilst interesting, when added to all the other variables nets a total 2.7% error over 250 climbs analysed, saying, "see, you've proven my point!"

Goodness.

I am curious what your 95%CI analysis yields in terms of measured vs calculated power up climbs?
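The 95% CI analysis being asked about here amounts to comparing estimated against measured power over many climbs and reporting the spread of the percentage errors. A minimal sketch, with the watt figures invented purely to show the arithmetic:

```python
import statistics

# Hypothetical measured (power meter) vs. estimated watts over climbs.
measured  = [380, 402, 365, 410, 395, 372, 388, 401]
estimated = [385, 395, 370, 420, 390, 368, 395, 399]

# Percentage error of each estimate relative to the measurement.
pct_err = [100 * (e - m) / m for e, m in zip(estimated, measured)]
mean = statistics.mean(pct_err)
sd = statistics.stdev(pct_err)

# Assuming roughly normal errors, ~95% fall within mean +/- 1.96 sd.
low, high = mean - 1.96 * sd, mean + 1.96 * sd
print(f"95% of errors within {low:.1f}% to {high:.1f}%")
```

A real validation would use hundreds of climbs, as in the 250-climb figure quoted earlier in the thread.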
 
Alex Simmons/RST said:
Even Velo says wind matters quite a bit, easily 5-10%. Yes the other factors less so and the estimates are less sensitive to those factors. But not "almost irrelevant".

You don't get it. Anything that adds context to the discussion, but might dilute the narrative, is "Almost irrelevant"
 
Alex Simmons/RST said:
Even Velo says wind matters quite a bit, easily 5-10%. Yes the other factors less so and the estimates are less sensitive to those factors. But not "almost irrelevant".

These things matter even more so when inspecting one data point in isolation from all the others. Another point Velo makes in the links you provide.

And yet when I and others call for more transparency in publishing power tap numbers, you and others argue that it won’t do any good.

Your own website publishes power curves that show that the line in the sand that Tucker draws is based on a reasonable belief that very high efficiencies, > 24%, are unlikely. There have been a couple of intriguing studies suggesting that some pros may have such high efficiencies. But no team is willing to be transparent about this, either.

The basic line taken by you, Coggan, Walsh, Sky, and most other teams and riders is:

1) climbing times are subject to too many confounding factors, particularly with regard to individual riders
2) we aren’t willing to exhibit the transparency necessary to provide the power tap numbers that would eliminate the confounding factors, and gather the population data that at least would enable us to make better estimates about how much cleaner the peloton may be now.

Supporters of this line repeat like a mantra that it takes positive tests, yet when once in a blue moon an Impey gets busted, he then gets off under extremely suspicious circumstances, and most people are fine with that, too.
 
Fearless Greg Lemond said:
What mountain is in a straight line to the top again?

Don't the corners on a mountain make a headwind a tailwind, and the reverse?

Peña Cabarga is a straight-line climb. Horner had a tail wind there in 2013.

But agree. A twisting mountain will oscillate the wind. Ventoux would be a good example.
 
Merckx index said:
And yet when I and others call for more transparency in publishing power tap numbers, you and others argue that it won’t do any good.

We know one guy who thinks it will do good

Greg LeMond said:
“It’s bull****. That’s bull****. Because if you can’t release your watts … they’re doing it right now,” he said of teams reviewing power data following the stage. “They’re looking at it right now, bottom to the top.

http://velonews.competitor.com/2013...or-froome-others-to-release-power-data_295268
 
Fearless Greg Lemond said:
What mountain is in a straight line to the top again?

Don't the corners on a mountain make a headwind a tailwind, and the reverse?

Have I ever said it does any of the above?

My point is that neither you nor I nor anyone else knows, and hence estimates need to account for this unknown by including an analysis of the errors; instead, figures are published with an unjustified level of precision.
 
Dear Wiggo said:
The complete and utter irony that you would boast of doing segment by segment analysis as if you have instantaneous wind measurement - and it's at the level of the rider, inside the crowd and the buses and the motorcycles and other riders so getting a true reading. Don't tell me you're making ejumacated guesses as to the difference from where the wind was measured - you know 2+ km away, at 10m asl, then extrapolating it down into the crowd and what not? tsk tsk. Full disclosure please - or is this precious IP that cannot be released for fear someone will copy it?

And then post a graph that, whilst interesting, when added to all the other variables nets a total 2.7% error over 250 climbs analysed, saying, "see, you've proven my point!"

Goodness.

I am curious what your 95%CI analysis yields in terms of measured vs calculated power up climbs?

Again, you miss the point. Nobody knows what the wind conditions are along the course. Unless you do, it would be wise to provide an allowance for it in any estimate based on incomplete data.
 
Merckx index said:
And yet when I and others call for more transparency in publishing power tap numbers, you and others argue that it won’t do any good.
My argument is that we already know who the top professional riders are, and whether or not they have power meters, release their power data or estimates are made, it won't make a sod of difference to the anti-doping effort.

Merckx index said:
Your own website publishes power curves that show that the line in the sand that Tucker draws is based on a reasonable belief that very high efficiencies, > 24%, are unlikely. There have been a couple of intriguing studies suggesting that some pros may have such high efficiencies. But no team is willing to be transparent about this, either.
So teams not releasing data is my fault? I know you don't really mean to imply that, but that's what it sounds like. But let's say you knew a rider's GME. Then what?

The post you are referring to says nothing about where the lines in the sand are, rather it simply demonstrates the nature of the relationship of GME, VO2max, fractional utilisation of VO2max, and just how blurry those lines in fact are.

Merckx index said:
The basic line taken by you, Coggan, Walsh, Sky, and most other teams and riders is:

1) climbing times are subject to too many confounding factors, particularly with regard to individual riders
2) we aren’t willing to exhibit the transparency necessary to provide the power tap numbers that would eliminate the confounding factors, and gather the population data that at least would enable us to make better estimates about how much cleaner the peloton may be now.
I can't and don't speak for Coggan, Walsh or Sky.

The points above are not my position.
As for 1), see i., ii., iii. & iv. below.
As for 2), see v. below.

My points are:

i. estimates of power from climbing times are subject to errors, some factors are not measured and are unknown and hence an error analysis should be provided with such estimates. Some unknowns are not overly significant, but some are. I consider that the estimates that are regularly published imply a false level of precision. This is a separate issue to how one then goes on to interpret such data.

ii. Now we really don't need power estimates when comparing the same climb over long periods, as the climb times alone are sufficient for the purpose (and come with a better level of precision). Then all you do is look at the trends and note variations from year to year of the group averages (which might be to do with race context, environmental conditions, level of doping impact etc) as well as relative performance of individuals in any given year.

iii. The reason power estimates are made is to see if comparisons can be made between different climbs. That's when the issue of precision of estimates becomes a little more problematic because you are now estimating power with some unknowns as inputs. The level of precision should be conveyed, that way we can then see how power output estimates are really ranges rather than a point value.

iv. It's very important not to place too much reliance on an individual data point, rather use of such data (e.g. climbing times) should be to examine the overall trends. Ross Tucker has gone to great pains to make this point, I think he uses the term "pixellation" but I might have the exact phrase wrong. Yet time and again people fall into the trap of being focussed on single data points/estimates. I'm not saying you do this, but rather I see people place more reliance on individual data points than they should.

v. Even if you had the rider's (or riders') data, it's still not going to make a difference to the anti-doping effort. Tell me, let's say a rider released all of his power meter data (or that we had precise estimates). Now what?

We already know the riders that should be targeted for doping control. They are pros and ride bikes. The real problem is with woefully inadequate testing and detection processes. No amount of power data or estimates will overcome that.
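Point iii above, conveying an estimate as a range rather than a point value, is a trivial calculation once a precision is stated. A sketch, with the figures chosen purely for illustration (the 2.7% matches the CI quoted earlier in the thread; the 6.2 W/kg is made up):

```python
# Convert a point estimate plus a stated precision into the range it
# actually represents. Figures are illustrative, not from the thread.

def as_range(estimate_wkg, precision_pct):
    """Return (low, high) bounds for an estimate with +/- precision_pct."""
    delta = estimate_wkg * precision_pct / 100
    return (estimate_wkg - delta, estimate_wkg + delta)

low, high = as_range(6.2, 2.7)
print(f"6.2 W/kg +/- 2.7% -> {low:.2f} to {high:.2f} W/kg")
```

Published as a range like this, two estimates that differ by a few tenths of a W/kg are visibly indistinguishable, which is the whole point about false precision.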


Merckx index said:
Supporters of this line repeat like a mantra that it takes positive tests, yet when once in a blue moon an Impey gets busted, he then gets off under extremely suspicious circumstances, and most people are fine with that, too.
I'm most definitely not fine with the Impey case, so let me dispel the notion you imply that I am. A read of my posts on the Impey threads should make that pretty clear, but then I am also not actually making the points you suggest I am, so I can't say whether or not there is a correlation between those that do and how they view cases such as Impey's.
 
dearwiggo.blogspot.com.au
Alex Simmons/RST said:
Again, you miss the point. Nobody knows what the wind conditions are along the course. Unless you do, then it would be wise to provide an allowance for it in any estimate based on incomplete data.

I think you are still missing the point.

Despite not knowing the wind conditions, the estimates, compared to the actual recorded data, are within +/- 2.7%. I consider the accuracy of the confounding factors mostly irrelevant when your overall accuracy is that close.

I notice, too, that despite the bragging about segment-by-segment analysis, you do not provide even a summary of your accuracy.

Telling.
 
Dear Wiggo said:
I think you are still missing the point.

Despite not knowing the wind conditions, the estimates, compared to the actual recorded data, are within +/- 2.7%. I consider the accuracy of the confounding factors mostly irrelevant when your overall accuracy is that close.

I notice, too, that despite the bragging about segment-by-segment analysis, you do not provide even a summary of your accuracy.

Telling.

Well, I feel like rambling something vaguely coherent and mildly on topic.

One of the few major assignments I ever got perfect on in University was a fluid mechanics lab in 2nd year mechanical engineering. They had this big apparatus with liquid going through a series of valves, fittings, and bends. Pressure, flow rate, fluid velocity, etc., were measured at different points along the flow.

All these years later, I can't remember exactly what it was we were supposed to calculate. Maybe the total head-loss (ie pressure loss) of the system from in to out? Anyway, it doesn't really matter for the example.

I did my calculations, and I got my answer. And then, I did something most undergrads don't bother with--all of the error propagation. Which was insanely lengthy. We're talking relatively simple equations, but they're about a mile long. What a pain. Pretty much every parameter had an error, that had to be propagated.

The result was that I had an overall error so great as to make the entire experiment worthless. By now it was about 5 am. Having stayed up all night writing a lab report on what was ultimately a useless experiment, I was ticked, and wrote as my main conclusion that the lab TAs needed to learn how to set up a meaningful apparatus.
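For anyone who hasn't done the exercise described above: for a result built from products and quotients of independent measured inputs, the standard shortcut is that relative uncertainties add in quadrature. A minimal sketch, with the percentages invented for illustration:

```python
import math

# Propagation of independent relative (fractional) errors in quadrature,
# the standard first-order rule for products/quotients of measurements.
# The input uncertainties below are illustrative, not from any real lab.

def combined_relative_error(*rel_errors):
    """Quadrature sum of independent relative errors (as fractions)."""
    return math.sqrt(sum(e**2 for e in rel_errors))

# e.g. flow rate +/-3%, pressure +/-2%, geometry +/-4%:
total = combined_relative_error(0.03, 0.02, 0.04)
print(f"combined relative error: +/-{total*100:.1f}%")
```

With many uncertain inputs the combined error grows quickly, which is exactly how an apparatus full of valves and fittings can end up with an error bar bigger than the effect being measured.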

Having said all that, I think, wiggo, you are taking the right approach here. Instead of trying to come up with the uncertainty in each of the input parameters, we validate the model against actual, known readings.

We have various models. It seems Ferrari's is most favored, and the simplest. It seems that this model has been validated, repeatedly, to be within a certain percentage of the actual power. We're talking maybe +/- 5% accuracy, at the 95% confidence interval.

I could believe that.

Maybe one climb in 20, Horner catches a real sweet tailwind, and his estimated power ends up being significantly above his real power. I could believe this. BUT, overall, most of the time, the estimate will be within a few percent of the actual.
 
Alex Simmons/RST said:
I think understanding the nature of errors in making estimates and pointing out that an unjustified level of precision is consistently reported is worthwhile.
Of course, estimates aren't the Holy Grail.

That leaves us with what? Just the stopwatch? A stopwatch doesn't lie.

Here is a stopwatch:

Marco Pantani 21.19 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Ryder Hesjedal 22.22 min Pampeago Tesero 695 6.6K Giro d'Italia 2012

Alex Zülle 22.24 min Pampeago Tesero 695 6.6K Giro d'Italia 1998

Gilberto Simoni 22.26 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Roberto Heras 22.46 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Of course Hesjedal had a tailwind there...
 
Fearless Greg Lemond said:
Of course, estimates aren't the Holy Grail.

That leaves us with what? Just the stopwatch? A stopwatch doesn't lie.

Here is a stopwatch:

Marco Pantani 21.19 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Ryder Hesjedal 22.22 min Pampeago Tesero 695 6.6K Giro d'Italia 2012

Alex Zülle 22.24 min Pampeago Tesero 695 6.6K Giro d'Italia 1998

Gilberto Simoni 22.26 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Roberto Heras 22.46 min Pampeago Tesero 695 6.6K Giro d'Italia 1999

Of course Hesjedal had a tailwind there...

You are right, FGL

Luis Herrera 41.50 Alpe d'Huez 1987

Laurent Fignon 40.56/41.41 Alpe d'Huez 1991

Lance Armstrong 41.23 Alpe d'Huez 1999

Andy Schleck 42.07 Alpe d'Huez 2011

Alberto Contador 42.17 Alpe d'Huez 2011

Ivan Basso 43.16 Alpe d'Huez 2011

Greg LeMond 41.42/42.27 Alpe d'Huez 1991

Ryder Hesjedal 42.25 Alpe d'Huez 2011

There are two different sources for 1991.
 
pmcg76 said:
Sense of humour fail here. He implied below 41.00 was not possible clean, so the logical follow-on is that above that time is possible clean. I think Ferminal is well aware of that and the arbitrary drawing of a line was a joke on his part.
Sure, but why would Valverde be clean just because his time was possible clean?
 
Fearless Greg Lemond said:
Not a bad time at all, he could go for the GC next year, maybe training with Horner?

;)

But what are you exactly implying?

Haha, maybe he could do. Back to the future style there.

What am I implying? Simply that it is possible to cherry pick stats to show whatever you want to show. Thus why measuring individual years against each other is kinda pointless.

Looking at trends is more relevant.

For example, two periods of analysis for Alpe d'Huez: 2001-03-06 versus the Bio-Passport years 2008-11-13. No 2004 TT included.

In the top 50 times for Alpe d'Huez, 9 are from the early 00s era, 3 from the Bio-Passport era.

In the top 100 times for Alpe d'Huez, 24 are from the early 00s era, 6 from the Bio-Passport era.

In the top 200 times for Alpe d'Huez, 43 are from the early 00s era, 23 from the Bio-Passport era.

By taking 3-year samples, it lessens the impact of factors like wind, race tactics etc.; it is still possible that conditions were more favourable over a 3-year period, but it is less likely.

Taking into consideration that the Bio-Passport era times are primarily in the lower half of the ranking, this would indicate a more linear progression in times from those being set in the 80s. Of course there are outliers like Quintana but the overall trend can be discerned from the data shown.

As I said, this is still not definitive, but a trend over a 3-year period is more relevant than comparing individual years. Also, picking one climb does not necessarily tell the whole story, but Alpe d'Huez is still one of the most used climbs in the Tour.

Someone said Ventoux, but it has been in the Dauphiné more, and having personally climbed both mountains, I do think Ventoux is more exposed to the elements than Alpe d'Huez, and is also more linear in comparison to the Alpe, which switchbacks a lot of course.
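The top-N comparison described above boils down to bucketing each climb time by era and counting how many of the fastest N come from each. A sketch with invented times (the real analysis used the actual Alpe d'Huez rankings):

```python
from collections import Counter

# Hypothetical (minutes, era) sample, standing in for a real top-times
# table. The era labels mirror the two periods compared in the post.
times = [
    (37.6, "2001-06"), (38.0, "2001-06"), (38.3, "2001-06"),
    (39.0, "2008-13"), (39.2, "2001-06"), (39.5, "2008-13"),
    (39.9, "2008-13"), (40.2, "2008-13"), (40.6, "2008-13"),
]

def era_counts(times, n):
    """Count how many of the fastest n times come from each era."""
    fastest = sorted(times)[:n]
    return Counter(era for _, era in fastest)

print(era_counts(times, 5))  # in this sample the early-00s era
                             # dominates the sharp end of the ranking
```

Running the same count at several cut-offs (top 50, 100, 200, as in the post) shows whether one era's times cluster at the fast end or spread through the ranking.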
 
Fearless Greg Lemond said:
Of course, estimates aren't the Holy Grail.

That leaves us with what? Just the stopwatch? A stopwatch doesn't lie.
Which is precisely one of the points I was making when I wrote this:
Alex Simmons/RST said:
ii. Now we really don't need power estimates when comparing the same climb over long periods, as the climb times alone are sufficient for the purpose (and come with a better level of precision). Then all you do is look at the trends and note variations from year to year of the group averages (which might be to do with race context, environmental conditions, level of doping impact etc) as well as relative performance of individuals in any given year.


Fearless Greg Lemond said:
Here is a stopwatch:

...snip...

Of course Hesjedal had a tailwind there...
And then we are back to the issue of drawing a firm conclusion based on a single rider and climb data point.

Please try to divorce the notion of what you believe the data to tell you, from what it actually tells you.

It actually tells us he went fast that day. What it doesn't conclusively tell us is why. There are no doubt many possible and plausible reasons for that (including doping), none of which can be established conclusively in isolation.

Here, I found where Tucker makes this point:
So what we need to avoid is what I last year termed “Performance pixellation”, where you look so closely at a single performance, that one ‘pixel’, and then decide what the picture is. Taking a single climb, or even a single rider, and making sweeping judgments on the plausibility of performances goes BEYOND what this method and concept will allow.


But at the end of the day, does any of this lead to a doping sanction? Does it mean we now know this rider should be targeted when we didn't before?
I very much doubt it.

Which is my primary criticism - that it isn't actually helping to improve the anti-doping effort.

For sure, analysing climbs and coming up with estimations and methodologies and having a pub chat about it is good fun, but that's about all it is.
 
