The Powermeter Thread

Would that explain why in most people's world a Corvette is faster than a Ford Pinto, while in other people's world it is not? Because there can never be improvements?
 
FrankDay said:
Part II of your webinar is about "problems with the current model," so I can read between the lines, and I think it says that you know there are some substantial deficiencies with the current model used by TrainingPeaks

Sorry, but you are simply wrong. I'm going to be talking about the limitations of various models of the exercise intensity-duration relationship that have been described in the scientific literature, not anything that has been implemented (or likely will ever be implemented) by TrainingPeaks.
 
acoggan said:
Sorry, but you are simply wrong. I'm going to be talking about the limitations of various models of the exercise intensity-duration relationship that have been described in the scientific literature, not anything that has been implemented (or likely will ever be implemented) by TrainingPeaks.
So, the model currently used by TrainingPeaks has not been published in or validated by the scientific literature?

You were the one who kept saying during the first episode that "all models are wrong" and, of course, you were correct. Why can't you bring yourself to say that there are issues with the old model used by TrainingPeaks and there will be issues (hopefully fewer issues) with the new model? Or is this webinar all about salesmanship and not about science, as advertised?
 
FrankDay said:
So, the model currently used by TrainingPeaks has not been published in or validated by the scientific literature?

You were the one who kept saying during the first episode that "all models are wrong" and, of course, you were correct. Why can't you bring yourself to say that there are issues with the old model used by TrainingPeaks and there will be issues (hopefully fewer issues) with the new model? Or is this webinar all about salesmanship and not about science, as advertised?

Frank, do you have a scientific bone in your body? Scientific inquisitiveness and research are all about moving forward and improving on what we know based on the results of scientific studies. Older models become outdated because of new research findings. That's called progress. It does not make the older models wrong; it just makes them older.
 
Alex Simmons/RST said:
Frank, you're the sort of person who adds two and two and gets i


Simple example of what Andy was talking about (i.e. the principles of modelling):

Are the Newtonian laws of motion and gravity a really crappy model?

The answer of course is no, except when they are. Then Einstein came along and introduced the special and general theories of Relativity, which expanded the range of usefulness of the classical Newtonian models.

IOW, just because one approach is not perfect (and was never touted as such) does not mean better and more refined approaches cannot be explored and tested to see if their domain is more broadly applicable.

Good coaches have been doing this intuitively for a long time, but a means to reliably quantify such things is most useful.

But in your world you instead present the false dichotomy, which is an intellectually lazy and disingenuous form of argument, and people see right through your fallacious logic.
You guys don't seem to understand what a model is and what it is supposed to do.

Newton did not propose a model; Newton proposed laws that dictated the physical world. It turned out that his laws were deficient at the extremes, such that they now constitute a "model," but it is so good that they are still referred to as "Newton's laws" because they work flawlessly for 99.99% of all human endeavors.

A model is only a simpler way to describe complex systems, to make it possible to work with those systems practically. The only reason to have a model is to allow one to predict outcomes. There are models to predict the weather, global warming, etc. As new knowledge is gathered, models are adjusted such that competing models should, hopefully, converge, with the result actually being pretty good for most situations.

A model has no purpose unless it has a practical use. Most models are useful in helping scientists better understand complex systems. They hardly ever have a use for the individual.

So, what is the practical use of the TrainingPeaks model (or Golden Cheetah's, or any cycling power model)? I would submit the purpose is to provide information that allows athletes to better direct their training and to perform better when racing. But how does it do that, and how well does it do that?

There are several issues that I see in the old models, some of which were alluded to in Dr. Coggan's pt 1 lecture.

1. The model doesn't take into account the position the athlete is in when gathering data, or the environmental conditions when gathering data. The position the athlete is in affects what the athlete can do; how hot or cold it is affects the athlete's power profile. But it is all lumped together. How is this to be interpreted in predicting what the athlete should be doing in training or in an upcoming race?

2. The model doesn't take into account the individual make-up of the athlete. Does the athlete have mostly fast-twitch fibers, mostly slow-twitch fibers, or some combination? It is a one-size-fits-all model. It seems his next iteration will attempt to address this problem.

3. While the model might be (or become) wonderful for predicting what or where an athlete should be from a power perspective, power is only a small part of the complex racing equation. There is zero validation that having this information makes any difference to the athlete in a race.

But let's presume the model is perfect. Then the next question is how useful it is. If we presume the entire reason for the individual to use this model is to help the athlete train and race better, what is the evidence that it does this? In other words, is it better than other methods of evaluating effort in training and effort/pacing in racing? Again, there is no evidence that the model provides such an advantage to the individual, on average.

I understand the draw of a power meter. I understand the enthusiasm for a power meter. But, I suspect, most users got the device with the hope it would transform and improve their training and racing. (There is an entire book written to help them do this.) I am all for technological advances being used to improve athletes, but overload training is overload training and easy days are easy days, whether one measures them or not. I hope that this technology can actually, someday, do (or be shown to do) what people hope it will do. But, right now, there is simply zero evidence that it does what most hope it will do.
 
FrankDay said:
But, right now, there is simply zero evidence that it does what most hope it will do.

Frank, stop trolling. Stop persisting with this nonsense. Who do you think you are fooling?

It's a measurement device. What measurement device does improve performance? Does a stopwatch improve performance? Does an HR monitor improve performance? Do scales make one lose weight better? Does a skinfold test make one lose body fat faster? Does a speedometer make a Ferrari faster than a Lada?

People, keep reporting Frank!
 
FrankDay said:
So, the model currently used by TrainingPeaks has not been published in or validated by the scientific literature?

Both Bannister's original impulse-response model and my more practical Performance Manager variation have been successfully used in published research studies.

FrankDay said:
Why can't you bring yourself to say that there are issues with the old model used by TrainingPeaks and there will be issues (hopefully fewer issues) with the new model?

??

Of course there are limitations to both the original impulse-response model and to my variation thereof...that's why I've been seeking a better alternative for the last ~10 y.
 
FrankDay said:
You guys don't seem to understand what a model is and what it is supposed to do.

Newton did not propose a model; Newton proposed laws that dictated the physical world.

<<shakes head in wonderment and disbelief>>
 
elapid said:
Frank, do you have a scientific bone in your body? Scientific inquisitiveness and research are all about moving forward and improving on what we know based on the results of scientific studies. Older models become outdated because of new research findings. That's called progress. It does not make the older models wrong; it just makes them older.
Take up your complaints with Dr. Coggan. He is the one who repeated over and over that all models are wrong. The old ones, the new ones, all of them. The only question is how wrong and when.

But, beyond this, the next question is what the model is used for. What benefit comes from having it? Is it an intellectual curiosity or does it serve a useful purpose? This is the question that you folks are ignoring.

Fergie keeps pointing out that a PM is just a measuring device and that no performance improvement can come from using it.
It's a measurement device. What measurement device does improve performance?
Then why use it?
 
acoggan said:
Of course there are limitations to both the original impulse-response model and to my variation thereof...that's why I've been seeking a better alternative for the last ~10 y.
Cool. It isn't obvious to the masses that this is the case. I look forward to listening to your Part II, where you go into the issues that you see in current systems (including your own) and then how you hope to reduce (it is impossible to eliminate) them in the subsequent model.

The problem I see for you is that making the model simple introduces a lot of errors. The only way of correcting these is to introduce more complexity into the model, making it less user-friendly. I look forward to seeing what you have done.
 
acoggan said:
Both Bannister's original impulse-response model and my more practical Performance Manager variation have been successfully used in published research studies.
The issue, from my perspective, is not whether the "impulse-response" model has been validated but whether there is an advantage to using one implementation of the theory over another. It is generally assumed by the masses (at least those who own PMs) that your implementation is superior, yet I can't find any support for that. Is there any?
 
FrankDay said:
The issue, from my perspective, is not whether the "impulse-response" model has been validated but whether there is an advantage to using one implementation of the theory over another. It is generally assumed by the masses (at least those who own PMs) that your implementation is superior, yet I can't find any support for that. Is there any?

More trolling. The Perfection Fallacy: something is not perfect, therefore it must be bad.
 
FrankDay said:
Cool. It isn't obvious to the masses that this is the case. I look forward to listening to your Part II, where you go into the issues that you see in current systems (including your own) and then how you hope to reduce (it is impossible to eliminate) them in the subsequent model.

The problem I see for you is that making the model simple introduces a lot of errors. The only way of correcting these is to introduce more complexity into the model, making it less user-friendly. I look forward to seeing what you have done.

Again, you're confused: I'm not going to be talking about any models attempting to quantitatively relate training to performance. What I am going to be talking about are models of the power-duration relationship.
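
For readers unfamiliar with the term, the best-known published power-duration model is the two-parameter critical power model. The sketch below is a minimal illustration of that idea only; the function names and numbers are invented for the example and are not taken from the webinar or from any TrainingPeaks product.

```python
# Minimal sketch of the classic two-parameter critical-power (CP) model,
# one of the published power-duration models. P(t) = CP + W'/t, where CP is
# the asymptotic sustainable power (watts) and W' is a fixed amount of work
# (joules) available above CP. All numbers below are illustrative only.

def predicted_power(t_seconds: float, cp: float, w_prime: float) -> float:
    """Power (W) the model says is sustainable for a duration of t_seconds."""
    return cp + w_prime / t_seconds

def time_to_exhaustion(power: float, cp: float, w_prime: float) -> float:
    """Predicted time (s) to exhaustion at a constant power above CP."""
    if power <= cp:
        raise ValueError("Model predicts power at or below CP is sustainable indefinitely")
    return w_prime / (power - cp)

if __name__ == "__main__":
    cp, w_prime = 280.0, 20_000.0   # hypothetical rider: 280 W, 20 kJ
    for t in (180, 300, 1200, 3600):
        print(f"{t:>5} s -> {predicted_power(t, cp, w_prime):6.1f} W")
```

This model's well-known tendency to misbehave at very short and very long durations is presumably the kind of limitation a talk on power-duration models would address.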
 
FrankDay said:
The issue, from my perspective, is not whether the "impulse-response" model has been validated but whether there is an advantage to using one implementation of the theory over another. It is generally assumed by the masses (at least those who own PMs) that your implementation is superior, yet I can't find any support for that. Is there any?

My variation on Bannister's impulse-response model is superior only in the sense that it can be readily implemented in the real world, whereas the original model cannot. This is because the amount of data required to solve the original model (w/ four adjustable parameters) with adequate precision is far more than is typically available outside of a research study. IOW, the choices are:

1) ignore all the published studies;

2) use my approach; or

3) use Bannister's original model and pretend that the results are more trustworthy than they really are.
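
For context, here is a rough sketch of the contrast being described: the original impulse-response model has four adjustable parameters that must be fitted to an athlete's performance data, while the Performance Manager variant fixes its time constants so it can run on ordinary ride data. The parameter values, defaults, and function names below are illustrative assumptions, not TrainingPeaks code.

```python
# Illustrative sketch contrasting Bannister's impulse-response model with the
# Performance Manager approach. Parameter values are placeholders.
import math

def banister_performance(tss, k1, k2, tau1, tau2, p0=0.0):
    """Original impulse-response model: four adjustable parameters (k1, k2,
    tau1, tau2) that must be fitted to each athlete's performance data."""
    out = []
    for t in range(len(tss)):
        fitness = sum(tss[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(tss[s] * math.exp(-(t - s) / tau2) for s in range(t))
        out.append(p0 + k1 * fitness - k2 * fatigue)
    return out

def performance_manager(tss, ctl_tc=42.0, atl_tc=7.0):
    """Performance Manager variant: fixed time constants, nothing to fit,
    so it can be run on whatever daily training-load data an athlete has."""
    ctl = atl = 0.0
    chart = []
    for daily in tss:
        ctl += (daily - ctl) / ctl_tc   # chronic training load ("fitness")
        atl += (daily - atl) / atl_tc   # acute training load ("fatigue")
        chart.append((ctl, atl, ctl - atl))  # TSB = CTL - ATL
    return chart
```

The 42-day and 7-day constants are the commonly cited defaults; because nothing is fitted, the practicality trade-off described above is exactly what the sketch shows.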
 
FrankDay said:
Take up your complaints with Dr. Coggan. Fergie keeps pointing out that a PM is just a measuring device and that no performance improvement can come from using it. Then why use it?

See post #708. You do not compute.

My comments are not directed at Dr. Coggan; they are directed at you. You are the one who does not seem able to accept that scientific research results in improvements which change or modify what we do today, but that these changes do not mean what we did yesterday was wrong.
 
FrankDay said:
Take up your complaints with Dr. Coggan. He is the one who repeated over and over that all models are wrong. The old ones, the new ones, all of them. The only question is how wrong and when.

But, beyond this, the next question is what the model is used for. What benefit comes from having it? Is it an intellectual curiosity or does it serve a useful purpose?

Well, at least you seem to have learned something from watching the webinar...

(BTW, in case you missed it, the quote "all models are wrong, but some are useful" comes from George Box: http://en.wikipedia.org/wiki/George_E._P._Box.)
 
FrankDay said:
Newton did not propose a model; Newton proposed laws that dictated the physical world. It turned out that his laws were deficient at the extremes, such that they now constitute a "model," but it is so good that they are still referred to as "Newton's laws" because they work flawlessly for 99.99% of all human endeavors.

Frank, you might just be surprised.

e.g., anyone who uses a GPS computer like a Garmin on their bicycle handlebars. Newton's laws fall over quite quickly in this scenario, and it requires the application of both of Einstein's theories of relativity to provide reliable and accurate positional data.
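
The GPS point can be made concrete with a back-of-the-envelope calculation using standard textbook values; the figures below are generic to GPS, not specific to any Garmin unit.

```python
# Back-of-the-envelope check of why GPS needs both relativity theories.
# Standard textbook values; nothing here is specific to any receiver.
C = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
R_SAT = 2.657e7      # GPS orbital radius, m
V_SAT = 3.874e3      # GPS orbital speed, m/s
DAY = 86400.0        # seconds per day

# Special relativity: orbital speed makes the satellite clock run slow.
sr_per_day = -(V_SAT**2 / (2 * C**2)) * DAY * 1e6                  # ~ -7 us/day

# General relativity: weaker gravity aloft makes it run fast.
gr_per_day = (GM / C**2) * (1 / R_EARTH - 1 / R_SAT) * DAY * 1e6   # ~ +46 us/day

net_us = sr_per_day + gr_per_day
print(f"net clock drift ~ {net_us:+.0f} us/day")
print(f"uncorrected ranging error ~ {net_us * 1e-6 * C / 1000:.0f} km/day")
```

Left uncorrected, a drift of roughly 38 microseconds per day corresponds to kilometres of ranging error, which is why the satellite clocks are deliberately adjusted to compensate.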
 
I'm leaving this convo up rather than spending hours editing out the rubbish or deleting the entire lot... as there is a lot of debunking instead.

Yes, as Coach Fergie said, please use the report feature in the future rather than feed the trolls.

Cheers
Bison
 
Alex Simmons/RST said:
Frank, you might just be surprised.

e.g., anyone who uses a GPS computer like a Garmin on their bicycle handlebars. Newton's laws fall over quite quickly in this scenario, and it requires the application of both of Einstein's theories of relativity to provide reliable and accurate positional data.
Yes, relativity does come into play in some human endeavors. However, up until about 50 years ago relativity was an intellectual curiosity and Newton's laws ran the world. Soon we will see quantum theory involved in some activities. But for the purposes of describing what is necessary to make a bike go any particular speed, Newton's laws are all that are necessary. F = ma. No modeling required.
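
For what it's worth, the Newtonian force balance usually used to relate power to bike speed is simple enough to write down. The sketch below uses illustrative coefficient values (mass, CdA, Crr) that are assumptions for the example, not measurements from anyone in this thread.

```python
# The steady-state Newtonian force balance commonly used to relate power to
# bike speed. Coefficient values are illustrative placeholders.
RHO = 1.225     # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def power_required(v, mass=80.0, cda=0.32, crr=0.004, grade=0.0):
    """Power (W) to hold ground speed v (m/s) in still air on a given grade."""
    f_aero = 0.5 * RHO * cda * v**2        # aerodynamic drag
    f_roll = crr * mass * G                # rolling resistance
    f_grav = mass * G * grade              # slope (rise/run), small-angle approx.
    return (f_aero + f_roll + f_grav) * v

print(f"{power_required(11.1):.0f} W to hold about 40 km/h on the flat")
```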
 
acoggan said:
My variation on Bannister's impulse-response model is superior only in the sense that it can be readily implemented in the real world, whereas the original model cannot. This is because the amount of data required to solve the original model (w/ four adjustable parameters) with adequate precision is far more than is typically available outside of a research study. IOW, the choices are:

1) ignore all the published studies;

2) use my approach; or

3) use Bannister's original model and pretend that the results are more trustworthy than they really are.
There is no doubt that a power meter can be used to effectively implement all kinds of training protocols, including the impulse-response approach you prefer. However, runners have been using this type of training effectively for years without the aid of a power meter, as have many cyclists. Therefore, what I am asking is this: is there any evidence that using a power meter to guide this kind of training is more effective than using the other kinds of feedback runners and cyclists have long relied on? It isn't a question of whether it works but whether one method is superior to another.

That power can be used to gauge the effectiveness of a training program is not a particularly good reason that power need be used as an integral part of that program, unless there is some evidence that doing so is superior.
 
acoggan said:
Again, you're confused: I'm not going to be talking about any models attempting to quantitatively relate training to performance. What I am going to be talking about are models of the power-duration relationship.
Hmmm. It was my understanding that the reason to develop a good model of the power-duration relationship was so the athlete might be able to optimize power for a duration they have no direct experience with. How can the Ironman athlete know what power to sustain for a 5-6 hour bike leg when their longest training ride is 2-3 hours? How can a rider know what power they should ride for a 24-hour race or the Race Across America? It seems the model would be useful, if it were reliable and accurate, in helping riders optimize performance.

Yet you say you aren't going to be talking about relating training to performance. I guess the question then is: how do you expect the average user of your model to use it? What do you expect them to get from it?
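
To make the question concrete, here is roughly how a fitted power-duration model could be used to suggest a target for a duration the athlete has never tested. It reuses the hypothetical two-parameter critical power model sketched earlier; the test values are invented, and extrapolating this far beyond the fitted range is exactly where such simple models are least trustworthy.

```python
# Rough illustration of using a power-duration model to pick a pacing target
# for an untested duration. Reuses the hypothetical two-parameter CP model
# from the earlier sketch; the test efforts below are made-up numbers, and a
# 5.5 h extrapolation lies far outside the range the model was fitted on.

def fit_cp(test1, test2):
    """Fit CP and W' from two maximal efforts (duration s, mean power W),
    using the linear work-time form: work = CP * t + W'."""
    (t1, p1), (t2, p2) = test1, test2
    cp = (p2 * t2 - p1 * t1) / (t2 - t1)
    w_prime = p1 * t1 - cp * t1
    return cp, w_prime

cp, w_prime = fit_cp((180, 390.0), (1200, 300.0))   # 3 min and 20 min tests
ironman_target = cp + w_prime / (5.5 * 3600)        # naive 5.5 h extrapolation
print(f"CP ~ {cp:.0f} W, W' ~ {w_prime/1000:.0f} kJ, "
      f"naive 5.5 h figure ~ {ironman_target:.0f} W")
```

Note that for long durations the prediction collapses to essentially CP itself, which no athlete can hold for 5.5 hours, so the sketch also illustrates the limitation being argued about in this thread.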