Alex Simmons/RST said:The data matches Joaquin's. 'nuff said.
That bad aye!!!
Alex Simmons/RST said:The data matches Joaquin's. 'nuff said.
CoachFergie said:It's so cool having these tools so readily available to disprove Frank's and your bogus claims.
Why don't you contact him and get it, then post it for all of us? I accept that Dr. Cheung knows how to design a study and do the statistics such that the summary (abstract) reflects the reality.
CoachFergie said:You would think that Frank would have got the study from Dr Stephen Cheung, who was an author of the study and who is actively reviewing cycling products and performing exercise physiology research.
Seriously, you do not know what aerobic and anaerobic training, split 80/20, is?
JamesCun said:What is aerobic/anaerobic training?
Indeed, yet you would rather stick with your bias than explore such a possibility for yourself.
A change of 15% VO2 in a 6-week period is huge.
Perhaps. That is why no single study is perfect and why results need to be repeated, both to confirm them and to check for alternative explanations. However, we expect this to be confirmed, since individuals who have tested themselves have reported similar increases (over a longer period of time, though) (see edit below).
My first assumption would be measurement flaws or poor initial fitness of the subjects. Maybe they did the study after a huge training block and everyone was depressed, and 6 weeks of recovery/training brought them back to normal levels.
You are free to contact the authors to see what you can get. Cheung is well known and available.
The fact that you can't get access to the data is very telling. What legitimate researcher disappears and refuses to pass on data??
The study abstract was made public at a national meeting and is available to anyone to use. I accept the findings as being true until someone shows they are not. Why wouldn't I use it, especially when it backs up what our customers report (although their results came a little faster than what our typical customer reports, but I think they were more aggressive in their early training than our typical customer)? What can I say? I defend it because there is no evidence there is anything wrong with it. The fact that you and others do not like the design (or the results) is not evidence there is anything wrong with what they did. And the burden is on you, if you don't like the study results, to show they are wrong. I, for one, believe them until I am shown otherwise, as they were accepted by the CSEP for oral presentation at their annual meeting. The fact that I have been unable to get hold of the entire study, despite trying, does not invalidate the study or the abstract/results published.
JamesCun said:Frank, the fact that you are willing to use, promote and defend a study that has no available info or data is surprising. Seems you had these exact same discussions in 2007, with the same outcomes. Also strange that you feel the burden is on everyone else to back up your claims and track down the info. If I was promoting a product, I would hunt down the studies that supported it, not badger others to do that for me. Tells me that the real data isn't so supportive of PC usage...
JamesCun said:Frank, the fact that you are willing to use, promote and defend a study that has no available info or data is surprising.
elapid said:Not surprising at all. Frank is all marketing and no science. He wouldn't know science if it hit him over the back of the head with a baseball bat. Funny how Frank uses the same argument (bad study, poor design, etc) when the study results don't agree with his personal marketing strategies.
LOL. It seems to me a particularly bad study design to test the usefulness of the cranks by using them part-time for 5-6 weeks when the manufacturer (me) says the typical new user doesn't start to see any real benefit until 6 weeks of exclusive use. You can argue that "I don't know science" if you want to, but I do know my product, and any such design is, most likely, only useful in proving our observation: you want benefits, you need to use them a lot and for a pretty long time. I object to people claiming those studies disprove our claim of a 40% power improvement after 6-9 months of exclusive use.
elapid said:Not surprising at all. Frank is all marketing and no science. He wouldn't know science if it hit him over the back of the head with a baseball bat. Funny how Frank uses the same argument (bad study, poor design, etc) when the study results don't agree with his personal marketing strategies.
BikeGrip said:I have no idea whether these things "improve performance" or not.
However, for all the "scientists" out there, exactly how would you put a test together to determine whether they do "improve performance" or not?
Seriously, you have 100 riders use them? Then they either get better or they don't? Did they even try? Did they try to underperform on the first test to help improve their numbers later? Are you going to get race data from 100 pros so you know it isn't being gamed by the testers? (Yes, people change their performance when they are being tested -- as I am sure you "scientists" know.)
So, that is my question: What test (in detail) would satisfy the question of whether a particular training device actually improved performance? By "detail" I mean, sample sizes, study duration, control groups, controlled data (weather, fatigue, weight, caffeine, etc) -- you know all the stuff you "scientists" gather.
Also, I want to know across all categories of riders -- so be sure to include details about how many Cat I, II, III, recreational, overweight, and obese individuals you have in each group. After all, a training device that is proven to work only with obese individuals is hardly of use for Cat I riders. Likewise, a device that works for Cat I riders may be of limited to no benefit to recreational riders.
There are lots of ways to run a study and to control for effort, and it depends upon what you are trying to do. Let's take PowerCranks as an example. If you are simply trying to show that they improve performance relative to an equal amount of training not using them, then you randomize your subjects to either use them or not. I have also considered that it might be useful to have a "placebo" group, in which you put the big heavy PowerCranks on the bike but in dual mode and tell that group that these heavy cranks are designed to help the rider pull up on the back stroke. Then you would have three groups. You could also have a partial-use group, and then you have four groups (this would let you test whether partial use can be as effective, assuming they work, as exclusive use). The more groups you have the more subjects you need, especially if the change one sees is small, and getting subjects is one of the more difficult aspects of a study.
Then one can do the pre-testing. One good control regarding effort is to compare HR to the rider's report of perceived effort. In general, HR correlates pretty well in any given person with their relative effort and their perception of effort, such that if a rider's max HR is 160 in pre-testing and 180 in post-testing, we can presume they were dogging it on the first test. But if it is 168 on the first test and 169 on the second, we can presume the efforts were similar. Then you can have each individual keep a training log, trusting them to be honest regarding their usage (most people are). Or you can have them do all of their training in the lab on the lab bike, where time and intensity can be measured, trusting them not to do anything outside of the lab. Then you do your post-testing after they have completed the test intervention.
Once all the data is gathered, one does the statistical analysis and then tries to interpret what it all means. For instance, if one sees a trend but the data doesn't reach statistical significance, one might conclude that even though this data doesn't demonstrate a difference, another study that either lasted longer or had more people might demonstrate a difference. This helps the next researcher design a more powerful study that might uncover and demonstrate that there is, indeed, a difference.
BikeGrip said:I have no idea whether these things "improve performance" or not. However, for all the "scientists" out there, exactly how would you put a test together to determine whether they do "improve performance" or not?
Seriously, you have 100 riders use them? Then they either get better or they don't? Did they even try? Did they try to underperform on the first test to help improve their numbers later? Are you going to get race data from 100 pros so you know it isn't being gamed by the testers? (Yes, people change their performance when they are being tested -- as I am sure you "scientists" know.)
So, that is my question: What test (in detail) would satisfy the question of whether a particular training device actually improved performance? By "detail" I mean, sample sizes, study duration, control groups, controlled data (weather, fatigue, weight, caffeine, etc) -- you know all the stuff you "scientists" gather. Also, I want to know across all categories of riders -- so be sure to include details about how many Cat I, II, III, recreational, overweight, and obese individuals you have in each group. After all, a training device that is proven to work only with obese individuals is hardly of use for Cat I riders. Likewise, a device that works for Cat I riders may be of limited to no benefit to recreational riders.
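To make the "more people or a longer study" point in the post above concrete, here is a minimal sketch, in Python, of the standard normal-approximation sample-size formula for comparing two groups. The numbers used (a 20 W difference in power improvement between groups, a 30 W standard deviation in the change scores) are purely illustrative assumptions, not figures from any PowerCranks study.

```python
import math

def n_per_group(delta, sd, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group to detect a mean difference
    `delta` between two groups with common standard deviation `sd`,
    using n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2.
    z_alpha = 1.96 corresponds to two-sided alpha = 0.05,
    z_beta = 0.84 corresponds to 80% power."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Hypothetical illustration: hoping to detect a 20 W difference in FTP
# improvement, with rider-to-rider variability (SD) of 30 W.
print(n_per_group(delta=20, sd=30))   # -> 36 riders per group
print(n_per_group(delta=10, sd=30))   # -> 142 per group: smaller effects need many more riders
```

This is just the textbook two-group calculation; it illustrates why adding a placebo or partial-use arm, or chasing a small effect, quickly multiplies the number of subjects a study needs.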
BikeGrip said:So, that is my question: What test (in detail) would satisfy the question of whether a particular training device actually improved performance?
The typical cyclist/triathlete can increase power on the bicycle 40% in about 6 months (that is a 7 minute improvement for a 60 minute TT effort!)
Cyclists: most increase cycling speed about 2-3 mph (that is about 40% in increased cycling power) in less than one season.
Our typical cyclist-triathlete customer is reporting speed improvements of about 2-3 mph after about 6-9 months of serious PowerCranks use.
Cyclists gain more power and speed compared to the same amount of training time on traditional cranks. (Cyclists typically see a 2-3 mph speed improvement in 6-9 months.)
If you are an average athlete (elites take longer) you can be ... cycling faster in 6 weeks if you use them as we recommend. Continue and see ... 2-3 mph cycling improvement in 9 months on average.
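As a rough sanity check on the arithmetic behind the first claim quoted above (a 40% power increase being worth about 7 minutes on a 60-minute TT): if aerodynamic drag dominates, power scales roughly with the cube of speed, so speed scales with the cube root of power. A minimal sketch of that calculation, checking only the arithmetic and not the claim itself:

```python
# Rough check: if P ~ v^3 (flat TT, aero drag dominant), then v ~ P^(1/3).
power_gain = 1.40                       # the claimed 40% power increase
speed_gain = power_gain ** (1 / 3)      # ~1.12, i.e. about 12% faster
old_time_min = 60.0
new_time_min = old_time_min / speed_gain
print(f"new time ~ {new_time_min:.1f} min, saving ~ {old_time_min - new_time_min:.1f} min")
# -> new time ~ 53.6 min, saving ~ 6.4 min. Including rolling resistance the
#    effective exponent is a bit below 3, which nudges the saving toward 7 minutes.
```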
I think it is clear that scientific studies involving many variables and requiring substantial time are extremely difficult to design and complete. Those who demand them are doing so knowing that it is almost impossible to do. Knowing that such a study is unlikely to appear, they can feel smug in their accusations that the lack of a study proves our claims have no basis, while choosing to ignore the scientific and anecdotal evidence that does exist and supports the device. I presume further studies will continue to happen, and maybe a study like Dixon will happen with a "proper" control group (one these guys understand) with convincing results, although I don't think anything can change their minds. We will, I guess, continue to be stuck selling to those willing to take a "leap of faith" to see if they do anything for "you". Maybe the new devices that measure technique will convince these folks that technique really does matter.
BikeGrip said:Coach, Alex, et al. Thanks for the input and good points... For Coach, I had looked at some of the studies you posted originally (there are quite a few). Essentially, some of these so-called "scientific" studies seem a little pointless to me.
For instance, the "Training With Independent Cranks Alters Muscle Coordination Pattern in Cyclists Fernández-Peña, Eneko1,2; Lucertini, Francesco1; Ditroilo, Massimiliano1,2" study.
Ok, the conclusion...." The results provide scientific support for muscle coordination pattern alteration from the use of IC, potentially achieving a more effective pedaling action."
Hmm.. Sounds good. But the study looked at 60 second periods at 30-50% of max power. I don't really see the functional use for this. First, it would depend on how "max power" was obtained. Perhaps this "maximal pedaling test" is a standardized test or the exact details were given in the full article, but even if it were, the 60 second time period is also highly suspect. So, I pedal better for 60 seconds, then what? I am not that fast to finish my TT in 60 seconds -- but hey, I ride a Giant, so I am slow.
I could go through more, but it seems we agree on the conclusion ... "Nothing to see here" -- my issue is that it just seems that constructing any test that would be worth seeing would be extremely costly. Given that there is no patent protection for this rather simple device that has been around for decades, no company has any incentive to invest the type of money required to prove it. Even if a company did, then any other company could then come along and market the "scientifically proven" IC system without having to recover any costs from an expensive trial.
This actually leads to an interesting economic result... companies with products that actually work (but are not subject to patents) have an incentive NOT to publish scientific evidence that conclusively proves their claims. As Alex points out, if it actually worked, then everyone would use it. While true, it would also be true that everyone would sell them -- because the market is the entire biking community. Instead, it would be far more profitable for a smaller company to make vague, unsubstantiated claims or reference vague and inconclusive "studies" and then rely on its marketing and distribution channels to generate profits (which is how 99.9% of sports/fitness companies make their money). As soon as it is established that the device is universally a good thing, sales and marketing become irrelevant and manufacturing and economies of scale dominate -- which would favor large established companies.
It would also seem that for independent cranks, a "placebo" control group would be impossible -- I don't think Frank's idea would fool too many people. As these tests generally run fairly small sample sizes (10-20), it would seem that one or two fanatics (on either side) could skew the results.
Obviously, as Alex points out, anyone can make unsubstantiated claims and if someone is promising me a "40% power increase," I offer them this bridge in Brooklyn in exchange for the cranks. I certainly am not saying these claims are remotely true -- just that I understand why no substantiated claims exist. (and it is not necessarily that the cranks simply don't do anything).
FrankDay said:I think it is clear that scientific studies involving many variables and requiring substantial time are extremely difficult to design and complete. Those who demand them are doing so knowing that it is almost impossible to do. Knowing that such a study is unlikely to appear, they can feel smug in their accusations that the lack of a study proves our claims have no basis, while choosing to ignore the scientific and anecdotal evidence that does exist and supports the device. I presume further studies will continue to happen, and maybe a study like Dixon will happen with a "proper" control group (one these guys understand) with convincing results, although I don't think anything can change their minds. We will, I guess, continue to be stuck selling to those willing to take a "leap of faith" to see if they do anything for "you". Maybe the new devices that measure technique will convince these folks that technique really does matter.
Dr. Larry Creswell, a triathlete and heart surgeon at the University of Mississippi School of Medicine, whose Athlete’s Heart blog discusses cardiac health for athletes, pointed out that conference presentations, unlike medical journal articles, haven’t yet gone through peer review. “Essentially, if you’re invited to speak at a meeting you can say what you want – whether it’s scientifically correct or not,” he wrote when the findings were first presented. Others questioned the statistical methods used to analyze the study’s data.
BikeGrip said:For instance, the "Training With Independent Cranks Alters Muscle Coordination Pattern in Cyclists Fernández-Peña, Eneko1,2; Lucertini, Francesco1; Ditroilo, Massimiliano1,2" study.
Ok, the conclusion...." The results provide scientific support for muscle coordination pattern alteration from the use of IC, potentially achieving a more effective pedaling action."
Hmm.. Sounds good. But the study looked at 60 second periods at 30-50% of max power. I don't really see the functional use for this. First, it would depend on how "max power" was obtained. Perhaps this "maximal pedaling test" is a standardized test or the exact details were given in the full article, but even if it were, the 60 second time period is also highly suspect. So, I pedal better for 60 seconds, then what? I am not that fast to finish my TT in 60 seconds -- but hey, I ride a Giant, so I am slow.
I could go through more, but it seems we agree on the conclusion ... "Nothing to see here" -- my issue is that it just seems that constructing any test that would be worth seeing would be extremely costly.
Given that there is no patent protection for this rather simple device that has been around for decades, no company has any incentive to invest the type of money required to prove it. Even if a company did, then any other company could then come along and market the "scientifically proven" IC system without having to recover any costs from an expensive trial.
This actually leads to an interesting economic result... companies with products that actually work (but are not subject to patents) have an incentive NOT to publish scientific evidence that conclusively proves their claims. As Alex points out, if it actually worked, then everyone would use it. While true, it would also be true that everyone would sell them -- because the market is the entire biking community.
Instead, it would be far more profitable for a smaller company to make vague, unsubstantiated claims or reference vague and inconclusive "studies" and then rely on its marketing and distribution channels to generate profits (which is how 99.9% of sports/fitness companies make their money). As soon as it is established that the device is universally a good thing, sales and marketing become irrelevant and manufacturing and economies of scale dominate -- which would favor large established companies.
It would also seem that for independent cranks, a "placebo" control group would be impossible -- I don't think Frank's idea would fool too many people. As these tests generally run fairly small sample sizes (10-20), it would seem that one or two fanatics (on either side) could skew the results.
Obviously, as Alex points out, anyone can make unsubstantiated claims and if someone is promising me a "40% power increase," I offer them this bridge in Brooklyn in exchange for the cranks. I certainly am not saying these claims are remotely true -- just that I understand why no substantiated claims exist. (and it is not necessarily that the cranks simply don't do anything).
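BikeGrip's point about small samples is easy to illustrate: with 10-20 subjects, one or two big responders (or sandbaggers) move the group average a lot. A minimal sketch using made-up numbers, not data from any actual trial:

```python
import statistics

# Hypothetical change in FTP (watts) for 12 subjects in a treatment group
# where the true effect is zero: most riders just wobble by a few watts.
changes = [3, -2, 5, 0, -4, 2, 1, -3, 4, -1, 2, 0]
print(statistics.mean(changes))   # ~0.6 W: essentially no effect

# Swap in a single enthusiastic responder who trained far harder
# (or dogged the pre-test) and "gained" 60 W.
changes[0] = 60
print(statistics.mean(changes))   # ~5.3 W: one rider creates an apparent effect
```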
Alex Simmons/RST said:On the issue of studies presented at conferences vs actually published after peer review, here's what Dr Larry Creswell had to say on that, with respect to a different study where many of the original presentation conclusions were overturned once it went through a peer review process.
From this item.
FrankDay said:I am sure elapid, with his self-stated expertise on study design, will correct me where I am wrong and expand on what is necessary to help you out here.
That is one of the most nonsensical inferences ever, in view of the fact that current grand tour winners are using the product. How on earth do you expect a Cat 2 rider to suddenly become the equivalent of a grand tour winner by suddenly training with a device the grand tour winner is already using? Now, that is not to say that a Cat 2 rider couldn't improve enough in one season to become a pro, because it has happened. I think all you could reasonably say is that a 2-3 mph improvement is likely to get the user a lot closer to the grand tour riders, but not good enough to make the team or win the race.
Alex Simmons/RST said:Others have given thoughts on that, but consider for a moment the claims being made by Frank on his website for many years.
2005:
2008:
2011:
2012:
Today:
Since the 2-3 mph improvement claim is the most consistent one used, let's stick with that.
That would mean a regular Cat 2 could become a grand tour winning rider. It's just nonsense of epic proportions.
Yes, let's look at those numbers for a 70 kg rider and assume those are max efforts. First, notice that it takes a smaller percentage increase for the faster rider to see a 2-3 mph increase. But, of course, the slower rider is starting at a lot lower power, so improvement should come easier. To wit:
Or to put that into power terms for a rider + bike of 77 kg, average CdA in the drops and rolling resistance on flat road, no wind:
A 2W/kg rider would need to increase power by 31% - 49%.
A 3W/kg rider by 27% - 43%
A 4W/kg rider by 25% - 39%
A 5W/kg rider by 23% - 36%
Now those are impressive numbers. Frank has himself on many occasions claimed a 40% improvement in power, and we can see it on his own website at times.
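For anyone who wants to check the W/kg figures above, here is a minimal sketch of the usual flat-road power model (aero drag plus rolling resistance, no wind, drivetrain losses ignored). The exact CdA and Crr used for the quoted ranges aren't stated, so the values below (CdA 0.32 m², Crr 0.005, 70 kg rider plus 7 kg bike) are assumptions; they land close to, though not exactly on, the percentages listed.

```python
RHO = 1.225        # air density, kg/m^3
G = 9.81           # gravity, m/s^2
MPH_TO_MS = 0.44704

def power_needed(v, cda=0.32, crr=0.005, mass=77.0):
    """Watts required to hold speed v (m/s) on a flat road with no wind."""
    return (0.5 * RHO * cda * v ** 2 + crr * mass * G) * v

def speed_for_power(p, lo=1.0, hi=25.0):
    """Invert power_needed(v) by bisection over a plausible speed range."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if power_needed(mid) < p:
            lo = mid
        else:
            hi = mid
    return lo

rider_kg = 70.0
for wkg in (2, 3, 4, 5):
    p0 = wkg * rider_kg
    v0 = speed_for_power(p0)
    extra = [power_needed(v0 + d * MPH_TO_MS) / p0 - 1 for d in (2, 3)]
    print(f"{wkg} W/kg: +2 mph needs +{extra[0]:.0%}, +3 mph needs +{extra[1]:.0%}")
```

Under these assumed parameters the script gives roughly +30%/+47% at 2 W/kg down to roughly +23%/+35% at 5 W/kg, which matches the pattern above: the faster the rider, the smaller the percentage increase needed for the same mph gain, but it is still a very large jump in power.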
No one would claim that the three examples above were unfit or untrained to begin with, would they? And, all of them being experienced racers, it is unlikely any changed their aerodynamics substantially; at least none reported such a change, attributing all the change seen to the PowerCranks. Such improvements happen all the time.
And no one says such an improvement is not possible through training, better aerodynamics and other means of reducing resistance forces, especially if you are unfit or untrained to begin with.
While it has not been scientifically proven that such improvements are due to the use of the cranks, it is the assessment of those using the cranks that the cranks were the proximate cause of the big improvements they saw. None of these observers has anything to gain from these reports. Why do you, someone with zero experience with the product, find them so unbelievable, other than that the improvements are so large they must be lying?
The question however is whether such an improvement (any improvement) in speed and/or power is due to the cranks and would be over and above that attainable training on regular cranks.
So far that has not been demonstrated to any degree, let alone enabling a rider to add 2-3 mph to their speed.
Nope, will never happen. Some have managed to fulfill their dream of becoming a pro after getting on PowerCranks, but such changes in someone training 8-10 hours a week will never overtake the base and experience of a pro tour rider with years of training 20 hours a week. Especially when the pros are using them also. There is more to getting really, really good than simply slapping PCs on your bike.
If it had, we would be seeing trained Cat 4 power crankers winning Cat 1 races, trained Cat 2 power crankers riding Pro Tour riders off their wheels, and so on. It's just a load of nonsense.
Yep, and the evidence for most of our customers is what they experience for themselves. Beats anything they can read in a journal somewhere.
"Extraordinary claims require extraordinary evidence" - Marcello Truzzi
You do know that Computrainer offers a Performance Improvement Guarantee: improve 10% in 5 months or your money back? Do you know if that claim is backed by any credible evidence? They are the only other company that makes any kind of performance improvement claim that I know of, yet I am unaware of any scientific data that supports their statement. I suspect they are simply comfortable that they have enough internal data to support that guarantee. It is the same with us.
CoachFergie said:To sum up, the current state of play is that no claims made by the manufacturers of independent cranks have been supported with credible evidence.
FrankDay said:That is one of the most nonsensical inferences ever in view of the fact that current grand tour winners are using the product.
Really, is that what he said? Wonder then why they showed up on his bike at training camp? And, even if so, so many other TDF and World Champs have used and do use them that it really doesn't matter.
elapid said:If you are referring to Nibali, as you know he used it once for an hour and twittered that he never wanted to use it again. Glowing endorsement! As per usual, you twist everything for pure marketing BS.
