The Powercrank Thread

Page 30 - Cycling News Community
elapid said:
Frank, it is you who have no idea of a control group. The same group cannot serve as its own control.
Sure they can. How on earth can anyone do any climate research if one requires a control group? Where is the control planet? All a control group provides is an expectation of the null hypothesis outcome. If we have a historical basis for saying "the normal variation is X," then we can compare our current data to that variation. That is how climate scientists can say with certainty that we are in a warming period. The only controversy (and it isn't as controversial as it is loud) is the reason for the warming period.

A similar design was used in the Dixon study. It ain't rocket science, it is statistics, so it is tougher.
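The "compare against historical variation" idea being argued here amounts to a one-sample test against a known baseline mean. A minimal Python sketch, with invented numbers rather than data from any study discussed in this thread:

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, baseline_mean):
    """t statistic for testing whether a sample differs from a known
    historical mean (the 'normal variation is X' comparison)."""
    n = len(sample)
    return (mean(sample) - baseline_mean) / (stdev(sample) / math.sqrt(n))

# Hypothetical recent annual temperature anomalies (degrees C),
# tested against a historical baseline mean of 0.0.
recent = [0.54, 0.63, 0.58, 0.67, 0.61, 0.72, 0.66, 0.69]
t = one_sample_t(recent, 0.0)
# A |t| this large on n-1 = 7 degrees of freedom corresponds to a tiny
# p-value, so the "no change from baseline" null would be rejected.
```

The catch, as the rest of the thread argues, is that this only works when the historical baseline is itself well characterized; it tells you *that* the sample differs, not *why*.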
 
JayKosta said:
============================================
Yes, that 'was' my understanding prior to Frank's assertion that training with PowerCranks can provide 'more improvement' than is possible with regular cranks.

Also, I do understand your points about PMs being useful as a test / monitor / evaluation tool.

Jay Kosta
Endwell NY USA
If improvement were just about intensity, then one would suspect (as many do) that power meters would make a difference. Intensity, of course, is an element of improvement, but measuring it has never been shown to be more important than feeling it (perceived exertion). We believe the main benefit of training with PowerCranks is the training of a new, more efficient, more powerful technique.
 
FrankDay said:
Sure they can. How on earth can anyone do any climate research if one requires a control group? Where is the control planet? All a control group provides is an expectation of the null hypothesis outcome. If we have a historical basis for saying "the normal variation is X," then we can compare our current data to that variation. That is how climate scientists can say with certainty that we are in a warming period. The only controversy (and it isn't as controversial as it is loud) is the reason for the warming period.

A similar design was used in the Dixon study. It ain't rocket science, it is statistics, so it is tougher.

It's an issue of what you are controlling for. In that study the type of cranks used was not controlled for. Hence you cannot make any claim wrt the role of cranks, versus, say, the role of training, in the reported outcomes.
 
FrankDay said:
Ugh, this is what you wrote: I asked you where that came from. Since you invoked my name, it seemed obvious you thought I said that somewhere. Otherwise, I would have thought you might have said "Frank said no such thing" if you were trying to correct a misperception on Jay Kosta's part.

Subtlety in communication is not your strong point.
 
FrankDay said:
Sure they can. How on earth can anyone do any climate research if one requires a control group? Where is the control planet? All a control group provides is an expectation of the null hypothesis outcome. If we have a historical basis for saying "the normal variation is X," then we can compare our current data to that variation. That is how climate scientists can say with certainty that we are in a warming period. The only controversy (and it isn't as controversial as it is loud) is the reason for the warming period.

A similar design was used in the Dixon study. It ain't rocket science, it is statistics, so it is tougher.

Frank, every time you type something regarding statistics you look more and more stupid. The only reason you are still writing about an abstract of a study that has not been published in the 8+ years since it was presented as an oral abstract is that you are completely ignorant of study design and statistical methods.
 
Alex Simmons/RST said:
It's an issue of what you are controlling for. In that study the type of cranks used was not controlled for. Hence you cannot make any claim wrt the role of cranks, versus, say, the role of training, in the reported outcomes.
My friend, that is your OPINION. Apparently it was the opinion of the authors and the CSEP that there was a control for the type of cranks; otherwise p-values, which are used to calculate the chance of the null hypothesis being satisfied, could not have been calculated.
 
elapid said:
Frank, every time you type something regarding statistics you look more and more stupid. The only reason you are still writing about an abstract of a study that has not been published in the 8+ years since it was presented as an oral abstract is that you are completely ignorant of study design and statistical methods.
My friend, there is more than one way to skin a cat, just as there is more than one way to design a study or to pedal a bicycle. You seem to be locked into just one way of doing either of those things. No study is perfect; Dixon is no exception. It is what it is, but they calculated p-values, so they tested the null hypothesis. If their assumption was bad then, perhaps, their calculations are flawed, but no one has yet presented any data to suggest that their assumption was bad. Power tends to drop in most people immediately after the racing season is completed, not go up.

Dixon hasn't come here and answered questions as to why he did what he did. But neither has Burns come here and answered questions as to why he made some of the really dumb choices he did, especially when it was promised he would do something completely different when we provided cranks for his study. Why would either one of them do that when there is so much hostility here?
 
FrankDay said:
My friend, there is more than one way to skin a cat, just as there is more than one way to design a study or to pedal a bicycle. You seem to be locked into just one way of doing either of those things. No study is perfect; Dixon is no exception. It is what it is, but they calculated p-values, so they tested the null hypothesis. If their assumption was bad then, perhaps, their calculations are flawed, but no one has yet presented any data to suggest that their assumption was bad. Power tends to drop in most people immediately after the racing season is completed, not go up.

Dixon hasn't come here and answered questions as to why he did what he did. But neither has Burns come here and answered questions as to why he made some of the really dumb choices he did, especially when it was promised he would do something completely different when we provided cranks for his study. Why would either one of them do that when there is so much hostility here?

No, I am not commenting on Dixon's study. I am commenting on your continued inability to comprehend a control group. P-values were calculated on the same group before and after training with PCs. That is not a control group. It is very possible to calculate p-values without a control group, as shown by Dixon. As previously stated, every time you discuss study design and statistics, you just end up looking incredibly stupid. I wish I could be more polite, but then again I know from past experience that you would rather argue incessantly and ignore all the scientists and researchers on this thread telling you how monumentally you fail to understand this stuff. This is again appropriate for you:

[image: triple facepalm]


If Dixon is scared of coming on to this forum then it is more than likely because he does not want to be associated with you and your ignorance of study design and statistics.
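The distinction being drawn here can be made concrete: p-values computed on one group before and after an intervention come from a paired test, which measures *change* but cannot attribute it to the intervention. A minimal Python sketch, with invented numbers (not data from Dixon or any other study in this thread):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: tests whether the SAME group changed from
    pre to post. It cannot say whether the change came from the
    intervention or from training itself; separating those requires a
    concurrent control group."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical power outputs (watts) before and after a training block.
pre  = [250, 262, 240, 255, 248, 260]
post = [265, 275, 255, 268, 262, 274]
t = paired_t(pre, post)
# A large t shows the group improved, but with no control group riding
# normal cranks over the same period, the cranks and the training
# effect are confounded.
```

A two-group design would instead compare the intervention group's changes against a control group's changes, which is exactly the comparison the paired design cannot make.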
 
Can someone loan Frank a stats and research design text? All this laughing is killing me. I might need a real doctor.
 
elapid said:
No, I am not commenting on Dixon's study. I am commenting on your continued inability to comprehend a control group. P-values were calculated on the same group before and after training with PCs. That is not a control group. It is very possible to calculate p-values without a control group, as shown by Dixon. As previously stated, every time you discuss study design and statistics, you just end up looking incredibly stupid. I wish I could be more polite, but then again I know from past experience that you would rather argue incessantly and ignore all the scientists and researchers on this thread telling you how monumentally you fail to understand this stuff. This is again appropriate for you:

[image: triple facepalm]


If Dixon is scared of coming on to this forum then it is more than likely because he does not want to be associated with you and your ignorance of study design and statistics.
Whatever...
 
To be fair, Frank has been revising physiology and physics for the last 13 years, so why not add the scientific method and mathematics to the list?
 
FrankDay said:
My friend, there is more than one way to skin a cat, just as there is more than one way to design a study or to pedal a bicycle. You seem to be locked into just one way of doing either of those things. No study is perfect; Dixon is no exception. It is what it is, but they calculated p-values, so they tested the null hypothesis. If their assumption was bad then, perhaps, their calculations are flawed, but no one has yet presented any data to suggest that their assumption was bad. Power tends to drop in most people immediately after the racing season is completed, not go up.

Dixon hasn't come here and answered questions as to why he did what he did. But neither has Burns come here and answered questions as to why he made some of the really dumb choices he did, especially when it was promised he would do something completely different when we provided cranks for his study. Why would either one of them do that when there is so much hostility here?

You make so many assumptions here. No one can challenge their assumptions because no one has any of the data they used.
What was their training before the intervention?
What level were they at and how long have they been training?
Was the first test a true max test, were they rested for the test?
Was the training with PC relative to the training they did prior to intervention, or was it standard across all riders?

The fact that they calculated p-values doesn't mean they rejected the null hypothesis; it just means they calculated some values based on assumptions. If you make the wrong assumptions, the statistics are meaningless. The fact that CSEP accepted it for oral presentation doesn't mean anything at all, and you know that.
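The point about assumptions can be illustrated with a toy simulation: if every rider improves from ordinary training alone, a pre/post comparison still yields a "significant" result even though the equipment contributed nothing. A Python sketch, with all numbers invented:

```python
import math
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the toy example is reproducible

# Toy model (all numbers invented): every rider gains roughly 5% from a
# training block regardless of equipment, plus day-to-day noise.
pre = [random.gauss(250, 20) for _ in range(10)]
post = [p * 1.05 + random.gauss(0, 5) for p in pre]

diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
# t comes out large, yet by construction the "intervention" did nothing
# beyond ordinary training: a significant pre/post difference does not
# identify its cause. Only a concurrent control group can do that.
```

This is exactly why the same p-value supports opposite conclusions depending on whether you assume riders would otherwise have stayed flat or improved anyway.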
 
JamesCun said:
You make so many assumptions here. No one can challenge their assumptions because no one has any of the data they used.
What was their training before the intervention?
What level were they at and how long have they been training?
Was the first test a true max test, were they rested for the test?
Was the training with PC relative to the training they did prior to intervention, or was it standard across all riders?

The fact that they calculated p-values doesn't mean they rejected the null hypothesis; it just means they calculated some values based on assumptions. If you make the wrong assumptions, the statistics are meaningless. The fact that CSEP accepted it for oral presentation doesn't mean anything at all, and you know that.

Your questions are all valid questions. I doubt all those questions would be answered even if you had the full study, as few studies (in my experience) go into such detail. Sometimes you have to trust the researcher to have done his or her job. If you don't believe the result, you can always choose to repeat the study to confirm or refute it (remember cold fusion?).

Further, I haven't made many assumptions here. I didn't design this study. I didn't do this study. I didn't write this study up. The senior author on the study is quite an experienced researcher, so one might assume he knows what he is doing when it comes to study design and statistical computation, and I would assume he wouldn't put his name on a study he didn't think was reasonably well done and valid, but that is about it when it comes to assumptions. While it is true we do not have access to the raw data to check their results, this is the case with many studies, even when one has the entire write-up. Lots of stuff is left out of even the most detailed published study because, even if it is initially put in by the authors, it will be sent back by the editors to be cut down, as there is only so much room in the journal and other studies deserve to be published also. Choosing which of the many submissions they get to publish is one of the more difficult jobs of the editors. Those that don't make the cut for the journal tend to make an annual supplement to the journal where hundreds of study abstracts get published, and that is all you get. If you want more on one of these, you have to contact the author.

I have made the assumption that the authors intended to do the best study they could with the resources and time available to them (why would they do anything else?), which resulted in the choices they made. I have simply tried to explain the study design and the basis of the statistical analysis such that one might understand the study was not completely without merit. Do you really believe the CSEP would choose a study for oral presentation at their annual meeting that they believed had zero merit? That such a choice by them means NOTHING? Really, NOTHING?!!!

You may not like the study design. You may not like the study results. But the study exists, "published" by a reputable organization. It is the only study out there that has looked at immersion training and PowerCranks, and it showed a benefit. Probably just a coincidence, I know. Anyhow, if you believe that, then it would probably be more effective to repeat the study with a "real" control group to prove the null hypothesis than, say, to come here and call me names for pointing out the study exists. (Edit: Burns also used an immersion design, but it only lasted 5 weeks, and a careful analysis suggests that there actually was a positive result compared to their control. The problem with Burns was that the control group got worse, and this couldn't be explained by the authors, although I think it might be explained by a careful analysis of their study design.)

Essentially every choice you make in training has zero scientific basis. Essentially every choice you make in training and racing is based upon anecdotal reports and the "gut" feeling of "experts." The reason for this is that it is very difficult to conduct any study (let alone an excellent study) in this area. At least people are trying to study PowerCranks. Dixon is the only one who has attempted to follow the manufacturer's instructions for best benefit, and he has shown a benefit. You (and others) choose to ignore this result because it doesn't fit your bias and because you can criticize the design. Instead, you choose to believe "studies" that ignore the manufacturer's instructions and don't last very long, but have a control group and don't show a statistical difference.

The most difficult part of science is not in the collecting of the data but in the interpretation of the data. And one can always use more data.
 
FrankDay said:
Anyhow, if you believe that, then it would probably be more effective to repeat the study with a "real" control group to prove the null hypothesis than, say, to come here and call me names for pointing out the study exists.

Frank, you are doing more than pointing out the study exists, and you know it. You are arguing incorrectly that the study has a control group. It does not. You are arguing that the study shows a benefit for training on PCs because of some statistically significant results. It does not. All it shows is a statistically significant improvement in performance before and after using PCs. That's all. Because it does not include a control group, there is no way you can tell if this improvement was because of PCs or a training effect.
 
FrankDay said:
The senior author on the study is quite an experienced researcher, so one might assume he knows what he is doing when it comes to study design and statistical computation, and I would assume he wouldn't put his name on a study he didn't think was reasonably well done and valid, but that is about it when it comes to assumptions. While it is true we do not have access to the raw data to check their results, this is the case with many studies, even when one has the entire write-up. Lots of stuff is left out of even the most detailed published study because, even if it is initially put in by the authors, it will be sent back by the editors to be cut down, as there is only so much room in the journal and other studies deserve to be published also. Choosing which of the many submissions they get to publish is one of the more difficult jobs of the editors. Those that don't make the cut for the journal tend to make an annual supplement to the journal where hundreds of study abstracts get published, and that is all you get. If you want more on one of these, you have to contact the author.

The reviewers, and not the editors, are the first line in deciding what gets accepted, revised or rejected. Reviewers will often ask for more information, not less, because more information better explains the study.

Do we even know if Dixon submitted the study for publication? In my field, less than 25% of studies presented at conferences end up being published. This is because of poor studies (and yes, they still get accepted for presentation at well-regarded meetings), larger and/or redesigned follow-up studies being started which make the earlier studies less enticing to submit, or reviewers rejecting the papers for various reasons, from lacking scientific merit or not contributing to the scientific literature to poor study design or statistical analyses. If Dixon is an experienced researcher then he probably already knows that this study was not publishable because of the obvious limitations of the study, and this is just one of many studies that end up on the scrap heap.
 
elapid said:
The reviewers, and not the editors, are the first line in deciding what gets accepted, revised or rejected. Reviewers will often ask for more information, not less, because more information better explains the study.
That may be true when deciding to publish or not (I, myself, have had the editors come back and invite me to expand on a letter I wrote criticizing an editorial that was printed), but once that decision is made they may ask that the text/data be reduced in size to fit a length constraint, unless the study is a particularly important one. Show me a single study in which all of the questions asked about this study are included. Dixon is one of the few that includes some information regarding the length (8 hrs/week) and intensity (80/20 aerobic/anaerobic) of the training. Burns, for instance, only states that the length and intensity were matched between groups.
Do we even know if Dixon submitted the study for publication? In my field, less than 25% of studies presented at conferences end up being published. This is because of poor studies (and yes, they still get accepted for presentation at well-regarded meetings), larger and/or redesigned follow-up studies being started which make the earlier studies less enticing to submit, or reviewers rejecting the papers for various reasons, from lacking scientific merit or not contributing to the scientific literature to poor study design or statistical analyses. If Dixon is an experienced researcher then he probably already knows that this study was not publishable because of the obvious limitations of the study, and this is just one of many studies that end up on the scrap heap.
Lots of crap gets accepted for publication. And lots of good studies get rejected. Some of that has to do with the biases of the reviewers. Some has to do with what they think the readership will be more interested in. I don't believe Dixon was an experienced researcher. But Cheung was/is. It is pretty common that the first author, the one who did the majority of the work, is not very experienced but that there is an experienced senior author who is advising/reviewing the work before submission. Like I said, if the CSEP believed this study had zero merit, I doubt they would have chosen it for oral presentation, because they have no interest in wasting the membership's time.
 
FrankDay said:
We believe the main benefit of training with PowerCranks is the training of a new, more efficient, more powerful technique.


This is not a new technique; except for using both legs, it is no different from single-legged pedalling, in which the objectives of the circular technique are used.
 
coapman said:
This is not a new technique; except for using both legs, it is no different from single-legged pedalling, in which the objectives of the circular technique are used.
Single-legged pedaling does not train two-legged coordination, nor does anyone do the drill long enough to train the 5-hour endurance.
 
FrankDay said:
Single-legged pedaling does not train two-legged coordination, nor does anyone do the drill long enough to train the 5-hour endurance.

How does coordination increase power? How would you describe the pedalling of a PC'er on his return to standard cranks?
"In terms of fitness, the use of one leg at a time allows a greater volume of blood flow through the leg per minute (or unit of time) than happens with both legs working together. There is a great mass of muscle in both legs put together and together they can use more blood each minute than the heart can physically supply. When one leg works alone there is plenty of reserve capacity in the heart and a lot of blood can flood into the leg. This increased blood volume may increase the muscle adaptations that are one important outcome that endurance athletes require. The development of larger blood vessel networks in muscle will allow more nutrients to reach all parts of the active muscle more quickly."
 
FrankDay said:
1. Once that decision is made they may ask that the text/data be reduced in size to fit a length constraint, unless the study is a particularly important one. ... 2. Lots of crap gets accepted for publication. ... 3. And lots of good studies get rejected. Some of that has to do with the biases of the reviewers. Some has to do with what they think the readership will be more interested in.

1. No.
2. Yes, because authors submit crap papers to crap journals.
3. No. Reviewers are typically blinded to the authors and the institution, so there are no personal biases. There are a minimum of two reviewers from different institutions, and typically three, so an individual reviewer's bias against the topic or the results is negated. The paper is submitted to the appropriate journal. If an author is stupid enough to submit an exercise physiology manuscript to an engineering journal (as an extreme example), then that is the fault of the author. Papers will not be rejected if an exercise physiology paper is submitted to an exercise physiology journal, because the readership is exercise physiologists.

You are again showing your ignorance of the publishing world and everything to do with scientific publications. When will you learn to give up?
 
