The Powercrank Thread

Page 25 - Get up to date with the latest news, scores & standings from the Cycling News Community.
CoachFergie said:
That has to be the stupidest thing I have seen Frank write.

That would take some doing. I don't think it's stupid per se. Once might be considered an honest mistake.

What's stupid is he's repeated the same mistake so many times since 2006.
 
Alex Simmons/RST said:
Here's a study over a similar time frame that did have a control group. And got published.

http://journals.humankinetics.com/i...ng-with-uncoupled-cranks-in-trained-cyclists-

What do we see? No difference when compared with a control group using normal cranks.

Mind you, it's all deja vu given this was posted on page one of this thread.
LOL. Let's compare the two studies. One (Dixon) used PowerCranks immersion training, minimum 8 hours per week for 6 weeks: 48 hours of intervention. The other (Burns) involved only 5 weeks of training; the number of hours per week was not specified but was probably much fewer than 8, since Burns stated in his thesis that the stimulus of his subjects was similar to Luttrell (3 hours a week). It is true that Burns did not find a statistically significant difference between groups, but a simple look at the data shows a positive trend for the PowerCranks group, such that one could assume statistical significance would likely have been reached had the test lasted longer; perhaps just one more week would have done it. Check out figure 7 of Burns' master's thesis. Further, Dixon specified 20% of the time was anaerobic.

I think it is clear Dixon involved a stronger intervention for a longer period of time than Burns. For Burns to negate Dixon it must be an equivalent study. It is not. We have always contended that most users are just beginning to see improvement at 6 weeks. Why anyone would think a study lasting 5 weeks at unknown intensity would be demonstrative of anything in this regard boggles the mind, but you hold on tight to this fantasy. Dixon still counts as proving the null hypothesis.
 
FrankDay said:
LOL. Let's compare the two studies. One (Dixon) used PowerCranks immersion training, minimum 8 hours per week for 6 weeks: 48 hours of intervention. The other (Burns) involved only 5 weeks of training; the number of hours per week was not specified but was probably much fewer than 8, since Burns stated in his thesis that the stimulus of his subjects was similar to Luttrell (3 hours a week). It is true that Burns did not find a statistically significant difference between groups, but a simple look at the data shows a positive trend for the PowerCranks group, such that one could assume statistical significance would likely have been reached had the test lasted longer; perhaps just one more week would have done it. Check out figure 7 of Burns' master's thesis. Further, Dixon specified 20% of the time was anaerobic.

I think it is clear Dixon involved a stronger intervention for a longer period of time than Burns. For Burns to negate Dixon it must be an equivalent study. It is not. We have always contended that most users are just beginning to see improvement at 6 weeks. Why anyone would think a study lasting 5 weeks at unknown intensity would be demonstrative of anything in this regard boggles the mind, but you hold on tight to this fantasy.

Where is the Dixon control group data Frank? Or are you going to keep lying to people about it?

Let me get this straight. You are ready to claim the outcome of an uncontrolled, unpublished study with a 6-week intervention, but you dismiss the controlled, published study with a 5-week intervention because it wasn't long enough. Just one more week, eh?

Like I said, deluded.


FrankDay said:
Dixon still counts as proving the null hypothesis.
Ah, cool. So Powercranks don't work. It's about time you owned up to that.
 
Alex Simmons/RST said:
Woah - what? The P values are simply from comparison of before and after training results, with the null hypothesis being that training does not work.
Really? What does it mean to have a p-value anyhow? http://en.m.wikipedia.org/wiki/P-value Seems it might have something to do with the null hypothesis, and more than simply comparing before and after training results.
In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1][2] A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a predetermined significance level, often 0.05[3][4] or 0.01. Such a result indicates that the observed result would be highly unlikely under the null hypothesis. Many common statistical tests, such as chi-squared tests or Student's t-test, produce test statistics which can be interpreted using p-values.

In a statistical test, sample results are compared to possible population conditions by way of two competing hypotheses: the null hypothesis is a neutral or "uninteresting" statement about a population, such as "no change" in the value of a parameter from a previous known value or "no difference" between two groups; the other, the alternative (or research) hypothesis is the "interesting" statement that the person performing the test would like to conclude if the data will allow it. The p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true. If this p-value is very small, usually less than or equal to a threshold value previously chosen called the significance level (traditionally 5% or 1% [5]), it suggests that the observed data is inconsistent with the assumption that the null hypothesis is true, and thus that hypothesis must be rejected and the other hypothesis accepted as true.
They do not tell us anything about the role Powercranks play. To get a p value for Powercrank's influence on outcomes you need to compare them with a control group on regular cranks.
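The distinction being argued here can be sketched in a few lines of Python. This is a minimal illustration with invented rider numbers (NOT Dixon's actual data): an exact sign-flip permutation test on pre/post differences, which is one standard way to get a p-value from a before/after design. Its null hypothesis is "training had no effect"; crank type never enters the calculation.

```python
from itertools import product
from statistics import mean

# Hypothetical pre/post VO2max values for six riders after six weeks of
# training -- made-up numbers for illustration, NOT data from any study.
pre  = [58.1, 55.4, 60.2, 57.0, 59.3, 56.8]
post = [61.0, 57.2, 63.5, 58.9, 62.1, 58.0]
diffs = [b - a for a, b in zip(pre, post)]

def sign_flip_pvalue(diffs):
    """Exact two-sided sign-flip permutation test of H0: mean change == 0.

    Under H0 each rider's change is as likely to be negative as positive,
    so we enumerate every sign assignment and count how often the mean
    change is at least as extreme as the one actually observed.
    """
    observed = abs(mean(diffs))
    hits = total = 0
    for signs in product((1, -1), repeat=len(diffs)):
        total += 1
        if abs(mean(s * d for s, d in zip(signs, diffs))) >= observed - 1e-12:
            hits += 1
    return hits / total

p = sign_flip_pvalue(diffs)
print(p)  # 2/64 = 0.03125, since every rider improved
```

A small p here lets you reject "training did nothing", which is all a pre/post-only design can address; attributing the change to the cranks would require a comparison against riders who did the same training on regular cranks.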



But since you claim there was a control group, then please do share the full study so we can review the data properly.
There was a control group: the participants acted as their own controls. A valid study design that you don't understand.
Oh wait, it never got published. I wonder why? Maybe the CSEP wised up.
Lots of valid studies never get published. That is what poster sessions etc. are for. Dixon's study was chosen for oral presentation, usually considered more prestigious than a poster session.
OK, then post the control group data.

Oh wait, you can't give us the control group data can you Frank? That's because there was no control group.
Yes there was, or there could not have been a statistical analysis re the null hypothesis.
Frank, you know this, it's been pointed out to you time and again for the past 8 years and yet you still lie about it. Please stop.
It has been pointed out to you several times that you don't know what you are talking about. The CSEP accepted this paper for oral presentation at their annual meeting. Take up your claims of incompetence with them. See how far you get.
You can't presume anything of the sort from this study. To do so would be delusional.
Sure you can. Dixon's data rejected the null hypothesis. You can, I guess, hope that the 1-in-100-or-so chance it is wrong falls on your side when the study is repeated or a better study is done.
It's the same logical fallacy as follows:
The team gained an average 20% improvement in power by training on my brand of blue coloured bikes. I presume part of the reason they gained power was because of the bike colour.
Did you control for everything but bike color in your study? If not, then your conclusion is invalid. Dixon did control for training effect. His design was such that he expected to see no change or a drop in power in the participants. The fact that he saw increases led to his statistically significant results. It is a valid study design. The fact that you don't understand it is more your problem than Dixon's or the CSEP's.
 
Alex Simmons/RST said:
Where is the Dixon control group data Frank? Or are you going to keep lying to people about it?
Trying to explain Dixon's design to you is like trying to educate my wife about math. I don't have the data. I don't need it. The CSEP saw it and accepted it. That doesn't mean everything is perfect but it is better than nothing. I am forced to accept their calculations just as all of us are. The fact that you do not like a study does not, per se, invalidate it.
 
Alex Simmons/RST said:
That would take some doing. I don't think it's stupid per se. Once might be considered an honest mistake.

What's stupid is he's repeated the same mistake so many times since 2006.

I can't seriously believe anyone could fail to understand that finding p-values for pre- vs post-intervention results doesn't require a control. Bro-science!
 
FrankDay said:
Trying to explain Dixon's design to you is like trying to educate my wife about math. I don't have the data. I don't need it. The CSEP saw it and accepted it. That doesn't mean everything is perfect but it is better than nothing. I am forced to accept their calculations just as all of us are. The fact that you do not like a study does not, per se, invalidate it.

If the data is flawed, how is it better than nothing? I would think you would want all the data from a study that shows a benefit for your product. You can't seriously expect people to accept it simply based on it being accepted as a presentation?? You aren't forced to accept anything; you could check the facts yourself if you really wanted to prove your cranks worked.
 
FrankDay said:
Sure they have made such claims regarding power meters. A few examples:
What Amazon says in trying to sell the book Training and Racing with a Power Meter: There is zero scientific support for the highlighted segments. At least I am glad to see Coggan supports the use of "case studies" (anecdotes) when it suits his purpose.

Or, this is what Joe Friel says: Again, zero scientific support for the highlighted statement.

Or this: Zero scientific support.

Or this: Zero scientific support for those statements.

People have just stopped making such claims here because they know such claims have no scientific basis and they will be asked (by me) to support them with facts. In fact, attempts to scientifically demonstrate a benefit to using a power meter (including one done by our own coachfergie) have never shown even any hope of there being a benefit.

Your point? Are you trying to say there should be no speed benefit to using a fairing over the wheels? Your objection is to the size of the improvement noted. I would agree that is most likely in error due to a poor test design. But they weren't trying to do a scientific study. What they did qualifies as an anecdotal report, which can be good or not so good. In this case it is probably not so good, but that doesn't mean the product does nothing. Fairings are not new technology, and adding a fairing to a bike would be expected to increase the speed for any given power, headwind or not. The only question is how much. And, further, it doesn't really matter, as no one here would buy one anyhow, regardless of how good it was, because it is illegal for racing.

I disagree. What they should have said, if they were doing this as a scientific project, was: "Fairings should improve speed on a bicycle but it is very difficult to control for all the variables to measure this effect on an open road. Our data suggests there is a positive effect from this device but we are unable to accurately quantify it."

Interesting to note that none of your quotes about power meters made any claims of how much you would improve. They discussed making better training choices that might lead to improvements. Can you not see the difference between that and your product claims?

If I start a training log website, do I need a scientific study to say it can help athletes and coaches track training and make decisions to help improve performance? If I say that my software has been proven to improve power by 40%, I would need to show that with evidence.
 
JamesCun said:
If the data is flawed, how is it better than nothing? I would think you would want all the data from a study that shows a benefit for your product. You can't seriously expect people to accept it simply based on it being accepted as a presentation?? You aren't forced to accept anything; you could check the facts yourself if you really wanted to prove your cranks worked.
That is why statistics is necessary to help the researcher/observer determine if the data is flawed or not.
Statistics is the study of the collection, organization, analysis, interpretation and presentation of data.…
Interpretation of statistical information can often involve the development of a null hypothesis in that the assumption is that whatever is proposed as a cause has no effect on the variable being measured.

The best illustration for a novice is the predicament encountered by a jury trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of the guilt. The H0 (status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one can not "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
What I would like and what I get are two different things frequently. I suspect the same goes for you. I try to do the best I can with what is available to me. Anyhow, sure you can expect people to accept it based upon "a presentation" since it was "a presentation" at a national meeting where all of the data was available as was the researcher to be asked questions. What we are seeing is simply the summary of that presentation/study. The fact that you don't like it is of little concern as the statistics that were done indicate that there is only a tiny chance (about 1 in 100) that this data is flawed. Like I said, that is why statistics are necessary. Without statistics how do you answer the statement: "This was the coldest summer my town has ever seen therefore global warming is BS"? Or, "the Dixon data is flawed." You answer both of those questions by pointing at the statistics. Dixon's data is unlikely to be flawed. The null hypothesis was rejected on two different counts.
 
JamesCun said:
Interesting to note that none of your quotes about power meters made any claims of how much you would improve. They discussed making better training choices that might lead to improvements. Can you not see the difference between that and your product claims?
Yes I can. Better means, ahem, BETTER. The fact that power meter advocates fail to quantify how much better suggests that they don't know, so they are either lying or guessing, because no betterment has ever been shown by anyone in a scientific analysis. At least with PowerCranks we have been able to actually measure how much better we expect the average user to improve, and we state that. Then we give them a 3-month money-back guarantee (no power meter does that), plus there are actual scientific studies that have shown benefits to the device (although none have documented our 6-9 month claims).
If I start a training log website, do I need a scientific study to say it can help athletes and coaches track training and make decisions to help improve performance? If I say that my software has been proven to improve power by 40%, I would need to show that with evidence.
If you say it has been proven then I would expect you to back that up with the evidence that it has been proven. If you say your expectation is such then you do not need to back it up. What we make is a marketing claim to tell people what we think they might expect should they choose to try our product, and then we back it up with a 90-day money-back guarantee should it not work out for them. Many users continue to report such numbers to us, so we still think the number is pretty good for most new users, if they use them as we suggest. While scientific proof of what we think would be nice, it would be so difficult to do that we will have to settle for smaller studies that suggest there is something to our total claim, and our money-back guarantee.

Marketing claims are full of BS, such as "4 out of 5 dentists surveyed…" (all you need is one survey), "increases your pleasure…" (how do you prove that?), etc. At least we back our claim up with a substantial guarantee.

I know that won't be enough for you but that is the way it is.
 
FrankDay said:
That is why statistics is necessary to help the researcher/observer determine if the data is flawed or not.
What I would like and what I get are two different things frequently. I suspect the same goes for you. I try to do the best I can with what is available to me. Anyhow, sure you can expect people to accept it based upon "a presentation" since it was "a presentation" at a national meeting where all of the data was available as was the researcher to be asked questions. What we are seeing is simply the summary of that presentation/study. The fact that you don't like it is of little concern as the statistics that were done indicate that there is only a tiny chance (about 1 in 100) that this data is flawed. Like I said, that is why statistics are necessary. Without statistics how do you answer the statement: "This was the coldest summer my town has ever seen therefore global warming is BS"? Or, "the Dixon data is flawed." You answer both of those questions by pointing at the statistics. Dixon's data is unlikely to be flawed. The null hypothesis was rejected on two different counts.

Correct, no one has ever used statistics to give multiple interpretations of the same data. We always take the statistics at face value. And if you ask the wrong question, even correct statistics will still give you the wrong answer.

Why don't you simply repeat the Dixon 'experiment' and reproduce the same results? Would be easy and would confirm how beneficial PC are. You could even add a control group training side by side with normal cranks. I'm sure it's common to see a jump in vo2 from 58 to 67(or whatever the numbers) for 'trained cyclists'.
 
JamesCun said:
Why don't you simply repeat the Dixon 'experiment' and reproduce the same results? Would be easy and would confirm how beneficial PC are.
Because, as I have said before, if I were to do it, it would invite the easy criticism of bias injected by the researcher. It would be rejected by essentially everyone from the get-go, regardless of how well it was done. True scientific proof comes from independent researchers doing the work.
 
CoachFergie said:
That has to be the stupidest thing I have seen Frank write.

Of that I am not so sure. To his credit, though, he has made that claim many times before.
 
acoggan said:
Of that I am not so sure. To his credit, though, he has made that claim many times before.

Referring to the suggestion that there had to be a separate control group or they could not have determined the p values.

But, yes, Frank has said some pretty stupid stuff over the years. Performance artist indeed.
 
FrankDay said:
Because, as I have said before, if I were to do it, it would invite the easy criticism of bias injected by the researcher. It would be rejected by essentially everyone from the get-go, regardless of how well it was done. True scientific proof comes from independent researchers doing the work.

What can PC's do for the leg muscles used in the circular or ankling techniques that leg extension/curls gym equipment can't do ?
 
coapman said:
What can PC's do for the leg muscles used in the circular or ankling techniques that leg extension/curls gym equipment can't do ?

Come on Noel, lift your game. You're just encouraging Frank to make more unsubstantiated claims. And while this may be damned amusing it doesn't take us any further. A lot like your lack of evidence for any of your claims despite the technology being available to assess them having been round for the last 40 years.
 
coapman said:
What can PC's do for the leg muscles used in the circular or ankling techniques that leg extension/curls gym equipment can't do ?
=================
If you are only interested in 'the leg muscles', then probably little.
But for actual cycling performance, timing and coordination would be a starting point.

Jay Kosta
Endwell NY USA
 
JayKosta said:
=================
If you are only interested in 'the leg muscles', then probably little.
But for actual cycling performance, timing and coordination would be a starting point.

Jay Kosta
Endwell NY USA

Where exactly in the pedalling circle is this timing being used?
 
JayKosta said:
=================
If you are only interested in 'the leg muscles', then probably little.
But for actual cycling performance, timing and coordination would be a starting point.

Isn't that why we train? And then why we train specifically? On courses similar to where we will compete, trying to find races that are similar to our goal races, using similar gears, inclines, track surfaces, competitive environments, etc.? So we learn the timing and coordination of muscular effort over appropriate durations to be competitive. I suspect this is why Frank sees PowerCrank use harming performance in Ironman athletes: they are performing non-specific training.
 
FrankDay said:
Really? What does it mean to have a p-value anyhow? http://en.m.wikipedia.org/wiki/P-value Seems it might have something to do with the null hypothesis, and more than simply comparing before and after training results.
There was a control group: the participants acted as their own controls. A valid study design that you don't understand.
Lots of valid studies never get published. That is what poster sessions etc. are for. Dixon's study was chosen for oral presentation, usually considered more prestigious than a poster session.
Yes there was, or there could not have been a statistical analysis re the null hypothesis.
It has been pointed out to you several times that you don't know what you are talking about. The CSEP accepted this paper for oral presentation at their annual meeting. Take up your claims of incompetence with them. See how far you get.

Frank, stop. You are making an even bigger fool of yourself than you already have.

For the riders to have acted as their own control, and for p-values to be calculated against such a control, the same riders would also have needed to perform the same training on regular cranks under the same conditions, from the same level of training/fitness, etc.

Yet this data does not exist.

Hence p-values were not calculated based on comparison with a control. It's simply before and after results v null hypothesis (i.e. that training would not have an impact).

FrankDay said:
did you control for everything but bike color in your study? If not then your conclusion is invalid.
Ah, so you've spotted the flaw. So why can't you see that same problem with your interpretation of the Dixon study data?


FrankDay said:
Dixon did control for training effect.
They measured training impact of the intervention but there was no control. If there was a control, then control data would have been shown.

Since you are convinced there is control data, then show us the data.


FrankDay said:
Dixon did control for training effect. His design was such that he expected to see no change or a drop in power in the participants. The fact he saw increases led to his statistically significant results. It is a valid study design. The fact you don't understand it is more your problem than Dixon's or the CSEP.
All that study says is that some guys trained and their VO2max went up. Woopdie do. Incredible insight gained there. :rolleyes:

What it does not, and cannot say is whether the use of your cranks had anything to do with it.

To do that requires a control using regular cranks, be it the same riders at another time performing the same training under same conditions with same starting baseline, or via the use of another control group.

Without controlling for crank use, it tells us nothing about the impact of the cranks. Claiming any impact of use of your cranks is just as invalid as claiming the results were because all of their bikes were blue.
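The missing comparison can be made concrete with a hedged sketch (the gains below are invented for illustration, not from any study): an exact permutation test on group labels, one standard way to compare a treatment group against a control group.

```python
from itertools import combinations
from statistics import mean

# Hypothetical VO2max gains (post minus pre) for two groups that trained
# identically except for crank type -- made-up numbers for illustration.
pc_gains      = [2.9, 1.8, 3.3, 1.9]   # PowerCranks group
control_gains = [2.5, 2.1, 3.0, 1.6]   # regular-crank control group

def label_permutation_pvalue(a, b):
    """Exact two-sided permutation test of H0: group labels made no difference.

    Under H0 the assignment to groups is arbitrary, so we reassign the
    labels in every possible way and count how often the gap between
    group means is at least as large as the gap actually observed.
    """
    pooled = a + b
    observed = abs(mean(a) - mean(b))
    hits = total = 0
    for chosen in combinations(range(len(pooled)), len(a)):
        total += 1
        grp_a = [pooled[i] for i in chosen]
        grp_b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        if abs(mean(grp_a) - mean(grp_b)) >= observed - 1e-12:
            hits += 1
    return hits / total

p = label_permutation_pvalue(pc_gains, control_gains)
```

With these made-up numbers both groups improve, so each would pass a pre/post test on its own, yet the between-group p-value comes out large (52/70, about 0.74), so nothing can be attributed to the cranks. That is the point being argued: without the control comparison, a crank effect is no better supported than a blue-paint effect.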
 
^ where can I get one of these blue bikes?? Is there a full refund guarantee if I fail to see improvement after 9 months of exclusive blue bike use?
 
Tapeworm said:
^ where can I get one of these blue bikes?? Is there a full refund guarantee if I fail to see improvement after 9 months of exclusive blue bike use?

Will a voucher for a can of spray paint do? :)

We have a range of performance enhancing colours. Our testing has shown all our colours work to some degree. We firmly believe there are excellent benefits in combining colours but are not publishing our results on optimal combinations.
 
CoachFergie said:
...
I suspect this is why Frank sees powercrank use harm performance in Ironman athletes because they are performing non-specific training.
========================================
CoachFergie,

Is the above comment the 'PCs harming performance when regular cranks are used' claim that you mention quite often?

I don't recall Frank actually saying that .... , only that power when tested on PC is higher than when tested on regular cranks, BUT that the power on regular cranks IS an improvement over pre-PC training (which I agree can't be DIRECTLY attributed to the PC training, but 'is what it is').

Jay Kosta
Endwell NY USA
 
JayKosta said:
coapman said:
What can PC's do for the leg muscles used in the circular or ankling techniques that leg extension/curls gym equipment can't do ?
=================
If you are only interested in 'the leg muscles', then probably little.
But for actual cycling performance, timing and coordination would be a starting point.

Jay Kosta
Endwell NY USA
Actually, if you are only interested in 'the leg muscles' then they still do a lot. Try doing 5,000 reps an hour, for x hours, of extensions/curls in a gym.

So, using PowerCranks, one is training the muscles not only for better coordination but also for the endurance to maintain that coordination for as many hours as necessary for the kind of racing one does.
 
JayKosta said:
========================================
CoachFergie,

Is the above comment the 'PCs harming performance when regular cranks are used' claim that you mention quite often?

I don't recall Frank actually saying that .... , only that power when tested on PC is higher than when tested on regular cranks, BUT that the power on regular cranks IS an improvement over pre-PC training (which I agree can't be DIRECTLY attributed to the PC training, but 'is what it is').

Jay Kosta
Endwell NY USA
You are correct Jay. Fergie likes to misrepresent what is said by me so he can continue to make negative comments that have no basis.

The whole idea is to train oneself to use a more efficient and more powerful coordination when on the bicycle. It takes a lot of time to change what one has been doing for years into something else that one does without thinking, for hours. The PowerCranks force the needed coordination. In the beginning, when the new muscles being trained are inadequate, the PowerCranks slow the rider down. But as soon as riders start to develop this basic coordination and reasonable endurance (about 6 weeks in most) they start seeing speed improvement.

In these early stages, though, the new coordination isn't ingrained and the endurance is pretty marginal. So, when a rider goes back to regular cranks they will partially revert to their old, less efficient and less powerful coordination, especially after they start to get tired. This results in a loss of power compared to when they are on the PowerCranks, but it is still more powerful than what they were doing before. Fergie likes to jump on "Frank says people lose power when they go back to regular cranks" as if that is a bad thing. It isn't, because all it means is they have more work to do regarding endurance in the new muscles; the goal is to get to the point where there is no drop in power when they go back to regular cranks, and that takes a lot of time for most. If one is properly and completely PowerCranks trained, it shouldn't matter what cranks they race on, as the power should be the same on PowerCranks or regular cranks.