- Mar 18, 2009
FrankDay said:First, it isn't my product so it really isn't my job to do any testing
So are you admitting it is your job to test your products and not rely on other researchers to do your job for you?
FrankDay said:First, it isn't my product so it really isn't my job to do any testing other than what an individual might do, which I am trying to do.
I can't test product I don't have. Having the hardware was just like having an ordinary PM. I saw no issues with reliability on the road, but they have changed the design from what I have, so what is going to ship and what I have are two different things.

sciguy said:Frank,
Since you seem to be the main if not sole promoter of the product in the USA, combined with the fact that you have been highly involved in its development, I'd call that a great reason for you to be involved in the product's testing. You've had access to prototypes for more than a year. That has placed you in a wonderful position to do some truly excellent testing, but for some reason you don't seem to find it worth doing.
Hugh
Testing is not the same as doing scientific study. Scientific proof requires the researcher to be independent to help avoid bias in the results. Anything I do could rightfully be claimed to be potentially biased and, therefore, discounted.

elapid said:So are you admitting it is your job to test your products and not rely on other researchers to do your job for you?
FrankDay said:Testing is not the same as doing scientific study. Scientific proof requires the researcher to be independent to help avoid bias in the results. Anything I do could rightfully be claimed to be potentially biased and, therefore, discounted.
No, I have done all the "research" I need to do for my needs, i.e., to be able to tell potential customers what they might expect from using the product as we suggest. And, of course, I believe in a PM as a measurement tool. Power is an important aspect of riding fast. My problem with PMs and the claims of their advocates is that there is no scientific support for the implication that using one is the best way to train and race.

elapid said:So you keep on saying, and understandably so, but expecting someone else to do your research for you when it is not one of their interests is also a copout. Furthermore, when you do your own testing and claim a 40% improvement in power (yet don't believe in a PM), then you need to be able to back this up.
FrankDay said:I personally don't really care much about how accurate the device is.
To me reliability is more important than accuracy. If my power goes up 10%, does it really matter if the number I am following changes from 100 to 110 or from 200 to 220? Either way I know my power has changed 10%. Hence, when I changed crank length from 145 to 130, the numbers for my typical ride changed dramatically. I could immediately associate one with the other.

It is somewhat akin to putting bigger tires on your car. Immediately your speedometer is off, but it is also pretty easy to learn that when the speedometer now says 50 you are really going 60. It would only be an issue for a driver who didn't know the speedometer was off. It would be better (easier) if I were able to recalibrate the cranks for each crank length, because then I wouldn't have to explain this stuff to people like you, but I can't, so I do.

sciguy said:So let me get this straight. For months you've been hammering how revolutionary you feel Icranks will be in allowing athletes to test how the crank length they use influences the power they generate while in an aero position, and then you turn around and say-
You've made it very clear that you think shorter cranks will allow athletes to generate more power but you don't care if the device you're selling measures that power accurately??????
Color me confused.
Hugh
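The reliability-over-accuracy argument above amounts to a claim that a constant scale error in a power meter cancels out of any percent change. A minimal sketch, with purely illustrative numbers:

```python
def pct_change(before, after):
    """Percent change from before to after."""
    return 100.0 * (after - before) / before

true_before, true_after = 200.0, 220.0  # true watts: a 10% gain
scale = 0.5                             # meter consistently reads half the true value
meas_before, meas_after = true_before * scale, true_after * scale  # reads 100 -> 110

# The miscalibrated readings report exactly the same 10% change as the true watts
assert pct_change(meas_before, meas_after) == pct_change(true_before, true_after)
```

Note the caveat: this only holds for a multiplicative (scale) error. An additive offset would distort the percentage, and changing crank length effectively changes the scale factor, which is why the numbers jumped when the cranks went from 145 to 130 mm.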
FrankDay said:Edit: one more thing, you have misinterpreted what I think about crank length. While I believe some people will be able to generate more power with shorter cranks, that is not always the case. More importantly, I think that most people will be able to get into improved aerodynamic positions without compromising power if they were to go to shorter cranks.
I would like to open a discussion regarding the importance of crank length to bicycle racing or cycling in general.
I have posted an "essay" of my thoughts as part of my web site here. I would be interested in constructive criticism to help me fine-tune my arguments or correct any obvious errors.
In summary I feel that shorter cranks do several things for the cyclist.
1. Shorter cranks will improve power output for most.
2. Although this goes completely against the conventional wisdom, shorter cranks can reduce knee stress.
3. Shorter cranks allow better aerodynamic positioning without sacrificing power.
And, in general, we are talking substantially shorter than what most would consider to be a short crank. Our data suggests that around 100 mm crank length would be near optimum for most. I am currently riding 105 mm cranks and they feel good.
I believe there are good reasons that explain the above benefits, and we can discuss them if anyone desires.
If possible I would like to open an actual discussion of this issue based on facts and data rather than bias and opinion.
Yes, somewhat. I am not afraid to change my view as I gather more data. I suspect that the optimum for most (for an aero position) is longer than the 100 mm range I noted then, more in the 120-150 mm range (I am now riding 125 mm cranks). Further, the power improvement that most will see is probably quite small unless they are in an aerodynamic position, but in an aerodynamic position I suspect power may increase considerably in many. It is one of the things I want to try to document and, if I am correct, also see if I can answer the question, why?

sciguy said:Have you changed your mind since you wrote the following?
FrankDay said:Testing is not the same as doing scientific study. Scientific proof requires the researcher to be independent to help avoid bias in the results. Anything I do could rightfully be claimed to be potentially biased and, therefore, discounted.
All new power meters come with warranties (and this is the law in many countries), and if they are not measuring power to specification, then all the power meter manufacturers will repair or replace the unit as normal.

FrankDay said:Try and get that from a PM manufacturer.
Here is what you have to do to replicate my data, shouldn't be too hard.

Alex Simmons/RST said:A thorough study which provides details on process and the raw data can be examined by non-biased parties and, importantly, enables such an experiment to be replicated by others in order to assess if results are repeatable.
FrankDay said:Here is what you have to do to replicate my data, shouldn't be too hard.
Get a bunch of average cyclists and have them agree to train as they are used to training but doing so on PowerCranks exclusively for 9 months while periodically testing them and see if and how they change.
Ball is in your court.
I went to a local cycling club and asked for volunteers to participate in my study. I think I had 10 sign up. Not all finished; it is quite a commitment. If I remember right, I tested them, after the first test, on PowerCranks, because if they couldn't get to the same ending HR at "exhaustion" that was evidence they were not adequately trained on PowerCranks. Of course, subsequent to that test we have had customers report similar results when testing themselves.

elapid said:1. Where's your data?
2. Your methodology is poorly described. For a study to be repeatable, not only should the results be repeatable, but those results should be based on using the same methodology. What is a bunch (i.e., define your study population)? What is an average cyclist? What time interval do you define as "periodically"? How do you "test" them? How do you define change and what statistical tests did you use to analyze these changes?
Ball is only in someone else's court if there is a study to test repeatability. Ball is in your court for the initial study.
FrankDay said:I went to a local cycling club and asked for volunteers to participate in my study. I think I had 10 sign up. Not all finished, it is quite a commitment. If I remember right, I tested them, after the first test, on PowerCranks because, if they couldn't get to the same ending HR at "exhaustion" that was evidence they were not adequately trained on PowerCranks. Of course, subsequent to that test we have had customers report similar results when testing themselves.
The same methodology involves asking people to train on the cranks exclusively and testing them at monthly intervals using a step test (20 watt increments every 2 minutes) to exhaustion. In my data the 40% max power improvement occurred as early as 6 months, so to "replicate" my data you have to go at least that long. Post-testing must be done on PowerCranks to demonstrate adequate training stimulus, as noted above. It doesn't matter what my raw data shows; all that matters is whether the outcome is similar or not.
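The step test protocol described above can be sketched in code. Only the 20 W / 2 minute increment is specified in the thread; the starting wattage, number of steps, and scoring rule (last fully completed step) are assumptions for illustration:

```python
def step_test_schedule(start_watts=100, increment=20, step_minutes=2, steps=15):
    """(start_minute, target_watts) pairs for an incremental step test to exhaustion."""
    return [(i * step_minutes, start_watts + i * increment) for i in range(steps)]

def max_power(completed_steps, start_watts=100, increment=20):
    """Score the test as the wattage of the last fully completed step (assumed rule)."""
    return start_watts + (completed_steps - 1) * increment

def improvement_pct(before_watts, after_watts):
    """Percent improvement between two tests, e.g. baseline vs. month 6."""
    return 100.0 * (after_watts - before_watts) / before_watts

# A rider completing 12 steps scores 100 + 11 * 20 = 320 W; a later test of
# 350 W against a 250 W baseline would be the kind of 40% change claimed here.
```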
It doesn't matter how many there were. It is what we got, and subsequent data has supported that number. I am quite confident that if anyone even comes close to replicating what was done, with a reasonable number of people, the results will be similar. If you do this study it isn't important what results I got; all that counts are the results you get.

elapid said:Thanks for the update. But there are still too many holes. You think 10, but how many actually was it? Some dropped out, but how many is "some"? How many were excluded because of so-called inadequate training? Out of a possible 10, your sample size is looking awfully small with dropouts and exclusions. That's one immediate criticism.
Yes, a good study should have a control group, but I wasn't doing a study. I was simply trying to gather data so I could tell people what they might expect if they purchased my product. One good thing about including a control group is one might be able to say the PC group increased 40% and the control group increased 20%; therefore, PC's are worth an extra 20%. Instead, we are stuck lumping all the reasons a person might improve together. Anyhow, good luck with that. It is hard enough to get people to commit to a 6 week study. Try your luck getting commitments to a 6 month study.

elapid said:You should also have a control group - a similar group and number of cyclists of similar "average" standard with a similar training program and followup testing, but on "standard" cranks. No use saying PowerCranks result in a 40% increase in max power when you don't know if those that use standard cranks would or would not get a similar increase in max power over the same time period with the same training regimen.
My guess is that, since you have characterized 40% as bull****, a statistical analysis of a 40% change, even in a small population, when the expected improvement from training effect might be about 10%, will result in P<.01. Anyhow, you are welcome to do a statistical analysis of your large-pool data when you get it. Good luck, have fun.

elapid said:Raw data is raw data, but it needs to be analyzed statistically. Statistics can always be criticized, but there is no validity to a study that does not include sufficient population size and adequate statistical analysis.
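The statistical claim above, that a ~40% mean change against an expected ~10% training effect would reach significance even in a small sample, can be checked with a Welch t statistic. The per-rider numbers below are hypothetical, invented purely to illustrate the effect-size argument, not taken from any actual PowerCranks data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

pc_group = [38.0, 42.0, 45.0, 35.0, 40.0]  # hypothetical % gains, mean 40
controls = [8.0, 12.0, 10.0, 9.0, 11.0]    # hypothetical % gains, mean 10

t = welch_t(pc_group, controls)
# For a separation this large, t comes out around 16, far beyond the ~4.03
# critical value for p < .01 (two-tailed) at the roughly 5 degrees of freedom
# a 5-per-group sample allows. Small effects, by contrast, need large samples.
```

This is the sense in which "large sample sizes are necessary to show small differences": statistical power depends on effect size relative to variability, so a genuinely large effect can be detected with few subjects, though small samples remain vulnerable to outliers and selection bias, as the replies point out.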
Large sample sizes are necessary to show small differences. They are not needed to show large differences. I suspect you consider a 40% improvement a "large difference" since you call it bull****.

elapid said:Sample size does matter. If there are only two cyclists left in your so-called study, then they may both be outliers. The bigger the sample size, the more robust your findings.
No, the object of further study is to confirm or refute prior studies. What is important is whether you follow the same protocol. Regardless of what Fergie thinks, a study lasting 5-6 weeks of part-time intervention cannot refute data coming from 6-9 months of exclusive intervention. If your data shows no difference, then that really puts my data into question. If your data supports my data, then one might conclude I was right all along. If your data is somewhere in the middle, then maybe a third person needs to do the study to help decide where the answer really is. And, ugh, you know my results: about 40% power improvement using a step test to exhaustion after 6-9 months of training on the device. Your getting the raw data doesn't change that (I am not even sure I know where it is now).

elapid said:What results you get does matter, because the aim of any further studies is to validate your results. How can your results, and hence your impartiality, be validated if we don't know your results?
No.

elapid said:If it is hard to get people to commit to testing PowerCranks, then doesn't that tell you something about your product?
If you say so.

elapid said:Ball is in your court again. At this stage you do not have a study for someone to replicate.
FrankDay said:Here is what you have to do to replicate my data, shouldn't be too hard.
Get a bunch of average cyclists and have them agree to train as they are used to training but doing so on PowerCranks exclusively for 9 months while periodically testing them and see if and how they change.
Ball is in your court.
As I stated, there was no control group. But our expectation was that it was unlikely that any group of experienced cyclists would see a 40% increase in one season of ordinary training, so it seems reasonable for us to make a positive claim. If we had only seen a 5% change, one would have to wonder if it was PC's or training effect. But it wasn't 5%, it was 40%. Our claim is what we observed after people trained on the device for a long period of time.

Alex Simmons/RST said:How did the control group with the same characteristics perform?
coapman said:You stated in an earlier PC thread that, if a rider is already using the perfected circular technique, he will not get any power gains from PC's. Is that still the case?
Well, improvement is always possible, but I don't think that any improvement being seen would be technique related.

coapman said:Still waiting for an answer.
