New study shows leg flexion less efficient than extension.

Apr 21, 2009
3,095
0
13,480
Wikipedia you linked! Not quite the same as a journal article or academic text. A real scientist would understand that.
 
Sep 23, 2010
3,596
1
0
Re:

CoachFergie said:
Wikipedia you linked! Not quite the same as a journal article or academic text. A real scientist would understand that.
Well, it would seem all you would have to do would be to link to a better, more correct source, and point out specific errors I might have made, which I noticed you failed to do. LOL.
 
Mar 10, 2009
2,973
5
11,485
Re: Re:

FrankDay said:
Alex Simmons/RST said:
Like I said before, it's just more statistical comedy.

Frank, you can't (legitimately) use p values in the manner you have. It's no more complex than that.

Doing so demonstrates that you either don't understand this, or if you do, that you are deliberately attempting to mislead others.
Well, according to the article I linked it is preferable to tell people what the P value is and let them draw their own conclusion as to the worth of the data rather than to draw some arbitrary boundary that is or is not crossed to define significance. You are the folks that are misusing this data trying to imply these studies mean more than they do.
I agree that providing the p value is a good idea as well as the data so people can draw their own conclusions.

But what you can't do is suggest the p value tells us what you say it does. You are either making a fundamental mistake in interpreting the p value, or you are being deliberately misleading. Which is it Frank?

Here's a neat way of showing this common mistake:
http://www.graphpad.com/guides/prism/6/statistics/index.htm?interpreting_a_large_p_value_from_an_unpaired_t_test.htm
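
To see the point behind that link, here is a minimal sketch in Python (assuming numpy and scipy are available; the group sizes, means and spread are made up for illustration): with small samples, a real underlying difference can easily produce a large p-value.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two groups whose true means genuinely differ, measured with noise,
# only 9 riders per group.
group_a = rng.normal(loc=20.9, scale=1.5, size=9)
group_b = rng.normal(loc=20.3, scale=1.5, size=9)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value here does not show the groups are the same; it only
# shows this particular sample failed to reject the null hypothesis.

With an effect this small relative to the noise and so few subjects, most runs of this sketch land well above p = 0.05 even though the difference is real by construction.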
 
Jul 25, 2012
12,967
1,970
25,680
Re: Re:

King Boonen said:
1. Actually Frank that is exactly what you have done by attempting to apply a continuous probability to a p-value and it is complete and total rubbish. A p-value is a yes/no statistic.
FrankDay said:
Phooey! https://en.wikipedia.org/wiki/P-value
Before the test is performed, a threshold value is chosen, called the significance level of the test, traditionally 5% or 1%...An equivalent interpretation is that p-value is the probability of obtaining the observed sample results,

You clearly do not understand what you are talking about Frank or you are trolling and misdirecting, I'm going with the second two options otherwise you wouldn't have replied. You have applied a CONTINUOUS PROBABILITY to a discrete statistic. To do this with a p-value you would need to have 100 hypotheses and apply a separate limit to each one and it is complete and utter junk.

Your edit is trolling and will be reported as such, as you have left out the fact that this is applicable when THE NULL HYPOTHESIS IS TRUE (i.e. there IS NO SIGNIFICANT DIFFERENCE), it has nothing to do with the interpretation of the data that you are attempting to apply it to. The worst thing is you have quoted the wikipedia page verbatim in your next post, which includes this:

wikipedia said:
The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either. In fact, frequentist statistics does not and cannot attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero and the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily), Lindley's paradox. There are also a priori probability distributions in which the posterior probability and the p-value have similar or equal values.[18]

It's stated right there, in the very first sentence of point one of common misunderstandings, you'll notice I've not tried to edit any of it as I don't have to. You have the gall to quote something you clearly do not understand and then suggest others should read it, maybe you should.

King Boonen said:
2. No, it does not. Again you are attempting to apply a linear scale of significance to a p-value and that cannot be done. All it allows you to do is reject the null hypothesis if it is less than the chosen level of significance, it is not in any way related to "how sure" they can be of the results as it is purely a statistical calculation and may not relate to a real difference anyway. This is a version of the prosecutor's fallacy.
FrankDay said:
Again, from the article.
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results,

Again, this is trolling, refer to my previous point. You have taken a section of a webpage, edited it and applied it to something it is not referring to.

King Boonen said:
3. Standard practice when the results do not allow you to reject the null hypothesis is to consider increasing sample size or redesign your experiment but a p-value will give you no indication of whether this will be successful as IT IS NOT A CONTINUOUS STATISTIC. Repeating the exact same experiment is extremely unlikely to give the same p-value.
FrankDay said:
I agree that repeating the exact same study is unlikely to give the exact same result and that the p-value gives no indication, in and of itself, as to whether changing the study design might affect the outcome, but if one can, from their education and experience and from the hypothesis they are trying to test, discern a reason why the study didn't reach the arbitrary significance level, then one might be able to redesign the study to see if they are correct or not.

You even state it here, right there in your own sentence. I've even made it bold for you. If the p-value cannot give any indication as to whether an experimental design change will change a result to be significant, IT CANNOT BE RELATED TO THE PERCENTAGE CHANCE OF ACHIEVING A RESULT, by the very definition.

FrankDay said:
Failure of a study to reach the arbitrary significance level is not evidence that the hypothesis is incorrect per se, only that the study as completed did not demonstrate the difference required by the arbitrary choice of the study design. But, if a trend is seen in the data then it is reasonable to look to see if a different design (more subjects, more time, etc) might uncover the "truth". It is why when studies are published they also include the methods and the raw data so others might see errors in the design or interpretation that might lead to better follow on studies. If all we got was "I studied this and found no difference" what does that mean?

It means that based on the statistics used and the limits applied THERE IS NO SIGNIFICANT DIFFERENCE IN THE DATA. You can look at the data and interpret it in any way you want, but you cannot attempt to use the p-values in any other way than a discrete, yes/no statistic. That is what a p-value is. It is pretty much possible to get any answer you want out of a data set by over-fitting your statistics; that's why researchers decide up front what statistics they will use and what their confidence limits will be. You would know this if you had ever actually been involved in any research.

FrankDay said:
The most difficult part of doing a study is the interpretation of the data. Simply looking at whether a study reaches the arbitrary statistical significance cut-off level as the only indicator of the study's worth is the lazy way out.

No it is not. It is applying the statistics chosen at the start of the experiment to test the hypothesis put forward. There are methods you can use if you want a continuous interpretation of your data, I even pointed that out in my reply to you (Bayesian Statistics) but you have completely ignored that as I'm guessing you do not even know what it is. You are attempting to cherry-pick results based on a complete misunderstanding (which I believe is purposeful) of how the statistics applied can be interpreted. I see this constantly in poorly written papers and reports that get rejected. The p-value is used because it is very simple to calculate, but it is one of the most misunderstood and misinterpreted statistics used because of this. You cannot interpret it in the way you are attempting to do.
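
On the point about repeating the same experiment, a small Python simulation (the effect size, spread and group size are assumptions, not taken from any of the studies discussed) shows how widely the p-value swings when the very same experiment is repeated:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
p_values = []
for _ in range(1000):
    # The "same" experiment every time: a fixed true effect of 0.5 SD,
    # 10 subjects per group, only the random sampling changes.
    treated = rng.normal(loc=0.5, scale=1.0, size=10)
    control = rng.normal(loc=0.0, scale=1.0, size=10)
    p_values.append(stats.ttest_ind(treated, control).pvalue)

p_values = np.array(p_values)
print("5th to 95th percentile of p:", np.round(np.percentile(p_values, [5, 95]), 3))
print("share of runs with p < 0.05:", (p_values < 0.05).mean())

Identical designs, identical true effect, yet the observed p-value ranges from well under 0.05 to well over 0.5, which is why a single p-value is not a stable measure of "how sure" anyone can be.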
 
Jul 25, 2012
12,967
1,970
25,680
Re:

Alex Simmons/RST said:
Like I said before, it's just more statistical comedy.

Frank, you can't (legitimately) use p values in the manner you have. It's no more complex than that.

Doing so demonstrates that you either don't understand this, or if you do, that you are deliberately attempting to mislead others.

The best thing is he quotes a wikipedia article verbatim, that actually tells him this, and suggests others should read up on it...
 
Nov 25, 2010
1,175
68
10,580
The 'null hypothesis' is that 'the results are a matter of chance'.
The p-value CAN be used as the probability that the null hypothesis is true - but it doesn't mean anything except about the null hypothesis.

So, a value of 0.05 chosen to indicate 'statistical significance' means that having a probability of less than 5%
THAT THE NULL HYPOTHESIS IS TRUE
was deemed sufficient to indicate that the results WERE NOT DUE TO CHANCE (i.e. being significant).
But it doesn't say anything about what actually DID cause the results. Only that something other than CHANCE is involved.
Edit June 26, 2015 15:30 UTC - should have said "Only that something other than CHANCE is likely to be involved".

Jay Kosta
Endwell NY USA
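
A quick Python sketch of what that 5% threshold controls (the sample size and number of runs are arbitrary choices for illustration): when there really is no difference at all, roughly 5% of experiments still come out "significant" at the 0.05 level.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
runs, false_positives = 10_000, 0
for _ in range(runs):
    # Both groups drawn from the SAME distribution: the null hypothesis
    # is true by construction, so any "difference" is pure chance.
    a = rng.normal(0.0, 1.0, size=12)
    b = rng.normal(0.0, 1.0, size=12)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print("false positive rate:", false_positives / runs)   # comes out near 0.05

So the 0.05 cut-off describes how often chance alone will clear the bar when there is truly nothing there.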
 
Sep 23, 2010
3,596
1
0
Re:

JayKosta said:
The 'null hypothesis' is that 'the results are a matter of chance'.
The p-value CAN be used as the probability that the null hypothesis is true - but it doesn't mean anything except about the null hypothesis.

So, a value of 0.05 chosen to indicate 'statistical significance' means that having a probability of less than 5%
THAT THE NULL HYPOTHESIS IS TRUE
was deemed sufficient to indicate that the results WERE NOT DUE TO CHANCE (i.e. being significant).
But it doesn't say anything about what actually DID cause the results. Only that something other than CHANCE is involved.

Jay Kosta
Endwell NY USA
Actually, choosing 0.05 does nothing to eliminate chance. All being less than 0.05 says is the chance of the results being due to chance is less than 1 in 20 (or, being less than 0.01 means less than 1 in 100). Chance in the results is never totally eliminated. That is why it is so "stupid" in studies such as these to have this black and white cut-off where data that has an 85%, 90%, or 92% chance of not being due to chance is deemed to have failed while data that has a 95% chance of being not due to chance is deemed to have been "proven" true. But, that is how people who do not understand this stuff (many here) use these statistics. The whole idea of doing statistics is to help us make sense of sometimes very confusing data. The p value helps us to get a sense as to how good the "difference" being looked at is. Trying to turn it into an either/or (either it passes or not) situation using some arbitrary cut-off does a disservice to the study.
 
Jun 1, 2014
385
0
0
Re: Re:

FrankDay said:
JayKosta said:
The 'null hypothesis' is that 'the results are a matter of chance'.
The p-value CAN be used as the probability that the null hypothesis is true - but it doesn't mean anything except about the null hypothesis.

So, a value of 0.05 chosen to indicate 'statistical significance' means that having a probability of less than 5%
THAT THE NULL HYPOTHESIS IS TRUE
was deemed sufficient to indicate that the results WERE NOT DUE TO CHANCE (i.e. being significant).
But it doesn't say anything about what actually DID cause the results. Only that something other than CHANCE is involved.

Jay Kosta
Endwell NY USA
Actually, choosing 0.05 does nothing to eliminate chance. All being less than 0.05 says is the chance of the results being due to chance is less than 1 in 20 (or, being less than 0.01 means less than 1 in 100). Chance in the results is never totally eliminated. That is why it is so "stupid" in studies such as these to have this black and white cut-off where data that has an 85%, 90%, or 92% chance of not being due to chance is deemed to have failed while data that has a 95% chance of being not due to chance is deemed to have been "proven" true. But, that is how people who do not understand this stuff (many here) use these statistics. The whole idea of doing statistics is to help us make sense of sometimes very confusing data. The p value helps us to get a sense as to how good the "difference" being looked at is. Trying to turn it into an either/or (either it passes or not) situation using some arbitrary cut-off does a disservice to the study.

Why don't you educate us on the history of p-values and why 5% and 1% are fairly standard. Please do this with your own writing, not some wiki link that you don't understand. Thanks.
 
Sep 23, 2010
3,596
1
0
Re: Re:

King Boonen said:
King Boonen said:
1. Actually Frank that is exactly what you have done by attempting to apply a continuous probability to a p-value and it is complete and total rubbish. A p-value is a yes/no statistic.
FrankDay said:
Phooey! https://en.wikipedia.org/wiki/P-value
Before the test is performed, a threshold value is chosen, called the significance level of the test, traditionally 5% or 1%...An equivalent interpretation is that p-value is the probability of obtaining the observed sample results,

You clearly do not understand what you are talking about Frank or you are trolling and misdirecting, I'm going with the second two options otherwise you wouldn't have replied. You have applied a CONTINUOUS PROBABILITY to a discrete statistic. To do this with a p-value you would need to have 100 hypotheses and apply a separate limit to each one and it is complete and utter junk.

Your edit is trolling and will be reported as such, as you have left out the fact that this is applicable when THE NULL HYPOTHESIS IS TRUE (i.e. there IS NO SIGNIFICANT DIFFERENCE), it has nothing to do with the interpretation of the data that you are attempting to apply it to. The worst thing is you have quoted the wikipedia page verbatim in your next post, which includes this:

wikipedia said:
The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either. In fact, frequentist statistics does not and cannot attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero and the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily), Lindley's paradox. There are also a priori probability distributions in which the posterior probability and the p-value have similar or equal values.[18]

It's stated right there, in the very first sentence of point one of common misunderstandings, you'll notice I've not tried to edit any of it as I don't have to. You have the gall to quote something you clearly do not understand and then suggest others should read it, maybe you should.
The p value is calculated only for the data being analyzed. All it tells us is the probability of any difference being observed due to chance or not. The problem is data that doesn't meet this arbitrary "significance" level gets deemed as totally useless and we can end up with a type 2 error. Studies such as these are not black and white but, rather, nuanced and we should be allowed to determine for ourselves (and debate amongst ourselves) what is significant or not based on the study design.
King Boonen said:
2. No, it does not. Again you are attempting to apply a linear scale of significance to a p-value and that cannot be done. All it allows you to do is reject the null hypothesis if it is less than the chosen level of significance, it is not in any way related to "how sure" they can be of the results as it is purely a statistical calculation and may not relate to a real difference anyway. This is a version of the prosecutor's fallacy.
FrankDay said:
Again, from the article.
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results,

Again, this is trolling, refer to my previous point. You have taken a section of a webpage, edited it and applied it to something it is not referring to.
It is not trolling. It is simply a different way of looking at the data and according to the article an "equivalent" way in one section and a better way in another.
King Boonen said:
3. Standard practice when the results do not allow you to reject the null hypothesis is to consider increasing sample size or redesign your experiment but a p-value will give you no indication of whether this will be successful as IT IS NOT A CONTINUOUS STATISTIC. Repeating the exact same experiment is extremely unlikely to give the same p-value.
FrankDay said:
I agree that repeating the exact same study is unlikely to give the exact same result and that the p-value gives no indication, in and of itself, as to whether changing the study design might affect the outcome, but if one can, from their education and experience and from the hypothesis they are trying to test, discern a reason why the study didn't reach the arbitrary significance level, then one might be able to redesign the study to see if they are correct or not.

You even state it here, right there in your own sentence. I've even made it bold for you. If the p-value cannot give any indication as to whether an experimental design change will change a result to be significant, IT CANNOT BE RELATED TO THE PERCENTAGE CHANCE OF ACHIEVING A RESULT, by the very definition.
P values only relate to the data being analyzed, nothing else. The only way to resolve the issue of inconsistent data is to get more data, either repeat the study or redesign the study to have more power to, hopefully, resolve any issue.
FrankDay said:
Failure of a study to reach the arbitrary significance level is not evidence that the hypothesis is incorrect per se, only that the study as completed did not demonstrate the difference required by the arbitrary choice of the study design. But, if a trend is seen in the data then it is reasonable to look to see if a different design (more subjects, more time, etc) might uncover the "truth". It is why when studies are published they also include the methods and the raw data so others might see errors in the design or interpretation that might lead to better follow on studies. If all we got was "I studied this and found no difference" what does that mean?

It means that based on the statistics used and the limits applied THERE IS NO SIGNIFICANT DIFFERENCE IN THE DATA. You can look at the data and interpret it in any way you want, but you cannot attempt to use the p-values in any other way than a discrete, yes/no statistic. That is what a p-value is. It is pretty much possible to get any answer you want out of a data set by over-fitting your statistics; that's why researchers decide up front what statistics they will use and what their confidence limits will be. You would know this if you had ever actually been involved in any research.
The problem is the SIGNIFICANT DIFFERENCE label is an arbitrary one. Using it in this way to reject data that is close to but not beyond the cut-off leads to a high likelihood of a type 2 error being made.
FrankDay said:
The most difficult part of doing a study is the interpretation of the data. Simply looking at whether a study reaches the arbitrary statistical significance cut-off level as the only indicator of the study's worth is the lazy way out.

No it is not. It is applying the statistics chosen at the start of the experiment to test the hypothesis put forward. There are methods you can use if you want a continuous interpretation of your data, I even pointed that out in my reply to you (Bayesian Statistics) but you have completely ignored that as I'm guessing you do not even know what it is. You are attempting to cherry-pick results based on a complete misunderstanding (which I believe is purposeful) of how the statistics applied can be interpreted. I see this constantly in poorly written papers and reports that get rejected. The p-value is used because it is very simple to calculate, but it is one of the most misunderstood and misinterpreted statistics used because of this. You cannot interpret it in the way you are attempting to do.
Sure I can because it is a reasonable use of the statistic as stated in the Wikipedia article. All I am saying is that if data shows a difference, all the p value does is tell the reader what the probability is that the observed difference is due to chance (randomness). This, of course, requires the reader to do some thinking about the study and how the data was arrived at to make a determination as to importance rather than this totally arbitrary "statistical significance" cut-off which makes the decision for the lazy reader (many of whom hang out here).
 
Jun 18, 2015
171
2
8,835
Re: New study shows leg flexion less efficient than extension

Out of curiosity, I have two questions for Frank Day:
1. Did you think of independent crank cycling on your own or did you happen to see the erg that Jeff Broker built at the OTC back in the early 90s?
2. How many pairs of your cranks have you sold over the years? I see a rider now and then around here with your cranks so you must sell a fair few.
You may not believe it, but I respect the fact that you have gone into production and sell a product (regardless of what I think of the product itself). I have a company and have been absolutely astonished at how hard launching a manufacturing and sales business can be.
 
Sep 23, 2010
3,596
1
0
Re: Re:

Alex Simmons/RST said:
FrankDay said:
Alex Simmons/RST said:
Like I said before, it's just more statistical comedy.

Frank, you can't (legitimately) use p values in the manner you have. It's no more complex than that.

Doing so demonstrates that you either don't understand this, or if you do, that you are deliberately attempting to mislead others.
Well, according to the article I linked it is preferable to tell people what the P value is and let them draw their own conclusion as to the worth of the data rather than to draw some arbitrary boundary that is or is not crossed to define significance. You are the folks that are misusing this data trying to imply these studies mean more than they do.
I agree that providing the p value is a good idea as well as the data so people can draw their own conclusions.

But what you can't do is suggest the p value tells us what you say it does. You are either making a fundamental mistake in interpreting the p value, or you are being deliberately misleading. Which is it Frank?

Here's a neat way of showing this common mistake:
http://www.graphpad.com/guides/prism/6/statistics/index.htm?interpreting_a_large_p_value_from_an_unpaired_t_test.htm
What do you think I am saying? All the p value does is say what the probability is that the data being looked at is due to chance. Isn't that correct? In the "new" study I posted there was a power improvement difference between the two groups with a p=.125. Why don't you tell everyone what that means to you (other than it didn't reach the statistical significance cut-off of 0.05). The one thing that site you linked to keeps harping on is that data needs to be interpreted in context. That is the problem with the "statistical significance" cut-off, it allows the reader to be lazy and ignore the context.
 
Sep 23, 2010
3,596
1
0
Re: New study shows leg flexion less efficient than extension

PhitBoy said:
Out of curiosity, I have two questions for Frank Day:
1. Did you think of independent crank cycling on your own or did you happen to see the erg that Jeff Broker built at the OTC back in the early 90s?
2. How many pairs of your cranks have you sold over the years? I see a rider now and then around here with your cranks so you must sell a fair few.
You may not believe it, but I respect the fact that you have gone into production and sell a product (regardless of what I think of the product itself). I have a company and have been absolutely astonished at how hard launching a manufacturing and sales business can be.
It doesn't matter for the purpose of this discussion. Why don't you tell me what your experience with the product (or concept) is and where you developed your bias against the concept?
 
Jul 25, 2012
12,967
1,970
25,680
FrankDay said:
The p value is calculated only for the data being analyzed. All it tells us is the probability of any difference being observed due to chance or not. The problem is data that doesn't meet this arbitrary "significance" level gets deemed as totally useless and we can end up with a type 2 error. Studies such as these are not black and white but, rather, nuanced and we should be allowed to determine for ourselves (and debate amongst ourselves) what is significant or not based on the study design.

This is a complete reversal of what your previous posts claimed and makes it clear that you are trolling. It has no reference to what you replied to, that a p-value is a discrete test and cannot be used to assign a probability that the null hypothesis is true or false, so I have removed the quote but it remains upthread. No one stops people interpreting data however they want; a p-value is derived from a statistical test and has nothing to do with the veracity of the actual hypothesis, it just determines if there is a statistical difference in the analysis based on the pre-defined limit.

If you want to argue the limit then actually do some of your own research and try and get it published, but you will find it very hard to argue that increasing the likely rate of false positives above what is currently accepted is a good thing.



King Boonen said:
2. No, it does not. Again you are attempting to apply a linear scale of significance to a p-value and that cannot be done. All it allows you to do is reject the null hypothesis if it is less than the chosen level of significance, it is not in any way related to "how sure" they can be of the results as it is purely a statistical calculation and may not relate to a real difference anyway. This is a version of the prosecutor's fallacy.
FrankDay said:
Again, from the article.
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results,

Again, this is trolling, refer to my previous point. You have taken a section of a webpage, edited it and applied it to something it is not referring to.
FrankDay said:
It is not trolling. It is simply a different way of looking at the data and according to the article an "equivalent" way in one section and a better way in another.

No it is not, it is trolling and misdirection and will be reported as such. It has been pointed out several times that it is not an "equivalent" way to look at the data and it is written in the article that it is not. It refers to the calculation of the statistic only and has nothing to do with the chances of the hypothesis being true or false. This is literally written in the article you quoted and continue to quote, it could not be clearer. It is trolling, nothing more and nothing less.

King Boonen said:
3. Standard practice when the results do not allow you to reject the null hypothesis is to consider increasing sample size or redesign your experiment but a p-value will give you no indication of whether this will be successful as IT IS NOT A CONTINUOUS STATISTIC. Repeating the exact same experiment is extremely unlikely to give the same p-value.
FrankDay said:
I agree that repeating the exact same study is unlikely to give the exact same result and that the p-value gives no indication, in and of itself, as to whether changing the study design might affect the outcome, but if one can, from their education and experience and from the hypothesis they are trying to test, discern a reason why the study didn't reach the arbitrary significance level, then one might be able to redesign the study to see if they are correct or not.

You even state it here, right there in your own sentence. I've even made it bold for you. If the p-value cannot give any indication as to whether an experimental design change will change a result to be significant, IT CANNOT BE RELATED TO THE PERCENTAGE CHANCE OF ACHIEVING A RESULT, by the very definition.
FrankDay said:
P values only relate to the data being analyzed, nothing else. The only way to resolve the issue of inconsistent data is to get more data, either repeat the study or redesign the study to have more power to, hopefully, resolve any issue.

This sentence is meaningless, the data is only inconsistent because it doesn't say what you have pre-determined it should say. This is quite possibly the biggest and best example of bias I have ever seen posted on these boards and that includes reading the clinic.

No one is going to argue that repeating a study with larger numbers is a bad thing. Many people will argue that repeating a study only because the data doesn't agree with your pre-determined bias is a bad thing.


FrankDay said:
Failure of a study to reach the arbitrary significance level is not evidence that the hypothesis is incorrect per se, only that the study as completed did not demonstrate the difference required by the arbitrary choice of the study design. But, if a trend is seen in the data then it is reasonable to look to see if a different design (more subjects, more time, etc) might uncover the "truth". It is why when studies are published they also include the methods and the raw data so others might see errors in the design or interpretation that might lead to better follow on studies. If all we got was "I studied this and found no difference" what does that mean?
King Boonen said:
It means that based on the statistics used and the limits applied THERE IS NO SIGNIFICANT DIFFERENCE IN THE DATA. You can look at the data and interpret it in any way you want, but you cannot attempt to use the p-values in any other way than a discrete, yes/no statistic. That is what a p-value is. It is pretty much possible to get any answer you want out of a data set by over-fitting your statistics; that's why researchers decide up front what statistics they will use and what their confidence limits will be. You would know this if you had ever actually been involved in any research.
The problem is the SIGNIFICANT DIFFERENCE label is an arbitrary one. Using it in this way to reject data that is close to but not beyond the cut-off leads to a high likelihood of a type 2 error being made.

No it doesn't. The value of the p-value cannot be related to the likelihood of accepting or rejecting the null hypothesis, or the error rates α and β. That is exactly what you are trying to do here and it is nonsense. It's right there, in the article you keep referring to:

wikipedia said:
The p-value refers only to a single hypothesis, called the null hypothesis and does not make reference to or allow conclusions about any other hypotheses, such as the alternative hypothesis in Neyman–Pearson statistical hypothesis testing. In that approach, one instead has a decision function between two alternatives, often based on a test statistic, and computes the rate of Type I and type II errors as α and β. However, the p-value of a test statistic cannot be directly compared to these error rates α and β. Instead, it is fed into a decision function.

I've made it bold for you to make it easy.


FrankDay said:
The most difficult part of doing a study is the interpretation of the data. Simply looking at whether a study reaches the arbitrary statistical significance cut-off level as the only indicator of the study's worth is the lazy way out.
King Boonen said:
No it is not. It is applying the statistics chosen at the start of the experiment to test the hypothesis put forward. There are methods you can use if you want a continuous interpretation of your data, I even pointed that out in my reply to you (Bayesian Statistics) but you have completely ignored that as I'm guessing you do not even know what it is. You are attempting to cherry-pick results based on a complete misunderstanding (which I believe is purposeful) of how the statistics applied can be interpreted. I see this constantly in poorly written papers and reports that get rejected. The p-value is used because it is very simple to calculate, but it is one of the most misunderstood and misinterpreted statistics used because of this. You cannot interpret it in the way you are attempting to do.
Sure I can because it is a reasonable use of the statistic as stated in the Wikipedia article. All I am saying is that if data shows a difference, all the p value does is tell the reader what the probability is that the observed difference is due to chance (randomness). This, of course, requires the reader to do some thinking about the study and how the data was arrived at to make a determination as to importance rather than this totally arbitrary "statistical significance" cut-off which makes the decision for the lazy reader (many of whom hang out here).

It is not a reasonable use of the statistic, the statistic cannot be used to assign a probability of the null hypothesis being true or false, this has been pointed out several times and is in the article you have quoted. You CANNOT use a p-value in this way, it's written in plain English for you to see yet you insist on conflating the chance of the statistic being random with the statistic giving an indication about the truth of the hypothesis. It's utter rubbish.

The statistic knows nothing about the hypothesis, it cannot assess the likelihood that the hypothesis is correct, it can only tell you if that statistic has occurred by chance based on the pre-determined limit.
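
For anyone wanting to see how α and β actually enter the picture, here is a rough Python sketch (the 0.5 SD effect size and the group sizes are assumptions chosen purely for illustration): both error rates are properties of the design, estimated before any data exist, and neither is read off an observed p-value.

Code:
import numpy as np
from scipy import stats

def estimated_power(effect_sd, n_per_group, alpha=0.05, runs=5000, seed=4):
    """Estimate power (1 - beta) for a two-sample t-test by simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(runs):
        a = rng.normal(effect_sd, 1.0, size=n_per_group)
        b = rng.normal(0.0, 1.0, size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / runs

for n in (9, 20, 50):
    power = estimated_power(effect_sd=0.5, n_per_group=n)
    print(f"n = {n:2d} per group: power ~ {power:.2f}, beta ~ {1 - power:.2f}")

With small groups, β (the chance of missing a real effect of that size) is large, which is the usual argument for bigger or longer studies rather than for re-reading an observed p-value as a percentage.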
 
Nov 25, 2010
1,175
68
10,580
Re: Re:

FrankDay said:
...
The one thing that site you linked to keeps harping on is that data needs to be interpreted in context. That is the problem with the "statistical significance" cut-off, it allows the reader to be lazy and ignore the context.
---------------------------------
Yes the 'context' of the data is important.

If the p-value indicates that the results are statistically significant (or the degree to which they are not), then it is necessary to investigate the METHODS (i.e. the context) of the testing that yielded those results.
e.g. Was the testing done with good enough controls to eliminate other factors that could affect the results?
Was enough data collected and displayed so that other factors could be detected? And if other factors did seem to be present, were they discussed and analyzed?

Jay Kosta
Endwell NY USA
 
Nov 25, 2010
1,175
68
10,580
Re:

King Boonen said:
...
It is not a reasonable use of the statistic, the statistic cannot be used to assign a probability of the null hypothesis being true or false, this has been pointed out several times and is in the article you have quoted. You CANNOT use a p-value in this way,
...
-------------------------------------------
From the wiki article - https://en.wikipedia.org/wiki/P-value -
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results, or "more extreme" results, when the null hypothesis is actually true (here, "more extreme" is dependent on the way the hypothesis is tested).[2]

I think the article says that
IF the null hypothesis is TRUE.
THEN the p-value is the probability of getting the observed results

similarly,
If the results are due to chance
then the probability of getting the observed results is the p-value.

Jay Kosta
Endwell NY USA
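
That reading can be checked directly in Python (assuming scipy is available; the sample itself is simulated for the example): the p-value is just the tail probability of the test statistic, worked out under the assumption that the null hypothesis is true.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(0.4, 1.0, size=15)        # one sample, tested against a mean of 0

# t statistic computed as if H0 (true mean = 0) holds
t = x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
# two-sided tail area of that statistic under H0
p_by_hand = 2 * stats.t.sf(abs(t), df=len(x) - 1)

print("p by hand:", round(p_by_hand, 4))
print("p scipy  :", round(stats.ttest_1samp(x, 0.0).pvalue, 4))   # same number

Nothing in that calculation ever asks how likely the null hypothesis itself is; the "if the null is true" part is baked in from the start.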
 
Jun 18, 2015
171
2
8,835
Re: New study shows leg flexion less efficient than extension

FrankDay said:
It doesn't matter for the purpose of this discussion. Why don't you tell me what your experience with the product (or concept) is and where you developed your bias against the concept?

Actually, I think it does matter. If it is truly your BrainChild, I can understand how you would be so devoted to the idea. According to Hoovers your company has $290k in sales. Congrats on that.

You may find this hard to believe but I am not biased. I am simply acknowledging what the data tell me as I have laid out in previous posts. To me, and I think to anyone with scientific training in biomechanics and/or neuromuscular function, the main message from research in this area is that the use of non-counter-weighted single leg or uncoupled crank training is not beneficial. It may in fact be negative. If the data were different I would change my position accordingly and would train on your cranks (or make a pair like this guy did http://www.instructables.com/id/Powercranks-for-less-than-600/).
 
Sep 23, 2010
3,596
1
0
Re: Re:

JayKosta said:
FrankDay said:
...
The one thing that site you linked to keeps harping on is that data needs to be interpreted in context. That is the problem with the "statistical significance" cut-off, it allows the reader to be lazy and ignore the context.
---------------------------------
Yes the 'context' of the data is important.

If the p-value indicates that the results are statistically significant (or the degree to which they are not), then it is necessary to investigate the METHODS (i.e. the context) of the testing that yielded those results.
e.g. Was the testing done with good enough controls to eliminate other factors that could affect the results?
Was enough data collected and displayed so that other factors could be detected? And if other factors did seem to be present, were they discussed and analyzed?

Jay Kosta
Endwell NY USA
And what does it mean that the null hypothesis is rejected? When it is rejected or not, it is rejected or not for this specific set of data under these specific circumstances, in that the difference seen did or didn't meet the arbitrary standard selected to test the null hypothesis. It does not mean, for these "uncoupled cranks studies", that failure to meet this standard "proves" that uncoupled cranks don't work. Fergie has come here and said that very thing many times, and these supposed statistics gurus have never once corrected him on this fundamental error, yet they are willing to rake me over the coals for missing some nuance deep in the statistical jargon. Are we ever going to be able to have a reasonable and balanced discussion regarding this stuff?
 
Sep 23, 2010
3,596
1
0
Re: New study shows leg flexion less efficient than extension

PhitBoy said:
FrankDay said:
It doesn't matter for the purpose of this discussion. Why don't you tell me what your experience with the product (or concept) is and where you developed your bias against the concept?

Actually, I think it does matter. If it is truly your BrainChild, I can understand how you would be so devoted to the idea. According to Hoovers your company has $290k in sales. Congrats on that.

You may find this hard to believe but I am not biased. I am simply acknowledging what the data tell me as I have laid out in previous posts. To me, and I think to anyone with scientific training in biomechanics and/or neuromuscular function, the main message from research in this area is that the use of non-counter-weighted single leg or uncoupled crank training is not beneficial. It may in fact be negative. If the data were different I would change my position accordingly and would train on your cranks (or make a pair like this guy did http://www.instructables.com/id/Powercranks-for-less-than-600/).
It wouldn't matter if I came up with the idea or not. I have a lot of experience observing what the concept does and doesn't do. Trying to explain what I observe (isn't that what a good scientist does?) has led me to my beliefs. When I came up with the idea (and I did come up with the idea independently) I had no clue that it would be so powerful; I thought we might see 10% improvements. What we generally see usually far exceeds that. It is these huge improvements that make the concept so "unbelievable" to those who have never experienced it. Trying to explain what I observe is what has led me to my ideas.

And, your observation of the data is just that. It is awful data. It takes a long time to see these big benefits. Studies on the cranks last 5-6 weeks. Most people are just getting comfortable with the cranks by this time. And, your concept of what the cranks do is completely wrong. Uncoupled cranks are nothing like one-legged pedaling (because there is a counterweight on the other side making it unnecessary to unweight with additional force) or like counterweighted one-legged pedaling (because the other side is unweighting, so it is unnecessary to push down harder because of the counterweight on the other side). The OP study has almost zero relevance to uncoupled pedaling, but those who have no experience with the product, yet think they know what it does, keep trying to put out that it does. They refuse to listen to the person who invented the product, thinking I am trying to misrepresent it. LOL.

Basically every study that has looked at the product has shown a difference between the product and the control. The problem is few have reached the "statistical significance" standard. That is because of the study design: the improvement is generally too small in 5-6 weeks and the numbers too small to reach the standard. Look at the raw data and context of these studies.

There actually was a study where the PC group did reach statistical significance compared to control and to itself. The problem with it, though, is the author pretty much ignored this finding. Look at Burns' Master's thesis http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=1017&context=theses Check out paragraph 5.2
This resulted in a significant interaction between the groups over time (Mean ± SD values can be seen in Figure 7).
and figure 7. If you will note in figure 7 there is a big difference in the PC group between riding on PC's and riding on normal cranks. The difference narrows somewhat in the 5 weeks of the study but the goal of the PC training is to get the rider to the point where they will look the same whether on PC's or regular cranks. This one graphic, when understood, explains why 5-6 weeks is an inadequate time to evaluate this product.

Anyhow, you are biased because you are making judgments with inadequate knowledge of the subject.
 
Jul 25, 2012
12,967
1,970
25,680
Re: Re:

JayKosta said:
King Boonen said:
...
It is not a reasonable use of the statistic, the statistic cannot be used to assign a probability of the null hypothesis being true or false, this has been pointed out several times and is in the article you have quoted. You CANNOT use a p-value in this way,
...
-------------------------------------------
From the wiki article - https://en.wikipedia.org/wiki/P-value -
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results, or "more extreme" results, when the null hypothesis is actually true (here, "more extreme" is dependent on the way the hypothesis is tested).[2]

I think the article says that
IF the null hypothesis is TRUE.
THEN the p-value is the probability of getting the observed results

similarly,
If the results are due to chance
then the probability of getting the observed results is the p-value.

Jay Kosta
Endwell NY USA

This is true Jay, notice it only refers to the result under a certain condition, it does not refer to the likelihood of a hypothesis being true or false, but it is not what Frank is trying to do. It is this rubbish that started this conversation:

FrankDay said:
Many of you will note that they show "no difference". Of course, there are differences but they just don't reach the P<.05 level of significance.
For instance: Gross efficiency in the PC group improved from 19.7 to 20.9 (a 6% improvement) while the control group improved from 19.8 to 20.3 (a 2.5% improvement). This difference only reached the 0.25 level of significance. So, there is a 1 in 4 chance this difference is due to chance or a 3 in 4 chance (75%) the differences are real.

Then, time-trial power. The PC group improved from 284 to 298 watts (5%) while the control group improved from 274 to 281 watts (2.5%). This difference only reached the 0.125 level of significance. So, there is a 1 in 8 chance this difference is due to chance or a 7 in 8 chance (87.5%) the differences are real.

He was attempting to use the p-value to assign a probability that the hypothesis is true. This is categorically wrong. This statistic cannot attach probabilities to hypotheses. This is clearly stated in the article:

wikipedia said:
The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either. In fact, frequentist statistics does not and cannot attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero and the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability that would explain the results more easily), Lindley's paradox. There are also a priori probability distributions in which the posterior probability and the p-value have similar or equal values.[18]
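
The Lindley point in that quote can be made concrete with a toy Bayesian calculation in Python (the sample size, observed mean, prior width and 50:50 prior odds are all assumptions for illustration, not taken from any study in this thread):

Code:
import numpy as np
from scipy import stats

n, sigma = 50, 1.0            # assumed sample size and known spread
xbar = 0.28                   # assumed observed mean difference

se = sigma / np.sqrt(n)
z = xbar / se
p_value = 2 * stats.norm.sf(abs(z))      # classical two-sided p-value

# H0: mu = 0  versus  H1: mu ~ Normal(0, tau^2), prior odds 50:50
tau = 1.0
like_h0 = stats.norm.pdf(xbar, loc=0.0, scale=se)
like_h1 = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))
posterior_h0 = like_h0 / (like_h0 + like_h1)

print(f"p-value = {p_value:.3f}, posterior P(H0 | data) = {posterior_h0:.2f}")

With these numbers the p-value comes out just under 0.05 while the posterior probability of the null is still about one half, so "p = 0.05" and "95% chance the difference is real" are clearly not the same statement.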
 
Sep 23, 2010
3,596
1
0
Re: Re:

King Boonen said:
JayKosta said:
King Boonen said:
...
It is not a reasonable use of the statistic, the statistic cannot be used to assign a probability of the null hypothesis being true or false, this has been pointed out several times and is in the article you have quoted. You CANNOT use a p-value in this way,
...
-------------------------------------------
From the wiki article - https://en.wikipedia.org/wiki/P-value -
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results, or "more extreme" results, when the null hypothesis is actually true (here, "more extreme" is dependent on the way the hypothesis is tested).[2]

I think the article says that
IF the null hypothesis is TRUE.
THEN the p-value is the probability of getting the observed results

similarly,
If the results are due to chance
then the probability of getting the observed results is the p-value.

Jay Kosta
Endwell NY USA

This is true Jay, notice it only refers to the result under a certain condition, it does not refer to the likelihood of a hypothesis being true or false, but it is not what Frank is trying to do. It is this rubbish that started this conversation:

FrankDay said:
Many of you will note that they show "no difference". Of course, there are differences but they just don't reach the P<.05 level of significance.
For instance: Gross efficiency in the PC group improved from 19.7 to 20.9 (a 6% improvement) while the control group improved from 19.8 to 20.3 (a 2.5% improvement). This difference only reached the 0.25 level of significance. So, there is a 1 in 4 chance this difference is due to chance or a 3 in 4 chance (75%) the differences are real.

Then, time-trial power. The PC group improved from 284 to 298 watts (5%) while the control group improved from 274 to 281 watts (2.5%). This difference only reached the 0.125 level of significance. So, there is a 1 in 8 chance this difference is due to chance or a 7 in 8 chance (87.5%) the differences are real.

He was attempting to use the p-value to assign a probability that the hypothesis is true. This is categorically wrong.
Where on earth do you see me saying that???!!! Where do I mention null hypothesis? What I say is that the p sets the probability that this data is true. Nothing more, nothing less.
 
Jul 25, 2012
12,967
1,970
25,680
Re: Re:

FrankDay said:
King Boonen said:
JayKosta said:
King Boonen said:
...
It is not a reasonable use of the statistic, the statistic cannot be used to assign a probability of the null hypothesis being true or false, this has been pointed out several times and is in the article you have quoted. You CANNOT use a p-value in this way,
...
-------------------------------------------
From the wiki article - https://en.wikipedia.org/wiki/P-value -
An equivalent interpretation is that p-value is the probability of obtaining the observed sample results, or "more extreme" results, when the null hypothesis is actually true (here, "more extreme" is dependent on the way the hypothesis is tested).[2]

I think the article says that
IF the null hypothesis is TRUE.
THEN the p-value is the probability of getting the observed results

similarly,
If the results are due to chance
then the probability of getting the observed results is the p-value.

Jay Kosta
Endwell NY USA

This is true Jay, notice it only refers to the result under a certain condition, it does not refer to the likelihood of a hypothesis being true or false, but it is not what Frank is trying to do. It is this rubbish that started this conversation:

FrankDay said:
Many of you will note that they show "no difference". Of course, there are differences but they just don't reach the P<.05 level of significance.
For instance: Gross efficiency in the PC group improved from 19.7 to 20.9 (a 6% improvement) while the control group improved from 19.8 to 20.3 (a 2.5% improvement). This difference only reached the 0.25 level of significance. So, there is a 1 in 4 chance this difference is due to chance or a 3 in 4 chance (75%) the differences are real.

Then, time-trial power. The PC group improved from 284 to 298 watts (5%) while the control group improved from 274 to 281 watts (2.5%). This difference only reached the 0.125 level of significance. So, there is a 1 in 8 chance this difference is due to chance or a 7 in 8 chance (87.5%) the differences are real.

He was attempting to use the p-value to assign a probability that the hypothesis is true. This is categorically wrong.
Where on earth do you see me saying that???!!! Where do I mention null hypothesis? What I say is that the p sets the probability that this data is true. Nothing more, nothing less.

I didn't say you mentioned the null hypothesis but you don't have to as you are attempting to assign a percentage to the chance of differences being real based on a p-value.
You keep making this false statement that I have bolded, it does nothing of the kind and this has been pointed out to you over and over. Yet more trolling.
 
Nov 25, 2010
1,175
68
10,580
King Boonen said:
...
This is true Jay, notice it only refers to the result under a certain condition, it does not refer to the likelihood of a hypothesis being true or false,
...
He was attempting to use the p-value to assign a probability that the hypothesis is true. This is categorically wrong. This statistic cannot attach probabilities to hypotheses. This is clearly stated in the article:
--------------------------------------------------
I agree, it's a complex and subtle difference (well to me anyhow...) that the
p-value indicates the likelihood of the TEST DATA being obtained WHEN the null hypothesis is TRUE.
And NOT that the p-value (from the test data) indicates the likelihood of the null hypothesis BEING true.

Jay Kosta
Endwell NY USA
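
A short counting simulation in Python makes the direction of that conditional clear (the 80% share of true nulls, the effect size and the group size are all invented for the example):

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
runs, n = 20_000, 10
null_true = rng.random(runs) < 0.8          # assume 80% of tested ideas are duds
significant = np.empty(runs, dtype=bool)

for i in range(runs):
    effect = 0.0 if null_true[i] else 0.8   # a real effect only when the null is false
    a = rng.normal(effect, 1.0, size=n)
    b = rng.normal(0.0, 1.0, size=n)
    significant[i] = stats.ttest_ind(a, b).pvalue < 0.05

print("P(p < 0.05 | null true):", round(significant[null_true].mean(), 3))   # ~0.05
print("P(null true | p < 0.05):", round(null_true[significant].mean(), 3))   # much larger

The first number is what the p-value controls; the second is what people usually want to know, and it depends on things the p-value cannot see.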
 
Apr 21, 2009
3,095
0
13,480
And the Performance Artist nicely distracts people from the OP study, which showed that a cyclist who pedalled more like a Gimmickcranker for seven years is more efficient when he changes to a system that allows him to pedal more like a masher. Let's not lose sight of that!

As a coach I find this type of research gold, as it shows me that what I am teaching my riders, and most importantly what I am not wasting their time on, is backed by good evidence-based data.
 
Nov 25, 2010
1,175
68
10,580
Re:

CoachFergie said:
...
is more efficient when he changes to a system that allows him to pedal more like a masher.
...
----
Yes, the counter-weight 'could' allow him to pedal more like a masher, and it just as well 'could' allow him to pedal more like 'circular pedaling', or more like un-coupled crank pedaling (which doesn't require strong positive torque on the upstroke).

What do you see in the OP abstract/article that suggests that his style with counter-weight was 'mashing'?

Jay Kosta
Endwell NY USA