Agreement to not test LA on Ironman?

acoggan said:
Actually, that is a very interesting spin on things: anyone who has knocked around the scientific world at all knows that it would be up to the journal editors to make any decisions re. a retraction, not the original authors (or in this case, author). IOW, only someone really inexperienced would approach another scientist and ask them (no matter how nicely) to retract a paper (and only someone really naive would ask Ed Coyle that! :eek:). I'm therefore guessing things didn't go down quite as portrayed in that interview...that is, either Ashenden et al. approached the journal first (or everything was run through the journal), or they first approached Coyle much more mildly, and things only went downhill after that, or something.

I understand the process, as presented by The Science of Sport (one of the many who were highly critical of the study), is as follows:

...when a research study is sent off for review, it is usually sent by the editor of the journal to a few reviewers - experts in the field, who cast their eye over it to make sure it is well-controlled and worthy of publication. The Coyle study of Armstrong was submitted on February 22nd. It was accepted three weeks later. Of course, short turnover times are not unique, but this is remarkable. Remember, this is a study so criticized that it inspired two SEPARATE letters in response, and trust us, letters like these are not all that common in the publication process. It became a case study here at UCT in how not to write a case report, and as mentioned in our first post, was criticized widely at conferences.

It appears JAP did not send the study off to reviewers before publication. (Did JAP receive a donation from Livestrong? :)) Then the critical letters flowed back to JAP on March 17:

The first response and criticism was swift, and came from two sources. First, David Martin and colleagues from Australia wrote a letter titled "Has Armstrong's cycle efficiency improved?". This was accompanied by a letter from Yorck Olaf Schumacher and his colleagues titled "Scientific considerations for physiological evaluations of elite athletes".

Essentially, these letters criticized the study design, the method and the scientific process followed, including the conclusions.
 
AngusW said:
The timings seem a bit strange. LA Confidential was published in 2004. Ed Coyle testifies for LA in November 2005. When exactly did he have the conversation with Greg Lemond in which Lemond enlightened Coyle about Michele Ferrari? If it was prior to the SCA hearing, why did Coyle testify for LA, being sickened as he was by the news that LA was with Ferrari?

Coyle was a paid witness.
 
Velodude said:
Speculations. I do not believe the public was privy to the complaint by peer scientists to the University for scientific misconduct by Dr. Edward Coyle.

Under the law, such information cannot be disclosed.

Velodude said:
We do know the complaint was dismissed.

Yes, although I do recall one of the administrators at UT-Austin being quoted as publicly admonishing Coyle.

Velodude said:
However, we do know the background:

1. There was no intention to undertake a study on Armstrong 1992-1999. Armstrong was opportunistic in turning up at the lab for testing unrelated to any study. Hence the inconsistency of tests, data and times within a cycling season.

Absolutely correct, but 1) no one has ever claimed otherwise, and 2) just because the data were a sample of convenience doesn't automatically mean they shouldn't have been published (obviously, the original reviewers thought they were worthy of publication).

Velodude said:
2. 5 years after these testing results lay undisturbed in a filing cabinet, Armstrong had public image problems relating to doping commencing with the publication of LA Confidentiel - Les secrets de Lance Armstrong by Walsh and Ballester in 2004.

Hold that thought...

Velodude said:
3. In February 2005 Dr Edward Coyle cobbled together the loose data 1992-1999 into a "scientific" study to justify Armstrong's performance development.

Not true: Ed pulled the data together in order to present it at a small scientific meeting held here in St. Louis in the spring of 2002.

Velodude said:
The study was approved by the university in a lightning 3 weeks without being presented for peer and editorial review!

Again, incorrect, or at the very least, confused. While the university's Institutional Review Board would have had to approve even a retrospective data review such as this one, it wouldn't be unusual (back then, anyway...things have gotten much more onerous in the last decade) for that to take only a few weeks, and in any case, wouldn't be considered peer or editorial review. So, I can only assume that you are referring to the submitted/accepted dates of the actual paper...

Velodude said:
4. The study was controversial and came in for widespread scientific criticism.

Definitely.

Velodude said:
5. Coyle admitted to a calculation error but continued to personally promote the study through the media.

First, you need to go re-read Coyle's letter-to-the-editor: he did not admit to making any calculation errors, only to citing the wrong paper for the (equally acceptable) method that he actually did use to calculate delta efficiency.

Second, by "promoting the paper in the media" (responding to requests by the media for comments), he was promoting himself as much, if not more, than Armstrong. So, I see nothing nefarious in this (I know if the media were incessantly calling me about one of my papers, I'd accomodate their requests.)

Velodude said:
6. Armstrong used Coyle as a paid witness at the SCA deposition hearing in November 2005 to use the study as justification against his doping.

Just as Ashenden was paid by SCA to provide testimony that Armstrong was indeed guilty of doping. IOW, I don't see how you can argue that Coyle didn't offer his honest expert opinion while in the same breath arguing that Ashenden did offer his.
 
Velodude said:
I understand the process, as presented by The Science of Sport (one of the many who were highly critical of the study), is as follows:

It appears JAP did not send the study off to reviewers before publication. (Did JAP receive a donation from Livestrong? :)) Then the critical letters flowed back to JAP on March 17:

As a member of the Editorial Board of JAP (although not back then), I can tell you that the Science of Sport guys are simply incorrect: there is nothing really remarkable about a study going from submission to acceptance in just 3 wk. All it would require is 1) reviewers who jumped right on their assigned task, and who had at most minimal comments for the author(s), and 2) an editor who took their responsibilities seriously.

In any case, though, there is a big difference between saying that a paper that shouldn't have been published somehow slipped through the peer-review process, and saying that it was commissioned by Armstrong to provide a smoke-screen (just as there is a big difference between saying a paper shouldn't have been published in the first place, and saying that it should be retracted...the bar is FAR higher for the latter than the former).
 
biokemguy said:
That's not how I've seen it work, but maybe your field is different than mine.

Journal editors don't have time to keep track of material that has already passed through the peer review process to see if it should be retracted.

That's up to the authors and their colleagues in the same field.

In my field, your colleagues would question you at meetings, ask you further stern questions after your flawed presentation, send you response letters in journals...and if you don't get the hint ...phone calls are made and busy editors find out very quickly. If formal critical appraisal were applied to Dr. C's JAP article, it would be shredded.

NW
 
acoggan said:
<snip>... Not true: Ed pulled the data together in order to present it at a small scientific meeting held here in St. Louis in the spring of 2002... <snip>

Can you provide a link concerning this small scientific meeting?

Was he likewise ostracized by that small scientific fraternity for his study?
 
acoggan said:
As a member of the Editorial Board of JAP (although not back then), I can tell you that the Science of Sport guys are simply incorrect: there is nothing really remarkable about a study going from submission to acceptance in just 3 wk. All it would require is 1) reviewers who jumped right on their assigned task, and who had at most minimal comments for the author(s), and 2) an editor who took their responsibilities seriously.

Sir,

You're joking, right? I'm referring to points 1) and 2).

Could you please generate some data from your former experience with JAP that shows the average temporal 'turnaround' for a given submission...to include recognition of submission, allocation to reviewers, reviewing process, reply to the primary author with suggestions and corrections, secondary submission with corrections, re-reviewing process, acceptance and then publication.

3 weeks, really?
 
Velodude said:
Can you provide a link concerning this small scientific meeting?

No, as it was a closed meeting of John Holloszy's former post-docs, not something open to "outsiders" or ever slated for publication. That said, I know what happened, because I was there. :D

Velodude said:
Was he likewise ostracized by that small scientific fraternity for his study?

Not in the least...everybody saw it for what it was (i.e., a rather weak retrospective study that was nonetheless interesting due to the subject, i.e., Armstrong), and didn't really think much of it (at least that was my opinion and that of a few others I discussed the poster with). It was only after the paper was published in JAP that professional jealousy raised its ugly head (seriously: Dave Martin has told me that one of the reasons he took such umbrage at Coyle's paper is that he - Dave - felt that Coyle was playing on "his turf").
 
Neworld said:
Sir,

You're joking, right? I'm referring to points 1) and 2).

Could you please generate some data from your former experience with JAP that shows the average temporal 'turnaround' for a given submission...to include recognition of submission, allocation to reviewers, reviewing process, reply to the primary author with suggestions and corrections, secondary submission with corrections, re-reviewing process, acceptance and then publication.

3 weeks, really?

You could generate such statistics yourself simply by looking at the submitted/acceptance dates in a few issues of the journal. However, that would really only tell you the average turn-around time, not the minimum turn-around time for a paper that (presumably) was accepted with few, if any, revisions.

In any case, what I can tell you without doing such research is that 1) reviewers are given <2 wk to respond w/ comments, and 2) no paper would ever be accepted by JAP w/o being sent out for peer review.
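
(As an aside, if anyone actually wants to tabulate this, here's a back-of-the-envelope sketch of how it could be done in Python. To be clear, the date pairs below are made-up placeholders, not real JAP records; you'd fill the list in yourself from the dates printed on the papers.)

from datetime import date

# Hypothetical (submitted, accepted) date pairs -- replace these with the
# dates printed at the end of each paper in whatever issues you look at.
papers = [
    (date(2005, 1, 3), date(2005, 2, 7)),
    (date(2005, 1, 10), date(2005, 3, 1)),
    (date(2005, 2, 14), date(2005, 3, 7)),
]

# Turn-around time in days for each paper
turnarounds = [(accepted - submitted).days for submitted, accepted in papers]

print("average turn-around: %.1f days" % (sum(turnarounds) / len(turnarounds)))
print("minimum turn-around: %d days" % min(turnarounds))
print("maximum turn-around: %d days" % max(turnarounds))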
 
acoggan said:
You could generate such statistics yourself simply by looking at the submitted/acceptance dates in a few issues of the journal. However, that would really only tell you the average turn-around time, not the minimum turn-around time for a paper that (presumably) was accepted with few, if any, revisions.

So just for fun I thought I'd look at a few issues from the mid-2000s to get an idea of turn-around times...lo and behold, here's the 2nd paper I checked:

http://jap.physiology.org/content/98/1/40.full

(Submitted 26 August 2004 ; accepted in final form 16 September 2004, so only ~3 wk)

EDIT 1: And in the same issue (i.e., January 2005):

http://jap.physiology.org/content/98/1/100.full

(Submitted 2 July 2004 ; accepted in final form 8 August 2004, so ~5 wk)

EDIT 2: Another one from the same issue:

http://jap.physiology.org/content/98/1/160.full

(Submitted 26 June 2003 ; accepted in final form 8 August 2004, so ~6 wk)

EDIT 3: A third one:

http://jap.physiology.org/content/98/1/221.full

(Submitted 22 July 2004 ; accepted in final form 26 August 2004, so ~5 wk)

EDIT 4: A fourth one, also from the same issue:

http://jap.physiology.org/content/98/1/250.full

(Submitted 18 August 2004 ; accepted in final form 14 September 2004, so ~4 wk).

EDIT 5: A fifth one, also from the same issue:

http://jap.physiology.org/content/98/1/371.full

(Submitted 13 September 2004 ; accepted in final form 22 September 2004, so turned around in only 9 days!)

"Remarkable" my ***....
 
AngusW said:
The timings seem a bit strange. LA Confidential was published in 2004. Ed Coyle testifies for LA in November 2005. When exactly did he have the conversation with Greg Lemond in which Lemond enlightened Coyle about Michele Ferrari? If it was prior to the SCA hearing, why did Coyle testify for LA, being sickened as he was by the news that LA was with Ferrari?

The conversation took place in 2001.

Media exposure and paid expert witness fees are a great way to settle the stomach.
 
acoggan said:

The thread is about Armstrong not getting tested at his latest paid appearance...err race.

If you insist on turning it into a defense of your discredited buddy, perhaps you could start another thread...so I can ignore it.
 
acoggan said:
So just for fun I thought I'd look at a few issues from the mid-2000s to get an idea of turn-around times...lo and behold, here's the 2nd paper I checked:

http://jap.physiology.org/content/98/1/40.full

(Submitted 26 August 2004 ; accepted in final form 16 September 2004, so only ~3 wk)

EDIT 1: And in the same issue (i.e., January 2005):

http://jap.physiology.org/content/98/1/100.full

(Submitted 2 July 2004 ; accepted in final form 8 August 2004, so ~5 wk)

EDIT 2: Another one from the same issue:

http://jap.physiology.org/content/98/1/160.full

(Submitted 26 June 2003 ; accepted in final form 8 August 2004, so ~6 wk)

1. Small sample size, but at least you tried.

2. What about all the papers rejected, don't forget to include those...

3. Yes, I would imagine most readers here would like to see the outliers (min and max turnaround) and the average.

4. Retrospective studies? Ahh...you'd agree those trials are NOT very useful, especially with limited subjects and discordant equipment. Maybe it should have been a case report; not. Yikes, JAP must have been thin on available work back then.
 
Neworld said:
In my field, your colleagues would question you at meetings, ask you further stern questions after your flawed presentation, send you response letters in journals...and if you don't get the hint ...phone calls are made and busy editors find out very quickly.

Well there you go: your field is no different, i.e., once a paper has been published the first "line of attack" is via the journal.
 
Neworld said:
1. Small sample size, but at least you tried.

As I said, you're welcome to do the research yourself. Clearly, though, you don't have to look very long/hard to find plenty of papers accepted w/in just a handful of weeks (i.e., Coyle's paper is not in the least remarkable in this regard).

Neworld said:
2. What about all the papers rejected, don't forget to include those...

How quickly papers are rejected has little bearing on how quickly they are accepted.

Neworld said:
3. Yes, I would imagine most readers here would like to see the outliers (min and max turnaround) and the average.

Again, you're welcome to do the research yourself.

Neworld said:
4. Retrospective studies? Ahh...you'd agree those trials are NOT very useful, especially with limited subjects and discordant equipment. Maybe it should have been a case report; not. Yikes, JAP must have been thin on available work back then.

Oh, I'd definitely agree that a retrospective study is weaker than a prospective one, just as I'd agree that a case report is weaker than a study with a larger sample size. OTOH, the data set in question were obtained from an individual with unique palmares, which makes them more interesting. These are the issues the reviewers had to weigh, and they decided that the paper warranted publication. As I have said before, I don't know what I would have recommended, but I do know that, flawed though it might be, Coyle's study has clearly moved the field forward. Indeed, if it were not for the publication of his paper we might still be laboring under Ashenden's misconception that efficiency was some sort of immutable "holy grail" (his words, not mine).
 
acoggan said:
Neworld said:
In my field, your colleagues would question you at meetings, ask you further stern questions after your flawed presentation, send you response letters in journals...and if you don't get the hint ...phone calls are made and busy editors find out very quickly.

Well there you go: your field is no different, i.e., once a paper has been published the first "line of attack" is via the journal.

Not necessarily. Some authors with weak, sham, poorly controlled or falsified studies publish first, then carry on presenting those studies. They are first confronted at meetings...see above.
 
acoggan said:
How quickly papers are rejected has little bearing on how quickly they are accepted.

I disagree. As you are aware, there are a finite number of reviewers, and incoming submissions are 'supposedly' all evaluated equally, sequentially, etc...accepting some, correcting some and rejecting some. So the number of incoming submissions is important to the 'turnaround' time.


acoggan said:
Oh, I'd definitely agree that a retrospective study is weaker than a prospective one, just as I'd agree that a case report is weaker than a study with a larger sample size.

Agreed.

acoggan said:
OTOH, the data set in question were obtained from an individual with unique palmares, which makes them more interesting. These are the issues the reviewers had to weigh, and they decided that the paper warranted publication.

This is where the decision-making process, and possibly the ethics, of that journal come into question...as evaluating a solitary 'unique' individual in a retrospective, poorly controlled study is, well, very self-limiting and pointless, since the results are likely impossible to dissect. Given that the 'unique' person, and his athletic prowess, was, and is, in serious doubt, cloaking the study in a cloud, it is unfortunate that the journal did not implore the original author to return to the topic with a more structured, prospective, controlled investigation.
 
the comments here are just getting more and more stupid by the number.. and by the same people no less!
 
danjo007 said:
the comments here are just getting more and more stupid by the number.. and by the same people no less!


Not sure stupid is the word I would use. Certainly we seem to have strayed away from the original topic...

I can't imagine you guys have got a whole lot more to say about the process of academic peer review but maybe you have :rolleyes:

Is there anyone still interested in discussing the original topic? Speak now and I will clean up the thread otherwise I am sure it will just fade away over the next day or so.

Terry
Admin
 

Polish

Race Radio said:
Looks like Lance is reading the forum again

Don't mean to be a party pooper, but I would think that Lance is hanging out in the Tri-Forums these days. Not here in the clinic.

Shame really. Fun while it lasted. Good times, good times.

Anyway, the Tri-Forums will now feel the benefits of the Lance Effect.
I see multiple 10,000 post mega threads over there for sure.
Any of you guys visiting the Tri-Forums now?