elapid said:
To make a comparison, the definition of evidence-based medicine is "the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients."
Thank you for your considered response related to the original question.
Evidence-based MEDICINE is all about improving outcomes, wouldn't you say? It involves more than just evidence of effectiveness, as all drugs certified by the FDA have to be shown to be both safe and effective. Safe is a relative determination (no drug is completely safe), and effective is usually measured against placebo. Evidence-based medicine tries to help doctors go beyond their own limited experience to help them make better choices between many competing alternative treatments. It does this by taking a critical look at the current literature. In some instances it may only be able to determine that a certain treatment is more likely to be beneficial than to do harm. From this article:
The systematic review of published research studies is a major method used for evaluating particular treatments. The Cochrane Collaboration is one of the best-known, respected examples of systematic reviews. Like other collections of systematic reviews, it requires authors to provide a detailed and repeatable plan of their literature search and evaluations of the evidence.[24] Once all the best evidence is assessed, treatment is categorized as (1) likely to be beneficial, (2) likely to be harmful, or (3) evidence did not support either benefit or harm.
Sam Leahey's website details the levels of scientific evidence that are considered "high-quality research" (e.g., meta-analyses = level 1; randomized, controlled studies = level 2; non-randomized, controlled studies = level 3; etc.). Most of the published studies on cycling performance are level 2 or 3.
Based on the long-held definition of evidence-based medicine, which goes back decades, evidence-based coaching should be coaching based on high-level scientific research. Anything else, such as that proposed by Sam Leahey, is a load of rubbish.
Well, I would like to think that Sam Leahey's approach is what it is because of the paucity of good research directly relating coaching to outcome. He therefore feels it is necessary to invoke lesser forms of evidence in order to optimize coaching outcomes in the real world of coaching cyclists and triathletes (or anyone, for that matter).
Can you give me an example of some "high-quality research" that directly relates to coaching Ironman triathletes and that, if not followed, would mean that a coach is not doing evidence-based coaching?
Can you give me an example of some "high-quality research" that compares two different coaching techniques, both meant to achieve a similar goal, and that shows a substantial difference in outcome?
Can you give me an example of any systematic review of the literature related to any coaching topic that shows one approach superior to another approach?
Evidence-based medicine has the Cochrane Collaboration to help evaluate all the evidence and place unbiased stamps on the quality of the current best evidence. Does a similar organization exist for evidence-based coaching? If not, how does the coach eliminate his own bias from the evaluation process?
A 2007 analysis of 1016 systematic reviews from all 50 Cochrane Collaboration Review Groups found that 44% of the reviews concluded that the intervention was likely to be beneficial, 7% concluded that the intervention was likely to be harmful, and 49% concluded that evidence did not support either benefit or harm. 96% recommended further research.
You can be assured that each and every one of those interventions had physician advocates who believed them to be beneficial, even the ones the analysis deemed harmful.
What would a Cochrane-like analysis say about some of the interventions involved in cycling coaching? Going back to what such an analysis is supposed to conclude: (1) likely to be beneficial to outcome, (2) likely to be harmful to outcome, or (3) evidence did not support either benefit or harm.
Strength training? Most likely the evidence suggests 3, and one would conclude that more evidence is necessary.
Power meters? Most likely the evidence suggests possibly 1 but more likely 3, and one would conclude that more evidence is necessary.
PowerCranks? Most likely the evidence suggests possibly 1 but more likely 3 (there isn't a single study suggesting a worse outcome from using them), and one would rationally conclude that more evidence is necessary.
In my opinion, few (if any) coaches use evidence-based coaching as it is supposed to be used (the medical model). Instead, what most of them do is find some evidence in the literature that supports their bias and then claim they use evidence-based coaching because they have some evidence to support what they want to do. If you have evidence to the contrary, I would love to see it. The lack of positive evidence to support an approach (in the absence of evidence that the approach is detrimental) is not evidence against that approach. To assert otherwise is not evidence-based coaching.
Evidence-based coaches, imho, should be neutral regarding techniques for which no clear positive or negative benefit can be demonstrated in the scientific literature. Of course, they will have their own bias, based on lesser evidence, as to what to do in such instances, but they will realize that there is little or no scientific support for what they are doing and should be open to change should science appear that runs counter to their current view. Until such evidence exists, evidence-based coaches should be willing to admit that the evidence for doing this or that is not very good and that more research needs to be done. It seems to me this is the Leahey approach. To do otherwise suggests one really isn't an evidence-based coach, imho.