Franklin said:
No, you can use statistics to set a probability. If I drop a stone 100 times, what's the probability it also drops the 101st time? Observation. A scientific tool.
If I flip a coin 100 times and get heads 100 times, what's the probability of getting heads the 101st time?
Your example is not about statistics or probability, it is about gravity.
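For what it's worth, the coin question does have a textbook statistical answer. If you assume a uniform prior on the coin's unknown bias (an assumption, not something stated above), Laplace's rule of succession says that after h heads in n flips, the probability of heads on the next flip is (h + 1) / (n + 2). A minimal sketch:

```python
# Laplace's rule of succession: with a uniform prior on the coin's bias,
# after observing h heads in n flips, the posterior predictive probability
# that the next flip is also heads is (h + 1) / (n + 2).
from fractions import Fraction

def prob_next_heads(heads: int, flips: int) -> Fraction:
    """Posterior predictive probability of heads on the next flip."""
    return Fraction(heads + 1, flips + 2)

p = prob_next_heads(100, 100)
print(p, float(p))  # 101/102, roughly 0.99
```

So 100 heads in a row doesn't make the 101st flip certain, but under these assumptions it does push the estimate very close to 1, which is exactly the kind of inference-from-observation being argued about here.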
***
I do not care about Froome, and my comment would fit better in another thread like "clean", "suspect", "miraculous" and "mutants". But my point is: judgements based on indirect evidence are extremely prone to thinking errors and different kinds of biases (heuristic, confirmation). Decades of research in psychology have proved it. The majority of these errors do not come from education or the lack of it; they are evolutionary, wired into our brains. That's why even experts and educated people are biased.
I will give a couple of hypothetical examples of how I can easily change your probability estimates.
Franklin 1, Franklin 2 and Franklin 3 have to make a probability estimate about Froome. What is the probability that he doped? All 3 Franklins are identical: same knowledge, same information. The only difference is that before making their estimates, Franklin 1 sees photos of positive things (smiling people, furry kittens, whatever), Franklin 2 sees photos of negative things (drugs, guns, whatever), and Franklin 3 sees nothing. I can guarantee you that all 3 probability estimates will be different: Franklin 1 gives the lowest, Franklin 2 the highest, and Franklin 3 somewhere in between.
Or to make it more relevant. Same setup, again 3 identical Franklins. But before they make their estimates, the researcher asks Franklin 1 a question: "What do you think, is the probability that Froome doped higher or lower than 25%?"
Franklin 2 gets the same question with 90%. Franklin 3 gets a third anchor: higher or lower than 50%. Again I can guarantee that after hearing these questions the 3 Franklins will give different probabilities: the first the lowest, the second the highest, the third somewhere in between.
Overall there are so many psychological effects at work all the time, creating different biases all the time, that I cannot even count them. Instead, I suggest you read Daniel Kahneman's "Thinking, Fast and Slow". He's a Nobel Prize winner whose research more than 40 years ago actually started from a puzzle: Kahneman thought he was a good intuitive statistician (he was also trained in statistics), but over the course of his work he discovered that he was actually a very bad statistician...
Or let's take another example. You know Greg LeMond, but do you remember that after his decline he tried to explain it in many ways: I had mental problems, I had unexplained fatigue, I never recovered from the shotgun accident. He told several stories, all seemed plausible, people listened and nodded.
It took years before LeMond embraced another (and this time correct) explanation: a new and powerful drug had arrived. But why didn't he see it earlier? He was an expert, an insider. He saw wattages, he saw how fat asses suddenly started to climb, he saw ascent times, etc. And again in 1999, when LA suddenly won, LeMond welcomed him as a clean winner. Again, why, why, why didn't LeMond see it? He is an expert, and all the cues were there: ascent times, a non-climber sprinting up the mountains, etc. Again, the reason is biases, many different and unavoidable biases.
Today, the situation is reversed. We know what has happened during the last 20 years, and it influences all our judgements: we build stories and explanations based on it, we see and interpret causalities based on it. When Vayer gives his data from 1989-2011
http://forum.cyclingnews.com/showthread.php?t=20803 we look at it and it seems very convincing. But actually it is not. That's why, in another thread, I expressed my skepticism about why the numbers Vayer gives help us very little in making judgements about the current crop of riders.