I'm not comfortable enough with my physiology knowledge to really contribute to this discussion. But as a statistician I'd like to add a general remark.
Vayerism said:
I've also not claimed a high level of accuracy, sticking "est" next to everything that is estimated or worse "Circa".
That's fair enough, and I've also not seen any disingenuous claims about accuracy from you elsewhere. But it's still a bit misleading. I'm not trying to school you, because I'm sure you're perfectly aware of this. But when estimates from data are discussed in public, I think it's worthwhile to remind us all of some general characteristics from time to time.
If we consider simple, straightforward statements like the following:
Vayerism said:
1 point per week is fairly typical for a GT, see Horner, Rasmussen or Hamilton (in reference to why its three bags/tour). Though it varies obviously. It can also be 10-12% with the majority in the first week, as you wouldn't expect it to continue to drop on a continual standard line.
So it stands to reason, though without the actual figure nothing more than that. That if Froome is at 15.3 a week and a half in you can add 1.5 to the figure giving 16.8 at the start of the tour.
We have a measured value (15.3) and a rough estimate for a rate of change ("1 point per week is fairly typical for a GT"). So we start with the measured value and add the product of the elapsed time and the rate of change to get an estimate for the initial value (16.8). This is really as easy as it gets for computing a point estimate. But don't forget that all of the quantities involved carry some level of uncertainty (even the elapsed time, think about it). Especially the rate estimate is obtained in a way that its error bars must be huge.
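To make this concrete, here is a minimal sketch of first-order error propagation for exactly this calculation. The standard deviations below are purely illustrative assumptions (the post gives none); only the 15.3, the 1.5 weeks and the "1 point per week" come from the discussion itself:

```python
import math

# Hypothetical uncertainties -- the thread gives none, so these standard
# deviations are purely illustrative assumptions.
measured = 15.3        # measured value, about 1.5 weeks into the Tour
sd_measured = 0.3      # assumed measurement error (illustrative)
weeks = 1.5            # elapsed time; even this is not exact
rate = 1.0             # "1 point per week", a very rough estimate
sd_rate = 0.5          # assumed spread of the rate (illustrative)

# Point estimate: start value = measured + elapsed time * rate of change
start = measured + weeks * rate

# First-order error propagation, assuming independent errors:
# Var(a + b*X) = Var(a) + b^2 * Var(X)
sd_start = math.sqrt(sd_measured**2 + (weeks * sd_rate) ** 2)

print(f"start value: {start:.1f} +/- {sd_start:.1f}")
# -> start value: 16.8 +/- 0.8
```

Note how the rate's uncertainty gets multiplied by the elapsed time before it enters the result: even a modest spread in the weekly rate dominates the error bar of the reconstructed start value.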
Now if we mix these quantities together the way we just did, the uncertainty level of the result goes through the roof! And that's not just an academic subtlety with no meaning in the "real world". Failing to consider it in the appropriate way can hurt your reasoning badly (and might get you fired as a quantitative professional).
By reporting numbers like 16.8 (without additional information) you implicitly state that you can pin your result down to plus/minus 0.1, which you most certainly can't. And even if you add "(est)" or "circa", that is still the impression most people will take away. Nobody who reads this will sit in front of the monitor saying to themselves: "This value could probably also be 16.2." We just don't.
And if we then go on to interpret the result by comparing it to other numbers and making statements like "X is bigger than Y" or "X is abnormally big", it gets even more crucial. Comparisons like these necessarily have to take into account the level of uncertainty you're operating on. Otherwise they're probably meaningless.
Could you distinguish 16.8 from 16.7 or 16.9? Unlikely. What about 16.6 or 17.0? Can your method, based on few data points, many assumptions and simple calculations, reliably distinguish 16.8 from 16.4? I don't know. Would you need that precision in your analysis to make any statements at all? Your call.
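The "can you distinguish 16.8 from 16.4" question can also be sketched in code. The standard deviation of 0.8 per estimate below is again an illustrative assumption, not a number from the thread; the point is only the mechanics of comparing two uncertain values:

```python
import math

# Hypothetical sketch: can we tell 16.8 from 16.4 if each estimate
# carries an (assumed, illustrative) standard deviation of 0.8?
a, sd_a = 16.8, 0.8
b, sd_b = 16.4, 0.8

diff = a - b
# Independent errors add in quadrature for a difference:
sd_diff = math.sqrt(sd_a**2 + sd_b**2)

# A difference well below ~2 standard deviations is, by the usual
# conventions, indistinguishable from noise.
print(f"difference {diff:.1f} vs. uncertainty {sd_diff:.1f}")
print("distinguishable" if abs(diff) > 2 * sd_diff else "not distinguishable")
# -> not distinguishable
```

With these assumed error bars, a gap of 0.4 against a combined uncertainty of about 1.1 simply doesn't support a claim that the two values differ.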
Error estimates like these are admittedly very difficult and sometimes even infeasible. But the uncertainty levels in play can approach the interesting orders of magnitude (that are relevant for comparisons and conclusions) awfully fast when you're playing around with data like this. Just keep this in the back of your head.