In threads where training philosophy comes up, discussions of TRIMP, TSS, and other training-load models occasionally intrude. These models are not very well known, and are even more poorly understood, so SolarEnergy, qbrain and I are probably just talking to each other and killing those threads. In any case, I figured I would present a brief overview of what we're talking about when this terminology starts showing up.
Best case, this will introduce these models to the subset of swimmers (or coaches) who would be interested enough to use them, but didn't previously know enough to do so.
Plus, even if you're not the type to be interested in quantifying your training, it can be useful to think about workouts in this general framework.
And, at the very least, this might serve as a place to discuss some of the details without worrying about driving those other threads too far off-topic.
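To give a concrete starting point: the original TRIMP score is Banister's, which weights minutes of exercise by an exponential function of the fraction of heart-rate reserve used, so hard minutes count far more than easy ones. A minimal sketch (the heart-rate numbers in the example are made up for illustration):

```python
import math

def trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
    """Banister TRIMP: minutes weighted by fractional heart-rate reserve.

    The standard exponential weights are 0.64*e^(1.92x) for men and
    0.86*e^(1.67x) for women, where x is the HR-reserve fraction.
    """
    x = (hr_avg - hr_rest) / (hr_max - hr_rest)   # fraction of HR reserve used
    weight = 0.64 * math.exp(1.92 * x) if male else 0.86 * math.exp(1.67 * x)
    return duration_min * x * weight

# e.g. a 60-minute swim averaging HR 150, with resting HR 50 and max HR 190
print(round(trimp(60, 150, 50, 190), 1))
```

TSS plays the same role in the cycling world (load scaled against threshold power), but the idea is identical: collapse a session into a single training-load number that the impulse-response models below can consume.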
After reading the Hellard et al. paper, I'd agree that the constants obviously differ a lot from athlete to athlete, so I'll have to abandon my belief that they have a simple biological basis. Given the (ridiculously) big spread in values for the nine elite swimmers in that paper, though, I wonder if it says more about the model being under-determined by the data.
It takes me a while to wade through all the statistics in that paper, but IIRC their contention is that the model is overspecified. In real practice overspecification is an issue: you have five constants in the equation, roughly 90 days to get relevant tests, and most people only do one test per week. So you end up fitting twelve data points with five constants.
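To make that counting concrete, here's a sketch of the standard Banister two-component impulse-response model (performance = baseline + fitness − fatigue, each an exponentially-decaying sum of past daily loads) being fit to twelve weekly tests. The daily loads, "true" constants, and noise level are all invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
load = rng.uniform(20, 100, size=90)        # 90 days of made-up daily TRIMP scores

def banister(t_idx, p0, k1, k2, tau1, tau2):
    """Predicted performance on each test day: baseline + fitness - fatigue."""
    preds = []
    for t in np.asarray(t_idx, dtype=int):
        i = np.arange(t)                     # every training day before the test
        fitness = np.sum(load[i] * np.exp(-(t - i) / tau1))
        fatigue = np.sum(load[i] * np.exp(-(t - i) / tau2))
        preds.append(p0 + k1 * fitness - k2 * fatigue)
    return np.array(preds)

test_days = np.arange(7, 91, 7)              # one test per week -> 12 data points
true = banister(test_days, 400.0, 0.10, 0.12, 45.0, 11.0)
observed = true + rng.normal(0, 2.0, size=true.size)   # timing noise

popt, _ = curve_fit(banister, test_days, observed,
                    p0=[390, 0.05, 0.05, 40, 15],
                    bounds=([0, 0, 0, 1, 1], [1000, 1, 1, 120, 120]),
                    maxfev=20000)
print(popt)   # five constants estimated from only twelve observations
```

With five free parameters and twelve noisy points, the fit will run, but the recovered time constants can wander a long way from the "true" ones, which is exactly the overspecification worry.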
As for why the constants vary so widely between people, I take it as the mathematical manifestation of what we already knew: some people recover more quickly than others, easy as that. If you want to be more clever, you might say that recovery is a function of many systems (neurological, muscular, glycogen replenishment, etc.), each with different time constants in different people, and furthermore that different systems are rate-limiting in different people, all leading to wide variation.
Periodic performance tests are definitely a problem. Especially for swimming: I can get close to my current running PR in a self-timed tempo run. I can never come anywhere close to a current swimming PR in practice, even when I dive from the blocks and have someone timing me. You could say that consistency in the performance test is all that's important, but then you can't include meet times in the model (when that was the entire point!).
I assume that when I test well in practice I will swim well in the meet. To me the point is to figure out when to maximize and minimize training, not predict meet times.
Plus, to make sure you've got enough data to fit the model well, you'd ideally want performance tests after a wide variety of training. But most of us aren't willing to suffer through goofy blocks of training just to pin down the model parameters. My suspicion is that this is part of the reason the models seem to be unstable in many cases.
The model tends to be quite stable once the number of data points is above eight: throw out one test and you get pretty close to the same result, in my experience and in the experience of the Hellard et al. authors.
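That throw-one-out check is easy to automate. A sketch on synthetic data (the loads, "true" constants, and noise are invented): refit the Banister model twelve times, each time dropping one weekly test, and look at the spread in the recovered constants.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
load = rng.uniform(20, 100, size=90)         # made-up daily training loads

def banister(t_idx, p0, k1, k2, tau1, tau2):
    out = np.empty(len(t_idx))
    for j, t in enumerate(np.asarray(t_idx, dtype=int)):
        i = np.arange(t)
        out[j] = (p0 + k1 * np.sum(load[i] * np.exp(-(t - i) / tau1))
                     - k2 * np.sum(load[i] * np.exp(-(t - i) / tau2)))
    return out

tests = np.arange(7, 91, 7)                  # 12 weekly tests
obs = banister(tests, 400, 0.10, 0.12, 45, 11) + rng.normal(0, 2, tests.size)

guess = [390, 0.05, 0.05, 40, 15]
bnds = ([0, 0, 0, 1, 1], [1000, 1, 1, 120, 120])

fits = []
for k in range(len(tests)):                  # leave test k out, refit the other 11
    keep = np.arange(len(tests)) != k
    p, _ = curve_fit(banister, tests[keep], obs[keep],
                     p0=guess, bounds=bnds, maxfev=20000)
    fits.append(p)

spread = np.ptp(np.array(fits), axis=0)      # per-parameter range across refits
print(spread)                                # small spread = stable fit
```

If the spread in any constant is large relative to its fitted value, you know that parameter is being driven by one or two tests rather than by the whole season.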
The issue of variation is one that can catch you. I have varied the day of the week on which tests are given to try to get around it: if every Monday is distance day and you test every Tuesday, you might not get much variation. There is also the pace-learning aspect: we get it in our heads that all-out equals this pace, and so week after week we hit the same pace. I have definitely had that problem.