Quantifying training

Former Member
In threads where training philosophy comes up, discussions of TRIMP (training impulse), TSS (training stress score), and other training-quantification models occasionally intrude. These models are not very well known, and even more poorly understood, so probably SolarEnergy, qbrain, and I are just talking to each other and killing threads in those conversations. In any case, I figured I would present a brief overview of what we're talking about when this terminology starts showing up. Best case, this will introduce these models to the subset of swimmers (or coaches) who are interested enough to use them but didn't previously know enough to do so. Even if you're not the type to quantify your training, it can be useful to think about workouts in this general framework. And, at the very least, this thread might serve as a place to discuss the details without driving those other threads too far off-topic.
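For concreteness, here is a sketch of the two scoring formulas in their commonly published forms (variable names are mine, not from any particular library): Banister's exponential TRIMP weights session minutes by fractional heart-rate reserve, and Coggan's TSS normalizes work so that one hour at functional threshold power (FTP) scores exactly 100.

```python
import math

def trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
    """Banister's exponential TRIMP: minutes weighted by heart-rate reserve."""
    hrr = (hr_avg - hr_rest) / (hr_max - hr_rest)  # fractional HR reserve, 0..1
    b = 1.92 if male else 1.67                     # published sex-specific exponent
    return duration_min * hrr * 0.64 * math.exp(b * hrr)

def tss(duration_sec, norm_power, ftp):
    """Coggan's Training Stress Score: one hour at threshold power = 100 points."""
    intensity = norm_power / ftp                   # intensity factor (IF)
    return (duration_sec * norm_power * intensity) / (ftp * 3600) * 100
```

TSS was defined around a power meter, so swimmers usually fall back on TRIMP-style heart-rate or pace-based scores; the point of both is the same, to collapse a workout into a single daily training-load number that the models below can consume.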
  • Former Member
    "I assume that when I test well in practice I will swim well in the meet. To me the point is to figure out when to maximize and minimize training, not predict meet times."

    Yes, that's a good point. But you can only use meet times or practice performance tests to calibrate the model, not both.

    "The model tends to be quite stable when the number of data points is above 8; throw out one test and you get pretty close to the same result, in my experience and in the experience of the Hellard et al. authors."

    Okay, I guess I meant ill-conditioned, not unstable, which is the same as your point that the models are overspecified. I suppose I'm not surprised that the parameters are all correlated: someone who recovers quickly (small t2) probably also improves quickly (small t1). But I still wonder whether some of that apparent ill-conditioning is due to the model being fit to a very narrow slice of the input space.
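For readers joining here: the t1 and t2 in this exchange are the two time constants of the Banister impulse-response model, which predicts performance as baseline plus a slowly decaying "fitness" term minus a quickly decaying "fatigue" term, both driven by daily training load. A minimal sketch of the discrete-time recursive form (parameter names are mine, assuming one load value per day):

```python
import numpy as np

def banister_performance(loads, p0, k1, k2, tau1, tau2):
    """Banister impulse-response model: predicted performance each day.

    Fitness and fatigue are exponentially weighted sums of past daily loads;
    performance = p0 + k1*fitness - k2*fatigue. tau1 (fitness decay, the t1
    above) is typically much longer than tau2 (fatigue decay, t2), which is
    why a taper works: fatigue drains faster than fitness does.
    """
    n = len(loads)
    fitness = np.zeros(n)
    fatigue = np.zeros(n)
    for t in range(1, n):
        # yesterday's state decays by one day, yesterday's load is added
        fitness[t] = fitness[t - 1] * np.exp(-1.0 / tau1) + loads[t - 1]
        fatigue[t] = fatigue[t - 1] * np.exp(-1.0 / tau2) + loads[t - 1]
    return p0 + k1 * fitness - k2 * fatigue
```

The ill-conditioning discussed above is visible in this form: performance is the difference of two similar exponential sums, so when the training input varies little, (k1, tau1) and (k2, tau2) can trade off against each other and a fit will find many near-equivalent parameter sets.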