Quantifying training

Former Member
In threads where training philosophy comes up, discussions of TRIMPS and TSS and other training models occasionally intrude. These models are not very well known, and even more poorly understood, so probably SolarEnergy, qbrain and I are just talking to each other and killing threads in those conversations. In any case, I figured I would present a brief overview of what it is that we're talking about when this terminology starts showing up. Best case, this will introduce these models to the subset of swimmers (or coaches) who would be interested enough to use them, but didn't previously know enough to do so. Plus, even if you're not the type to be interested in quantifying your training, it can be useful to think about workouts in this general framework. And, at the very least, this might serve as a place to discuss some of the details without worrying about driving those other threads too far off-topic.
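For anyone meeting the terminology for the first time: most of these schemes feed a daily "training impulse" into Banister's two-component fitness-fatigue model, in which each day's load builds fitness that decays slowly and fatigue that decays quickly, and predicted performance is baseline plus fitness minus fatigue. A minimal sketch in Python (the parameter values below are illustrative placeholders, not fitted to any athlete):

```python
import math

def banister(loads, p0=100.0, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
    """Two-component fitness-fatigue (Banister) model.

    Given a list of daily training loads, returns the predicted
    performance on each day. Parameter values are placeholders:
    p0 is baseline performance, k1/k2 are the fitness/fatigue gains,
    tau1/tau2 the fitness/fatigue decay time constants in days.
    """
    preds = []
    for t in range(len(loads)):
        fitness = sum(loads[s] * math.exp(-(t - s) / tau1) for s in range(t))
        fatigue = sum(loads[s] * math.exp(-(t - s) / tau2) for s in range(t))
        preds.append(p0 + k1 * fitness - k2 * fatigue)
    return preds

# Four weeks of steady training followed by a two-week full-rest taper:
loads = [50.0] * 28 + [0.0] * 14
perf = banister(loads)
```

With k2 > k1 and tau2 much smaller than tau1, predicted performance dips while training is heavy and rebounds past baseline during the taper, which is the qualitative behavior these models are built to capture.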
  • Former Member
    Race Day already factors in training test / racing test features. Not sure how though. There's even a feature allowing for predicting performances in upcoming meets based on the evolution of these tests. All that in a relatively simple to use but rather expensive package.
  • Former Member
    "I assume that when I test well in practice I will swim well in the meet. To me the point is to figure out when to maximize and minimize training, not predict meet times." Yes, that's a good point. But you can only use meet times or practice performance tests to calibrate the model -- not both. The model tends to be quite stable when the number of data points is above 8: throw out one test and you get pretty close to the same result, in my experience and in the experience of the Hellard et al. authors. Okay, I guess I meant ill-conditioned, not unstable -- the same as your point that the models are overspecified. I suppose I'm not surprised that the parameters are all correlated: someone who recovers quickly (small t2) probably also improves quickly (small t1). But I still wonder if some of that apparent ill-conditioning isn't due to the fact that the model is being fit to a very narrow slice of the input space.
  • Former Member
    His site is the reason I use Google Translate; we have been discussing these and similar issues on and off for a couple of years now. As a fellow coach and engineer, we seem to be on the same page on many things. Well, it ain't hard to be on the same page as Ale. He is incredibly pragmatic and fully committed to evidence-based research. I recommend that anyone here who knows how to use Google Translate and is interested in science-driven training visit his blog: www.amtriathlon.com
  • The normal assumption is that these time constants are fairly transferable. You can fit the time constants (some have), but it requires a lot of data and work. It's fairly reasonable to think that they should be constant: they have a biological interpretation, related to the rates at which your body can build new mitochondria, hemoglobin, etc., and it seems reasonable to think those biological rates would be pretty similar across individuals. I have to disagree with these statements emphatically. As shown in the Avalos and Hellard paper, the time constants are not the same from one person to the next. Even viewed in the harshest light, given the spread of parameter estimates in the Avalos paper, three out of nine people had estimates that did not cross over. Published estimates for weight lifters, throwers, runners, cyclists, and swimmers have all given different time constants. To boot, the ratios of K1 to K2 for these models are always reported to ONE significant digit, leading an observer to believe that the mathematical solutions are highly constrained. Given how poorly constrained the solutions actually are, it is my opinion that results reported to more digits would diverge even more widely. In addition, there is published work on multisport athletes showing that individual athletes have different time constants and gain values for the different sports. Lastly, those of us who have been using these models for multisport athletes see wide variation in the time constants from one sport to the next. As for fitting the models, it's not as hard as it might seem; the weekly or biweekly performance test is the biggest hurdle, in my opinion. Once you have that and the model inputs, a little work in Excel can get you where you need to go.
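One concrete way to see why individual time constants matter: under the standard model, the number of days it takes a single training impulse's net effect to peak depends directly on the constants, so two athletes with identical gain factors but different fatigue constants should taper very differently. A quick sketch (both parameter sets are hypothetical athletes, not fitted values):

```python
import math

def days_to_peak(k1, k2, tau1, tau2):
    """Days after a single training impulse until its net effect
    (fitness minus fatigue) peaks, obtained by setting dp/dt = 0 in
    the Banister model. Requires tau1 > tau2 and k2*tau1 > k1*tau2."""
    return (tau1 * tau2 / (tau1 - tau2)) * math.log((k2 * tau1) / (k1 * tau2))

# Same gains, different fatigue time constants:
fast_recoverer = days_to_peak(1.0, 2.0, 45, 7)    # ~21 days
slow_recoverer = days_to_peak(1.0, 2.0, 45, 20)   # ~54 days
```

Swapping one "transferable" constant for another more than doubles the indicated taper, which is why fitting (or at least sanity-checking) the constants per athlete is worth the trouble.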
  • I'm still looking for a method with the right balance of detail without too much daily hassle. The least hassle might be the session RPE score: rate how hard the session was and multiply by its duration. It is well regarded in the literature and seems to be quite good; Carl Foster is the original proponent, I think. I have not seen much about using it as input to the Banister model, or whether the answers come out different. For that matter, no study I have seen has shown that any given quantification method gives a different answer with any given model. Not to say it isn't true, but it hasn't been shown. So while we might stand around and say that Phil Skiba's swimscore is a better input than yardage, it hasn't been demonstrated that they give different time constants. The same holds for TRIMPs, TSS, or any other method we can dream up.
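The session-RPE calculation really is that simple, which is its appeal; a sketch (the 0-10 scale is Foster's, but the example numbers are just illustrative):

```python
def session_load(rpe, minutes):
    """Session-RPE training load: a single whole-session rating of
    perceived exertion (0-10 scale) multiplied by duration in minutes."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE is expected on the 0-10 scale")
    return rpe * minutes

# A hard 90-minute swim rated 7 out of 10 scores 630 load units:
load = session_load(7, 90)
```

One rating per session, logged in seconds at the end of practice, is about as low-hassle as daily quantification gets.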
  • It's done through a long process of trial and error, reconciling the peaks that the graphs generate with the actual performances delivered. That said, and with all due respect for the quality of the research you've been doing so far, it is also strongly recommended to feed weighted average scoring data to these models for better accuracy, as opposed to relying solely on distance-type inputs. Moreover, since there's a very strong shorter-distance sprinting component for most swimmers, I would certainly prioritize using weighted average scoring data over adjusting time constants for better accuracy. As for the long process of fitting, it takes the Excel solver roughly 2 seconds to find the time constants and K factors, so it's really not that bad. As I mentioned, getting the performance measures seems to be the limiter for most people. Selecting the performances is also important, regarding what you mentioned: for one athlete for whom I had data at 100, 200, and 500 yards, the constants were different for each distance tested. So we have to pick intelligently a test distance that is relevant to the athlete; that's how I have handled it before. As for yardage vs. swimscore or Sharp score, I am pragmatic about it: I use what is available. If it's only yardage, that is fine; we get good data fits with yardage as input. If I have yardage and intensity, that's fine too; I also get good fits using that as input. I'll be doing a comparative study with my masters squad this fall, if everything goes right.
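For anyone who wants to reproduce the Excel-solver step in code: once the time constants are fixed, the model is linear in K1 and K2, so one low-tech approach is to grid-search the time constants and solve for the K factors by ordinary least squares at each grid point. A sketch under those assumptions (the grid ranges are arbitrary guesses, not recommendations):

```python
import math

def components(loads, tau):
    """Exponentially weighted sum of past daily loads (one model component)."""
    return [sum(loads[s] * math.exp(-(t - s) / tau) for s in range(t))
            for t in range(len(loads))]

def fit_banister(loads, test_days, test_perfs, p0):
    """Coarse grid search over the two time constants; for each (tau1, tau2)
    pair the model is linear in k1 and k2, so they fall out of a 2x2
    ordinary-least-squares solve. Returns (sse, tau1, tau2, k1, k2)."""
    best = None
    for tau1 in range(20, 61, 5):          # fitness constant, days (arbitrary grid)
        for tau2 in range(3, 20, 2):       # fatigue constant, days (arbitrary grid)
            F = components(loads, tau1)    # "fitness" component
            f = components(loads, tau2)    # "fatigue" component
            # Solve r = a*F + b*f via normal equations; then k1 = a, k2 = -b.
            r = [p - p0 for p in test_perfs]
            a11 = sum(F[d] ** 2 for d in test_days)
            a12 = sum(F[d] * f[d] for d in test_days)
            a22 = sum(f[d] ** 2 for d in test_days)
            b1 = sum(F[d] * ri for d, ri in zip(test_days, r))
            b2 = sum(f[d] * ri for d, ri in zip(test_days, r))
            det = a11 * a22 - a12 * a12
            if abs(det) < 1e-9:
                continue  # components collinear at this grid point
            a = (b1 * a22 - a12 * b2) / det
            b = (a11 * b2 - a12 * b1) / det
            sse = sum((p0 + a * F[d] + b * f[d] - p) ** 2
                      for d, p in zip(test_days, test_perfs))
            if best is None or sse < best[0]:
                best = (sse, tau1, tau2, a, -b)
    return best
```

As noted above, the computation is the easy part; the periodic performance tests that supply `test_days` and `test_perfs` are the real cost.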
  • Hit this link here and download some stuff. No link appears when I view it; is it Alejandro Martinez's site?
  • Former Member
    Kevin, welcome to the thread. FYI, the best available public paper on this is here, www.ncbi.nlm.nih.gov/.../ Very nice paper, thanks. It's great to see someone with real statistics chops trying to validate these models. The results point out what we already know: the models are a little too simple. If you take the model too literally (for example by trying a full-stop taper after your drop-dead day) then you'll discover that there is plenty of wisdom accumulated in common sense coaching techniques that is not captured in the models.
  • Former Member
    In the other thread you pointed out Rick Sharp's training stress score, which gives points for training time spent in different zones (aerobic, anaerobic power, lactate threshold, sprint). I had a hard time tracking down the original reference, so here's a link for anyone else that's curious: www.swimmingcoach.org/.../JSRVol9_1993.pdf I also updated the intro post in this thread to include some info on that method of assigning points to swims.
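Mechanically, that kind of score is just a zone-weighted sum over the workout. A sketch in Python; note that the per-zone weights below are made-up placeholders, NOT Sharp's published point values (those are in the linked paper):

```python
# Placeholder weights for illustration only -- see the linked Journal of
# Swimming Research paper for Sharp's actual point assignments.
ZONE_POINTS_PER_100 = {
    "aerobic": 1,
    "lactate threshold": 2,
    "anaerobic power": 3,
    "sprint": 5,
}

def stress_score(swim):
    """Sum zone points over a workout given as (zone, yards) entries."""
    return sum(ZONE_POINTS_PER_100[zone] * yards / 100 for zone, yards in swim)

workout = [("aerobic", 2000), ("lactate threshold", 800), ("sprint", 200)]
score = stress_score(workout)   # 46.0 with these placeholder weights
```

The appeal over raw yardage is visible in the example: the 200 sprint yards contribute half as many points as the 2000 aerobic yards, even though they are a tenth of the distance.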
  • Former Member
    "in fact yardage is an adequate input that in my own work has shown no difference from Sharp stress scores as inputs" On this question, I have changed my mind a couple of times. At one point, I gave up tracking TRIMPs of any sort, for exactly the reason you mentioned: my training was fairly unvaried, and TRIMPs were pretty much proportional to yards, so tracking them didn't seem worth the extra effort. I suspect this is the case for many masters swimmers with a single coach writing workouts that are not dramatically periodized. But I switched pools a year ago, and my training mix is very different now. Plus I swim with different coaches on different days of the week, with very different styles and workouts. Even if the week-to-week balance is about the same at the masters workouts, there are other variables. For example: I do weekly lake swims for half the year, and they have a dramatically different TRIMP/yard ratio from my pool swims. At swim meets, just counting yards gives a ridiculously poor estimate of the training load. And last week I was on vacation and dropped in on two other teams' workouts; since I have been playing around with "energy points" and "pain points" recently, I can tell you that one (excellent) workout with GSMS in North Myrtle Beach earned me 20% more "energy points" per yard than I'm used to, and 2.4x as many "pain points" per yard! Given all of these sources of variability, I have stopped relying on yardage alone, and am back to using TRIMPs / points / scores to track training stress. I'm still looking for a method with the right balance of detail without too much daily hassle.