LZR - It's Faster, but by How Much?

Former Member
After seeing a woman break 24 seconds, I think we can stop discussing whether the LZR suit is faster and start asking how much faster. The previous line of suits (Fastskin and so on) was pretty similar to a shaved swimmer. Sure, they feel like they make you float, but overall the times moved along roughly in line with what I would expect from normal improvement in the sport. If the previous suits had been that much faster than shaving, you would never have seen people racing in just the legskins. (For us Masters swimmers, by the way, there was always the added benefit of keeping in all the "extra layers of skin.")

So how much faster are the LZR suits? If I had to guess based on the results so far, I would say 0.25 to 0.30 per 50, and double that for the 100. I can see Bernard going 48-low in the 100, I can see Sullivan getting close to or just breaking the 50 record, and it makes sense that Libby Lenton would swim a 24.2 or so in the 50.

I think one of the top regular teams out there should do a test; you need a good group of world-class swimmers training together to be able to run one. Here is the test I would propose:

  • 8-10 swimmers, 2 days of testing, 4 x 50 all out on 10 minutes each day
  • Day 1: swim 2 with a Fastskin2 followed by 2 with the LZR
  • Day 2: swim 2 with the LZR followed by 2 with the Fastskin2
  • Get the averages for all the swimmers, maybe drop the high and low, and there you go (a rough sketch of that arithmetic is below).

Why do the test? I would HAVE to know. Swimming is a big part of your life, and you just set a massive PR using this new technology; my very first question would be, "How much was me and how much was the suit?"
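For anyone who wants to crunch the numbers from a test like that, here is a minimal Python sketch of the analysis. The swimmer names and times are made-up placeholders, not real data, and the paired t-test is my own suggestion for checking whether the suit difference is bigger than day-to-day noise:

```python
# Sketch of the analysis for the proposed suit test (hypothetical data).
# Each swimmer does 4 x 50 all out on 10:00 on each of two days,
# two swims per suit, with the suit order reversed on day 2.
from statistics import mean
from scipy import stats  # for the paired t-test

# Placeholder times in seconds: {swimmer: ([Fastskin2 swims], [LZR swims])}
times = {
    "A": ([23.9, 24.0, 24.1, 23.8], [23.6, 23.7, 23.8, 23.5]),
    "B": ([25.2, 25.3, 25.1, 25.4], [24.9, 25.0, 24.8, 25.1]),
    "C": ([24.5, 24.6, 24.4, 24.7], [24.3, 24.2, 24.4, 24.1]),
    "D": ([26.0, 26.1, 25.9, 26.2], [25.7, 25.8, 25.6, 25.9]),
    "E": ([23.4, 23.5, 23.3, 23.6], [23.2, 23.1, 23.3, 23.0]),
}

# Per-swimmer difference: Fastskin2 average minus LZR average
# (positive means the LZR was faster for that swimmer).
diffs = {s: mean(fs) - mean(lzr) for s, (fs, lzr) in times.items()}

# Optionally drop the largest and smallest differences, as suggested.
trimmed = sorted(diffs.values())[1:-1]

# Paired t-test on each swimmer's per-suit averages.
fs_avgs = [mean(fs) for fs, _ in times.values()]
lzr_avgs = [mean(lzr) for _, lzr in times.values()]
t_stat, p_value = stats.ttest_rel(fs_avgs, lzr_avgs)

print(f"Mean advantage per 50 (all swimmers): {mean(diffs.values()):.2f} s")
print(f"Mean advantage per 50 (trimmed):      {mean(trimmed):.2f} s")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Reversing the suit order on day 2 is what makes the averaging fair: any warm-up or fatigue effect within a session hits both suits equally once the two days are combined.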
  • (I'm about to go way off topic.) Chris, statistical significance testing gives information on the probability of a Type I error only under the assumption that the null hypothesis is true, which is quite different from saying that it gives the probability of a Type I error given a certain set of results. I'll give an example to illustrate the difference. Say I create an experiment to find the effect of suit color on swim performance. I have one group of swimmers that wears green old-school briefs, and another group that wears blue old-school briefs. The green group swims faster than the blue group, with p < .05. What's the probability that green suits cause people to swim faster than blue suits? It's damn near close to zero, because there's no reason at all to believe that green speedos make people swim faster. It's not, as you appear to be suggesting, 95%. That number, rather, represents how often we would get a time difference smaller than the one we observed if we repeated the experiment many times and suit color truly had no effect. I think that what Chris proposed was to set up an experiment, test the null hypothesis that "the suits have no effect," and use a straight frequentist approach to analyzing the data. Given that framework, he's not wrong. You seem to be arguing for some other, Bayesian interpretation, maybe based originally on the numbers given by hoch. You and he are arguing two quite different approaches.
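To make that point concrete, here is a small Python simulation of the green-vs-blue example. The times and group sizes are invented for illustration; the point is only that when there is no real effect, roughly 5% of experiments still come out "significant" at p < .05, so the p-value describes behavior under the null hypothesis rather than the probability that the effect is real:

```python
# Simulate many green-vs-blue suit experiments where suit color truly
# has NO effect, and count how often the t-test still reports p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 10          # swimmers per suit color (arbitrary choice)
false_positives = 0

for _ in range(n_experiments):
    # Both groups drawn from the same distribution: no true color effect.
    green = rng.normal(loc=25.0, scale=0.5, size=n_per_group)
    blue = rng.normal(loc=25.0, scale=0.5, size=n_per_group)
    _, p = stats.ttest_ind(green, blue)
    if p < 0.05:
        false_positives += 1

print(f"Fraction of 'significant' results with no real effect: "
      f"{false_positives / n_experiments:.3f}")   # comes out near 0.05
```

Turning that 5% into "there is a 95% chance green suits are faster" is exactly the Bayesian-flavored leap the reply is objecting to; it would require a prior belief about suit color that the frequentist test never uses.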