LZR - It's Faster, but by how much?

Former Member
After seeing a woman break 24 seconds, I think we can stop debating "if" the LZR suit is faster and start asking "how much faster". The previous line of suits (Fastskin and so on) was pretty similar to a shaved swimmer. Sure, they do feel like they make you float, but overall the times seemed to move along in line with what I would expect from normal improvement in the sport. If the previous suits had been that much faster than shaving, you would never have seen people using just the legskins. By the way, for us Masters swimmers there was always the added benefit of keeping in all the "extra layers of skin".

So how much faster are the LZR suits? If I had to guess from the results so far, I would say 0.25 to 0.30 per 50 and double that for the 100. I can see Bernard going 48-low in the 100, and I can see Sullivan getting close to or just breaking the 50 record. It makes sense that Libby Lenton would swim a 24.2 or so in the 50.

I think one of the top regular teams out there should run a test - you need a good group of world-class swimmers training together to do it properly. Here is the test I would propose:

8-10 swimmers, 2 days of testing, 4x50 on 10 minutes, all out.
Day 1 - swim 2 with a Fastskin2 followed by 2 with the LZR.
Day 2 - swim 2 with the LZR followed by 2 with the Fastskin2.
Get the averages of all 10 swimmers - maybe drop the high and low - and there you go.

Why do the test? I would HAVE to know. Swimming is a big part of your life, and you just set a massive PR using this new technology - my very first question would be, "How much was me and how much was the suit?"
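For what it's worth, here is a rough sketch in Python of the averaging step in that proposed test. The function name and all of the times are made-up placeholders of my own, not real results; it just shows the arithmetic of comparing each swimmer's Fastskin2 swims to their LZR swims and dropping the high and low differences.

```python
# Sketch of the averaging step for the proposed crossover test.
# All numbers below are made-up placeholders, not real results.

def suit_effect(times_fastskin, times_lzr, drop_extremes=True):
    """Average per-swimmer difference (Fastskin2 minus LZR) across the
    4x50 swims from both test days. A positive result means the LZR
    swims were faster on average."""
    diffs = [
        sum(fs) / len(fs) - sum(lzr) / len(lzr)
        for fs, lzr in zip(times_fastskin, times_lzr)
    ]
    diffs.sort()
    if drop_extremes and len(diffs) > 2:
        diffs = diffs[1:-1]          # drop the high and low, as proposed
    return sum(diffs) / len(diffs)

# times_fastskin[i] and times_lzr[i] hold swimmer i's four 50s
# (two per day) in each suit -- placeholder values only.
times_fastskin = [[23.9, 24.0, 23.8, 24.1], [24.5, 24.6, 24.4, 24.5]]
times_lzr      = [[23.6, 23.7, 23.6, 23.8], [24.2, 24.3, 24.1, 24.2]]

print(f"Average LZR advantage per 50: {suit_effect(times_fastskin, times_lzr):.2f} s")
```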
  • Former Member
    "Gaash...of course it is about probabilities. That is the entire basis of hypothesis testing, which is what we are really talking about. The bar for 'scientific certainty' is commonly taken to be 95% probability, meaning a 5% chance of a false positive (in this case, incorrectly concluding the LZR has an effect when it really doesn't). There's nothing special about the 5%, really."

    You're referring to a p-value, right? That doesn't give you the probability of a false positive; it gives you the probability that, due to random variation, an effect size as large as the one found in a sample would occur if the actual effect size were zero. Computing a p-value from a sample says nothing about the probability that the effect is real unless you factor in your prior belief that it is real, based on prior data and reasoning about the mechanisms involved. Of course, there are no studies here, so there are no effect sizes or p-values to interpret.
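    To make the p-value point concrete, here is a minimal sketch (assuming SciPy is available, and using hypothetical per-swimmer averages, not measurements) of how a paired t-test would be run on crossover data like the test proposed above. The comment spells out what the resulting p-value does and does not tell you.

    ```python
    # Sketch: how a p-value would actually be computed for a test like the
    # one proposed above -- a paired t-test on per-swimmer suit averages.
    # The times are hypothetical placeholders, not measurements.
    from scipy import stats

    fastskin_avg = [23.95, 24.50, 25.10, 24.80, 23.70, 24.20, 25.40, 24.90]
    lzr_avg      = [23.70, 24.25, 24.85, 24.60, 23.50, 23.95, 25.15, 24.65]

    t_stat, p_value = stats.ttest_rel(fastskin_avg, lzr_avg)

    # The p-value is the probability of seeing a difference this large in the
    # sample if the suits were actually identical -- it is NOT the probability
    # that the suit effect is real, which also depends on prior evidence.
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    ```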
Reply