Do You Understand Statistics

[quote]Bronco_XIII wrote:

[quote]Powerpuff wrote:
^ One more thing. I like to tell them that I took a survey of all the women in my ballet class and found that 8 out of 10 of them prefer the color pink. So now I can assume that 8 out of 10 people prefer pink. … Yeah. They are pretty quick to point out the holes in that idea, but unfortunately that’s about the quality of a lot of the studies we hear cited on TV. Just teaching people about problems with sample selection and all the ways we can introduce bias would be a positive thing. [/quote]

That’s good shit. Stats aren’t hard to understand from a practical level, and there’s no reason kids shouldn’t be exposed at an earlier age. It’s going to be more useful in their development than reading Dickens. Keep up the good work and maybe someday we will have a future where quacks can’t make a living. [/quote]

Thanks! And you’re right. At a very practical level, it’s not rocket science. I have a math phobia, but I always found stats, at least at a basic level, to be accessible. Understanding standard error of measurement or standard deviations is the sort of thing I used every day in my work.

I have the use of a social science lab with a really nice computer network. We have the kids play a video game that they are unfamiliar with. It doesn’t require a lot of dexterity, just some clicking with a mouse. We have them use their right hand, then after a few trials have them switch to their left hand and let them see that they are all better when using their left. Even junior high kids will see that maybe the fact that they learned how to play the game affected their scores more than handedness did. They start to get an idea about confounding variables and about the kinds of things that make a good experimental design.
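A rough sketch of what that demo shows, as a toy simulation. Everything here is invented for illustration (the game, the scoring, the size of the practice effect); the point is just that the model contains no handedness effect at all, yet the later left-hand trials still score higher:

```python
# Toy model of the classroom demo: scores depend only on practice and noise,
# never on which hand is used. All numbers are made up for illustration.
import random

random.seed(0)

def play_trial(trial_number, skill_per_trial=5.0, noise_sd=3.0):
    """Simulated game score: a baseline, a practice effect, and random noise.
    Handedness is deliberately absent from the model."""
    return 50 + skill_per_trial * trial_number + random.gauss(0, noise_sd)

# Each student plays trials 1-3 right-handed, then trials 4-6 left-handed.
right_hand_scores = [play_trial(t) for t in range(1, 4)]
left_hand_scores = [play_trial(t) for t in range(4, 7)]

mean_right = sum(right_hand_scores) / len(right_hand_scores)
mean_left = sum(left_hand_scores) / len(left_hand_scores)
print(f"right-hand mean: {mean_right:.1f}, left-hand mean: {mean_left:.1f}")
```

The left hand "wins" only because those trials came later, which is exactly the confounding the kids figure out.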

Teaching about surveys, I like to ask them how many of them wash their hands every time they use the restroom. FYI, junior high kids will laugh at anything that has to do with the bathroom, but they are quick to point out reasons why they might exaggerate or lie in that situation and how that can affect results.

I let them brainstorm other ways to study hand washing behavior. Then we set up a bathroom cam. KIDDING! But they always think of that idea, and I have to crush them with all the ethical concerns of filming people in public restrooms. They are super creative. I love those kids.

[quote]CroatianRage wrote:

[quote]ActivitiesGuy wrote:

[quote]CroatianRage wrote:
What do you think of issues regarding submission bias? Where if a study doesn’t say what you want it to you don’t submit it. Is this a real problem or do you think it’s an exaggerated problem?
[/quote]

Absolutely, it’s a real problem.

Negative studies provide information just like positive studies, but if negative studies are not published, we only see the positive studies, and continue to pursue a particular treatment when there might be just as much evidence against it as there is evidence for it.

Related: over-reliance on and incorrect interpretation of p-values. A p-value is a useful tool, and absolutely a piece of the puzzle in study results. But people look at them as this magical catch-all, where if p<0.05 that means our result is “significant” and if p>0.05 it’s “not significant.”

A p-value is, in a nutshell, the probability of observing results at least as extreme as the ones actually seen, assuming the null hypothesis is true.

Suppose that I want to test a hypothesis that professional football players can bench press more than construction workers. I take a random sample of professional football players and a random sample of construction workers, and I test the max bench press of both groups. I calculate the mean and standard deviation in each group, perform a t-test, and get a p-value of 0.05. That means that, on average, if the distribution of max bench press in professional football players is the same as the distribution of max bench press in construction workers, I would have seen a difference “as large or larger” than the one observed in my study 5% of the time.

So, if professional football players and construction workers really DO bench the same thing (on average, in the entire population), then about once in every 20 times I perform that study I will get a p-value less than 0.05, which is the arbitrarily chosen line most people use for statistical significance in scientific studies. This is a nuance worth understanding that most folks choose to ignore. They just see p<0.05 and say “Great, there’s a significant difference between the groups, let’s publish the paper.” [/quote]

Thank you. I understand this is a problem but was unsure whether it was exaggerated or not. A huge beef I have with the scientific method (not really the method, more the researchers) is that it’s often bastardized to try to prove a hypothesis instead of testing it. For example, if your drug does well in one clinical trial and fails in 10 previous trials, and you go on to publish only the success, it is entirely misleading to the public and a black eye to the scientific community.
[/quote]

There are a few safeguards in place for this now. NIH-funded clinical trials must be registered and their results must be made publicly available. But you’ll hear an awful lot more about the positive studies than you will about the negative ones.
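The “once every 20 studies” point can be checked with a quick simulation. This is only a sketch under invented assumptions: the group sizes, the bench-press numbers, and the |t| > 2.00 cutoff (which approximates p < 0.05 at 58 degrees of freedom) are all illustration choices, and it uses nothing beyond the Python standard library:

```python
# When two groups are drawn from the SAME distribution, a t-test still crosses
# the 0.05 line about 5% of the time. All numbers are arbitrary illustrations.
import random
import statistics

random.seed(42)

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance (equal group sizes)."""
    n = len(a)
    pooled_var = (statistics.variance(a) + statistics.variance(b)) / 2
    return (statistics.mean(a) - statistics.mean(b)) / ((2 * pooled_var / n) ** 0.5)

n_per_group, n_studies = 30, 2000
false_positives = 0
for _ in range(n_studies):
    # Both groups come from the same bench-press distribution: no real difference.
    players = [random.gauss(300, 40) for _ in range(n_per_group)]
    workers = [random.gauss(300, 40) for _ in range(n_per_group)]
    # |t| > 2.00 approximates p < 0.05 for 58 degrees of freedom.
    if abs(t_statistic(players, workers)) > 2.00:
        false_positives += 1

print(f"'significant' results under the null: {false_positives / n_studies:.1%}")
```

Run it with different seeds and the rate hovers around 5%, which is exactly the false-positive rate the 0.05 threshold buys you.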

[quote]ActivitiesGuy wrote:

[quote]CroatianRage wrote:
What do you think of issues regarding submission bias? Where if a study doesn’t say what you want it to you don’t submit it. Is this a real problem or do you think it’s an exaggerated problem?
[/quote]

Absolutely, it’s a real problem.

Negative studies provide information just like positive studies, but if negative studies are not published, we only see the positive studies, and continue to pursue a particular treatment when there might be just as much evidence against it as there is evidence for it.

Related: over-reliance on and incorrect interpretation of p-values. A p-value is a useful tool, and absolutely a piece of the puzzle in study results. But people look at them as this magical catch-all, where if p<0.05 that means our result is “significant” and if p>0.05 it’s “not significant.”

A p-value is, in a nutshell, the probability of observing results at least as extreme as the ones actually seen, assuming the null hypothesis is true.

Suppose that I want to test a hypothesis that professional football players can bench press more than construction workers. I take a random sample of professional football players and a random sample of construction workers, and I test the max bench press of both groups. I calculate the mean and standard deviation in each group, perform a t-test, and get a p-value of 0.05. That means that, on average, if the distribution of max bench press in professional football players is the same as the distribution of max bench press in construction workers, I would have seen a difference “as large or larger” than the one observed in my study 5% of the time.

So, if professional football players and construction workers really DO bench the same thing (on average, in the entire population), then about once in every 20 times I perform that study I will get a p-value less than 0.05, which is the arbitrarily chosen line most people use for statistical significance in scientific studies. This is a nuance worth understanding that most folks choose to ignore. They just see p<0.05 and say “Great, there’s a significant difference between the groups, let’s publish the paper.” [/quote]

Another point about the p-value that often gets overlooked: because all it reflects is the difference between means relative to the sample size and variance, a p-value of .05 or below in an initial study (and realistically the bar should be set higher than 95% confidence) only indicates that more research should be done. In other words, there might be something there, so repeat the study with the same if not better methods and see whether a similar difference between means shows up. Otherwise, even with an experimental design, it is not good practice to draw causal conclusions unless the study was done extremely well.

A good example of this is the study that kicked off the non-celiac gluten intolerance trend. The statistical analysis of that study only indicated it should be repeated, preferably with a larger sample and more controls. To the author’s credit, he did just that, and did not find the same effects. Alas, the damage was already done.
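The selective-publication problem mentioned earlier can be sketched the same way: run several trials of a treatment with no true effect, submit only the best result, and the “published” effect looks far better than the full record would. All numbers here are invented for illustration:

```python
# Selective publication sketch: a treatment that does NOTHING, tested 11 times.
# Publishing only the best trial makes the effect look real. Numbers invented.
import random
import statistics

random.seed(7)

def run_trial(n=25, true_effect=0.0, noise_sd=10.0):
    """Observed mean improvement in one trial of a treatment with zero true effect."""
    return statistics.mean(random.gauss(true_effect, noise_sd) for _ in range(n))

trial_results = [run_trial() for _ in range(11)]     # e.g. 10 failures + 1 "success"
published_effect = max(trial_results)                # only the best result is submitted
full_record_effect = statistics.mean(trial_results)  # what trial registration reveals

print(f"published: {published_effect:.2f}, full record: {full_record_effect:.2f}")
```

The gap between the published number and the full-record number is the whole case for mandatory trial registration.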

[quote]Bronco_XIII wrote:

[quote]Powerpuff wrote:
^ One more thing. I like to tell them that I took a survey of all the women in my ballet class and found that 8 out of 10 of them prefer the color pink. So now I can assume that 8 out of 10 people prefer pink. … Yeah. They are pretty quick to point out the holes in that idea, but unfortunately that’s about the quality of a lot of the studies we hear cited on TV. Just teaching people about problems with sample selection and all the ways we can introduce bias would be a positive thing. [/quote]

That’s good shit. Stats aren’t hard to understand from a practical level, and there’s no reason kids shouldn’t be exposed at an earlier age. It’s going to be more useful in their development than reading Dickens. Keep up the good work and maybe someday we will have a future where quacks can’t make a living. [/quote]

Yeah, who needs the great books? They couldn’t possibly have any positive impact upon the intellectual development of countless individuals. Idiot.

[quote]Bismark wrote:

[quote]Bronco_XIII wrote:

[quote]Powerpuff wrote:
^ One more thing. I like to tell them that I took a survey of all the women in my ballet class and found that 8 out of 10 of them prefer the color pink. So now I can assume that 8 out of 10 people prefer pink. … Yeah. They are pretty quick to point out the holes in that idea, but unfortunately that’s about the quality of a lot of the studies we hear cited on TV. Just teaching people about problems with sample selection and all the ways we can introduce bias would be a positive thing. [/quote]

That’s good shit. Stats aren’t hard to understand from a practical level, and there’s no reason kids shouldn’t be exposed at an earlier age. It’s going to be more useful in their development than reading Dickens. Keep up the good work and maybe someday we will have a future where quacks can’t make a living. [/quote]

Yeah, who needs the great books? They couldn’t possibly have any positive impact upon the intellectual development of countless individuals. Idiot. [/quote]

Bookish library nerds take on the Mathletes. This should be good.

[quote]Powerpuff wrote:
It’s often telling to see who funded the “study.”

[/quote]

It is telling, but I sometimes think people use this as an excuse for everything and assume that pretty much every study had a vested funder, e.g. NIH funds a study on nutrition, people assume the government has a relationship with “big Ag” that concludes we should eat lots of veggies (or grains, whatever), and then they dismiss EVERY study ever because they’re all biased.

Also, I believe most of what we know about stress (the guy that did most of the initial work was nominated for a Nobel) was funded by the tobacco industry to fight against regulation. Yes, they had a vested interest, but yes, constant stress is harmful.

I’ve taken stat and econometrics classes off and on for years. Like csulli, I’ve pretty much forgotten all of it. It takes like a month, and it’s gone. I still do a little tutoring in biostat and epi, but it’s more just figuring it out and explaining it than knowing it.

[quote]Powerpuff wrote:
In fact, I TFriended him so I could read his posts more easily, but he never responded to my friend request, so “Screw you, Anonym!” :)[/quote]

Fixed!

Those here unfamiliar with such things but wanting to get more experience might find “Studying the Study and Testing the Test” to be a good introduction to the topic.

[quote]anonym wrote:

[quote]Powerpuff wrote:
In fact, I TFriended him so I could read his posts more easily, but he never responded to my friend request, so “Screw you, Anonym!” :)[/quote]

Fixed!

[/quote]

Well, that was pretty funny and embarrassing. I take back my harsh rhetoric, Anonym. Nice to see ya.

Thanks for the recommendation.

[quote]Bismark wrote:

[quote]Bronco_XIII wrote:

[quote]Powerpuff wrote:
^ One more thing. I like to tell them that I took a survey of all the women in my ballet class and found that 8 out of 10 of them prefer the color pink. So now I can assume that 8 out of 10 people prefer pink. … Yeah. They are pretty quick to point out the holes in that idea, but unfortunately that’s about the quality of a lot of the studies we hear cited on TV. Just teaching people about problems with sample selection and all the ways we can introduce bias would be a positive thing. [/quote]

That’s good shit. Stats aren’t hard to understand from a practical level, and there’s no reason kids shouldn’t be exposed at an earlier age. It’s going to be more useful in their development than reading Dickens. Keep up the good work and maybe someday we will have a future where quacks can’t make a living. [/quote]

Yeah, who needs the great books? They couldn’t possibly have any positive impact upon the intellectual development of countless individuals. Idiot. [/quote]

The comparison is that children are exposed to literature in an antiquated style in middle school, yet stats are in many cases deemed too abstract a concept until college, despite their relatively simple concepts and extremely practical application. Secondly, the argument was that an understanding of statistics is more useful to future generations than reading classic literature, not that classic literature has no value. Read more scientific work; it will help you cut down the superfluous modifiers.

I dove deep into the statistics of the Italian cardiac study that looked at the effects of fish oil (I have an MS in Statistics). I’ve taken fish oil every day since then (Biotest’s version, of course). I’m not for government mandates, but if Michelle Obama wanted to force everyone to take fish oil, I might agree to it.

I never trust anyone’s conclusions unless I see the data for myself. I’ve looked at raw data provided by MIT regarding “global warming”, carbon, etc., and it clearly shows that Al Gore is full of sh**. I do wish I could scam hundreds of millions of dollars from the system like he has, though.

[quote]fred99 wrote:
I dove deep into the statistics of the Italian cardiac study that looked at the effects of fish oil (I have an MS in Statistics). I’ve taken fish oil every day since then (Biotest’s version, of course). I’m not for government mandates, but if Michelle Obama wanted to force everyone to take fish oil, I might agree to it.

I never trust anyone’s conclusions unless I see the data for myself. I’ve looked at raw data provided by MIT regarding “global warming”, carbon, etc., and it clearly shows that Al Gore is full of sh**. I do wish I could scam hundreds of millions of dollars from the system like he has, though.

[/quote]

That’s the principal benefit of understanding statistics. You can evaluate information from fields beyond your own expertise and form your own opinion of the conclusions, rather than relying on potentially biased or uninformed advice.