[quote]on edge wrote:
I don’t know what the story is with this lady but I did just review the study and I conclude there’s no reliable conclusion to make based off the results. Funding is always a challenge but in my opinion this study should be looked at as a trial run to see where improvements need to be made in the model.
[/quote]
As you might expect, I have a few opinions. It’s going to require a few journeys into technical jargon, and I sincerely hope that you will read it in good faith. I will try to keep the tone educational, not condescending, and apologize in advance if anything comes off as too dickish.
Something else worth noting: I am a cardiovascular epidemiologist. I make zero money from vaccines and pharmaceutical companies in general; my salary is paid by the University of Pittsburgh. So despite what any of y’all might think, I am an entirely neutral party. My business is improving public health. As a statistician, I am often cursed by PIs for telling them that no, sorry, their results DO NOT support their hypothesis, and I will NOT sign off on a paper saying otherwise just because it would help their career. I am a slave to the truth and the truth only, not to someone’s agenda demonstrating that Medication X is better than Medication Y.
[quote]on edge wrote:
This statement caught my eye:
“Children who had been exposed to higher levels of thimerosal were more likely to have mothers with higher IQ scores and levels of education and to be from two-parent households where English was the primary language spoken.”
I think this statement likely reveals a fundamental flaw that could explain most of the positive results seen from mercury exposure but maybe not explain the negative results.
[/quote]
This, unfortunately, is a reality of doing observational studies. The only way to eliminate such things is to actually assign the treatment, in what’s known as a randomized controlled trial. We have already discussed why a randomized controlled trial for vaccines is off the table for ethical reasons, and it seems likely that no parent from either the pro- or anti-vaccine camp would agree to participate anyway.
So we’re left with an observational study. Yes, observational studies will have confounding variables that have a relationship with both the “exposure” of interest (mercury) and the outcomes of interest. A common problem in pretty much any diet study, for example, is that people who eat certain foods also invariably tend to share behavioral patterns.
Fortunately, there’s something we can do about this. It’s called multivariate regression modeling, which allows us to assess the relationship between an exposure and an outcome when adjusting for potential confounders. The math is as follows:
The basic model to assess relationships between mercury and cognitive outcomes would be:
(Cognitive Outcome) = Intercept + Coefficient*(Mercury Exposure)
The combination of intercept and coefficient that results in the best fit of the data (defined as the lowest sum of the squared errors) is what we determine to be the final model. Suppose that the “coefficient” is -1. That means that every 1-point increase in mercury exposure is associated with a 1-point drop in the cognitive outcome.
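If it helps to see that concretely, here’s a quick Python sketch. All the numbers are made up for illustration (nothing here comes from the actual study): I build fake data where the true relationship is exactly that -1 coefficient, then let a least-squares fit recover it.

```python
import numpy as np

# Hypothetical toy data -- invented for illustration only.
rng = np.random.default_rng(0)
mercury = rng.uniform(0, 10, size=200)                        # "exposure" score
outcome = 100 - 1.0 * mercury + rng.normal(0, 2, size=200)    # true coefficient = -1

# np.polyfit finds the coefficient/intercept pair that minimizes the
# sum of squared errors -- exactly the "best fit" described above.
coefficient, intercept = np.polyfit(mercury, outcome, deg=1)
print(coefficient)  # close to -1: each 1-point rise in exposure
                    # predicts about a 1-point drop in the outcome
```

The fitted coefficient comes back very close to the -1 we built in, because least squares is doing nothing more mysterious than finding the line that best matches the scatter of points.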
The multivariate model “adjusting” for age is:
(Cognitive Outcome) = Intercept + Coefficient1*(Mercury Exposure) + Coefficient2*(Age)
This puts in an effect of the relationship between age and outcome. When you adjust for potential confounders in a multivariate model, the regression coefficient for mercury exposure illustrates the “independent” relationship between the exposure of interest and the outcome. So…when you say…
[quote]on edge wrote:
If the kids who come from a white, middle class demographic, where there are higher rates of vaccination, are more likely to be enrolled in sporting activities like baseball, basketball & football. Wouldn’t it make sense that they would perform better on a non-dominant hand peg board test? Immigrant families play soccer if they do anything at all.
Wouldn’t we also expect to see the kids of 2-parent families where both parents speak english score higher on letter & word identification and speed naming?
[/quote]
Your intuition is right: we WOULD expect to see better performance from kids from 2-parent families, with higher SES, etc. But we can ADJUST for those things in the model. With the aforementioned explanation of statistical analysis in mind, please let me draw your attention to the following footnote from Table 2:
“Independent variables in the full model were as follows: measures of cumulative exposure prenatally, from birth to 1 month, and from 1 to 7 months; age; sex; HMO; maternal IQ; family income (expressed as a percentage of the poverty line); maternal education level; single-parent status; score on the Home Observation for Measurement of the Environment scale; and other covariates if they met criteria for inclusion in the full model.”
Summary: they adjusted for all the stuff that you pointed to which could explain the relationships. Any relationships observed between mercury and cognitive outcomes can’t be explained away by differences in maternal IQ, or family income, or education, or single-parent vs. 2-parent status…those things are adjusted for in all of the multivariate models.
(In fact, that’s the entire reason we do those initial analyses: to see whether mercury exposure is associated with any potential confounders. That’s how we decide what to include in the multivariate model.)
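And since this is the crux of the whole confounding issue, here’s a toy simulation in Python (all numbers invented; “SES” is just my stand-in label for a socioeconomic score). I build data where mercury truly has ZERO effect on the outcome, but a confounder drives both the exposure and the outcome. The unadjusted model shows a spurious relationship; adding the confounder to the model makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical confounder: it raises BOTH the exposure and the outcome.
ses = rng.normal(0, 1, n)
mercury = 2 * ses + rng.normal(0, 1, n)                 # exposure tracks SES
outcome = 5 * ses + 0 * mercury + rng.normal(0, 1, n)   # mercury truly has NO effect

# Unadjusted model: outcome ~ mercury
X1 = np.column_stack([np.ones(n), mercury])
b_unadj = np.linalg.lstsq(X1, outcome, rcond=None)[0]

# Adjusted model: outcome ~ mercury + ses
X2 = np.column_stack([np.ones(n), mercury, ses])
b_adj = np.linalg.lstsq(X2, outcome, rcond=None)[0]

print(b_unadj[1])  # clearly nonzero -- entirely due to confounding
print(b_adj[1])    # near zero once SES is in the model
```

That near-zero adjusted coefficient is the “independent” relationship I described above: what’s left of the mercury-outcome link after the confounder has had its say.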
[quote]on edge wrote:
If you take those results away we are left with lower vaccination rates = better behavior regulation, fewer phonic & facial tics and backward digit recall (yes I just painted with some broad strokes but that’s the predominant result from their 3 categories).
[/quote]
Hold this thought for a second. I want to address the fallacy of dismissing the positive results and only looking at the negative ones with a lengthier explanation below.
[quote]on edge wrote:
Since mercury is a neurotoxin I think these later results are more likely the result of mercury exposure than the former, positive results, are likely to be attributable to mercury. No one thinks the stuff is actually healthful.[/quote]
Agreed, no one thinks the stuff is actually healthful.
And now you must indulge me in another aside. When you run a large number of statistical tests, you are going to get some false-positive results by sheer chance. To use a simple example, suppose that I give you a fair coin and ask you to toss it five times. The probability of getting five straight heads is 1/32, or about 3.1 percent.
Now suppose that I give 100 different people fair coins and tell them all to toss five times. We would expect about three of them to get five straight heads. That doesn’t mean their coins were unfair - it means that if we test something enough times, we are bound to get a few results showing a “relationship” when none exists.
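Don’t take my word for the arithmetic - here’s a few lines of Python that simulate a large crowd of fair-coin tossers:

```python
import random

random.seed(42)
trials = 100_000  # many "people", each tossing a fair coin 5 times
all_heads = sum(
    all(random.random() < 0.5 for _ in range(5))  # True if all 5 tosses land heads
    for _ in range(trials)
)
print(all_heads / trials)  # hovers right around 1/32, about 0.031
```

About 3 in every 100 perfectly fair coins come up all-heads, exactly as the 1/32 math says. Nothing is wrong with those coins; they just got lucky.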
Back to this paper. From the Statistical Analysis section:
“All tests were two-tailed; statistical significance was set at P<0.05 without correction for the number of statistical tests performed.”
In layman’s terms, this means that to be extra conservative and pick up any POSSIBLE relationships between mercury and cognitive outcomes, they would consider it a statistically “significant” result if there was less than a 5% probability of the result occurring by chance alone (ugh, even this simplified explanation of a p-value makes me cringe, but there’s no way to do it better without turning this into pages and pages). They looked at a LOT of tests. If mercury had zero cognitive effects, we would have expected to see relationships between mercury exposure and a few of the cognitive tests (in both directions, positive and negative) by chance alone.
So: nobody thinks that mercury exposure is a GOOD thing. The reason they mentioned that mercury exposure was associated with a few positive effects AND a few negative effects was NOT to illustrate that mercury is “good” but rather to show that there were spurious relationships that went in BOTH directions. If mercury exposure truly worsened cognitive function, we would ONLY see those results going in one direction (i.e. a lot of tests with no significant relationship, a few tests with significant NEGATIVE relationships, and zero tests with significant POSITIVE relationships). Since we see mostly NULL results (no relationship) with just a handful of positives and a handful of negatives, we’re really, really, really, really, really stretching to say that mercury has a significant detrimental effect on cognitive function.
So no, you don’t get to just dismiss the positive results because “nobody thinks the stuff is healthful” and count the negative ones. That isn’t how science works. The point isn’t that the positive associations are “real” so much as that the “significant” associations between mercury and a very large battery of cognitive outcomes are equally divided between positive and negative - which means that, in all likelihood, MERCURY EXPOSURE FROM VACCINES DOES NOT HAVE ANY COGNITIVE EFFECTS, GOOD OR BAD.
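One more simulation to drive this home (Python again, toy data, nothing from the actual paper): run 1,000 comparisons where the two groups come from the SAME distribution - i.e., a world where mercury has NO effect whatsoever - and watch the “significant” findings split roughly evenly between positive and negative, just like the pattern in the paper. I’m using a crude two-sample z-test here purely for illustration.

```python
import random
import statistics

random.seed(0)

def null_test(n=50):
    """Compare two groups drawn from the SAME distribution and return
    the direction of a 'significant' difference at p < 0.05 (two-tailed),
    or 0 if not significant. Any hit is a false positive by construction."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = diff / se
    if z > 1.96:
        return 1    # spurious "positive" finding
    if z < -1.96:
        return -1   # spurious "negative" finding
    return 0

results = [null_test() for _ in range(1000)]
pos = results.count(1)
neg = results.count(-1)
print(pos, neg)  # a few dozen hits total, split between BOTH directions
```

Roughly 5% of the tests come back “significant”, half positive and half negative, even though the true effect is exactly zero in every single one. That’s the signature of chance, not of a real effect - and it’s the same signature this study shows.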
So, in summation:
- any relationships between mercury and cognitive outcomes were NOT due to differences in maternal IQ, socioeconomic status, etc.
- dismissing the positive results and focusing on the negative ones is a fallacious way of doing science.
Final statement:
Look, man, you’re well-intentioned and I am, sincerely, trying to be patient and helpful. But when a layperson says “I have examined the data and conclude there’s no reliable conclusion to be drawn from the results” - please keep in mind that this was published in the New England Journal of Medicine, which is the place to publish in medical research, and it was probably reviewed by a handful of the smartest people they could find, knowing that this is an incredibly sensitive and explosive topic. It’s somewhere between “well-intentioned but misguided” and “extremely arrogant” to think that you, the lone wolf, are going to read this paper and find THE ONE THING that a slew of PhD epidemiologists and biostatisticians FORGOT in their study design!
Are scientists infallible? NO, of course not! But for you to say “Wait, I found it! They didn’t remember to account for this thingy!” when they actually DID account for those thingies should be a teachable moment here. The people who do this research aren’t dumbasses. It’s not their first rodeo. And even if they were dumbasses, this had to pass peer review at a REAL journal with credibility, not just get posted on the website of some quack with an agenda. Please double-check before you assume that they missed something, or ask someone who DOES know the science well enough, instead of trying to read the paper and draw your own conclusions.
Ugh, not to be a dick, but I really can’t emphasize this enough: this is why we have “experts” in certain fields. You wouldn’t let your next-door neighbor do your heart surgery, unless you knew he was a heart surgeon. You wouldn’t let your next-door neighbor build you a car, unless you knew he was a qualified mechanic. So why on Earth would you ask a layperson with no formal education in the area to analyze and interpret the data from a scientific study? I spent just as much time training to do “this” - analyze, pick apart, and interpret studies - as surgeons spend training to do surgery.
Yes, it’s healthy to occasionally prod and question the experts and make sure that their conclusions are sound. And a good “expert” should be able to respond in a reasonable way. But there’s a point when the experts, unfortunately, get tired of explaining something.