This pretty much sums up how RFK, Jr. looks and sounds when he’s talking about vaccines.

It always amuses me how antivaccine activists have such a love-hate relationship with academia, particularly the higher echelons of academia. On the one hand, they routinely denigrate academics because well-designed, well-executed epidemiological studies testing the hypothesis that vaccines are correlated with an increased risk of autism always come up empty. That’s because vaccines don’t cause autism. Truth be told, I used to hedge a bit when I said that, for the simple reason that it’s impossible ever to completely prove a negative, but over the 12 years I’ve been doing this, I’ve covered more studies than I can remember testing this very hypothesis, and a clear pattern has emerged. The best studies looking at whether vaccines increase the risk of autism have all been negative, and only poorly designed, poorly executed, and poorly analyzed studies by “scientists” associated with the antivaccine movement tend to show anything resembling a positive result—and even many of these are negative. So, barring highly compelling new evidence, I just say it directly: Vaccines don’t cause autism. In any case, antivaxers detest, disparage, and otherwise denigrate medical academia because it doesn’t support their delusion that vaccines are harmful and cause autism. Although there are fewer studies looking at other disorders, ranging from mild conditions to ones as severe as sudden infant death syndrome, those studies, by and large, come up empty too.

On the other hand, antivaxers are desperate for validation. They crave any evidence that real scientists take them seriously or, even better, have produced evidence that supports their delusion that vaccines cause autism (or any of the other disorders, conditions, and diseases they attribute to vaccines). That’s a good explanation for an article by President Donald Trump’s new BFF, antivaccine activist Robert F. Kennedy, Jr., “Yale University Study Shows Association Between Vaccines and Brain Disorders.” Seemingly having his delusions validated by Yale (he barely mentions Penn State, the other university that contributed to the study), RFK Jr. is practically giddy:

A team of researchers from the Yale School of Medicine and Penn State College of Medicine have found a disturbing association between the timing of vaccines and the onset of certain brain disorders in a subset of children.

Analyzing five years’ worth of private health insurance data on children ages 6-15, these scientists found that young people vaccinated in the previous three to 12 months were significantly more likely to be diagnosed with certain neuropsychiatric disorders than their non-vaccinated counterparts.

This new study, which raises important questions about whether over-vaccination may be triggering immune and neurological damage in a subset of vulnerable children (something parents of children with autism have been saying for years), was published in the peer-reviewed journal Frontiers in Psychiatry, Jan. 19.

Hmmm. Frontiers in Psychiatry? I’ve encountered Frontiers journals before. Suffice to say, I have not been impressed. Other Frontiers journals, for instance, have shown an unfortunate susceptibility to antivaccine pseudoscience. Be that as it may, my skeptical antennae always start to twitch whenever I see someone like RFK Jr. exulting over a study. Let’s just say I have a long history of seeing the sort of execrable “science” he routinely embraces, as long as it supports his belief that mercury in vaccines causes autism, even though the mercury-containing preservative thimerosal was removed from childhood vaccines 15 years ago.

Before I examine the study itself, which, not surprisingly, is nowhere near as convincing as RFK Jr. portrays it (to put it mildly, given how awful the study is), let’s see what RFK Jr. thinks of it:

More than 95,000 children in the database that were analyzed had one of seven neuropsychiatric disorders: anorexia nervosa, anxiety disorder, attention deficit and hyperactivity disorder (ADHD), bipolar disorder, major depression, obsessive-compulsive disorder (OCD) and tic disorder.

Children with these disorders were compared to children without neuropsychiatric disorders, as well as to children with two other conditions that could not possibly be related to vaccination: open wounds and broken bones.

This was a well-designed, tightly controlled study. Control subjects without brain disorders were matched with the subjects by age, geographic location and gender.

As expected, broken bones and open wounds showed no significant association with vaccinations.

New cases of major depression, bipolar disorder or ADHD also showed no significant association with vaccinations.

I could tell from RFK Jr.’s description that there was very likely a confounder or some other problem that would mark the results of this study as likely spurious. In actuality, there are a number of issues with the study that make it far less of a slam dunk than RFK Jr. and other antivaccinationists seem to think it is. If you don’t believe me, let’s head on over to the original study by Leslie et al, “Temporal Association of Certain Neuropsychiatric Disorders Following Vaccination of Children and Adolescents: A Pilot Case–Control Study,” and take a look. It’s open access; so you can all read as much (or as little) of the study as you like. On to the study!

Do we really need to do another study?

My first question, before I even started to read the paper, was: Why was this study necessary? The answer is quite simple. It wasn’t. There’s already copious evidence that vaccines are not associated with autism or other neurodevelopmental disorders. For instance, a large and far better study in 2007 quite emphatically did not support a causal relationship between vaccines and neurodevelopmental disorders other than autism, while the followup study to that in 2010 just as emphatically did not support a potential causal relationship between vaccines and autism, as many others did not. Both studies were far better designed than this one. So how do the authors justify doing yet another study to study what’s been studied ad nauseam with negative results? I’m going to use a longer quote than usual because it’s important:

In light of the role of the immune system in these central nervous system (CNS) conditions, the impact of vaccines on childhood-onset neuropsychiatric diseases had been considered and was mainly addressed with regards to the administration of the measles, mumps, and rubella (MMR) vaccine (and its various components) and the subsequent development of autism spectrum disorder (ASD). Although the controversy over MMR vaccination and ASD still exists for some members of the public, this association has been convincingly disproven (9, 10). On the other hand, the onset of a limited number of autoimmune and inflammatory disorders affecting the CNS has been found to be temporally associated with the antecedent administration of various vaccines (11). These disorders include idiopathic thrombocytopenic purpura, acute disseminated encephalomyelitis, and Guillain–Barré syndrome among others (12–16). More recently, data have emerged indicating an association between the administration of the H1N1 influenza vaccine containing the AS03 adjuvant and the subsequent new onset of narcolepsy in several northern European countries (17, 18). The immune mechanisms and host factors underlying these associations have not been identified or fully characterized, although preliminary data are beginning to emerge (18–23).

Given this growing body of evidence of immunological involvement in CNS conditions, and despite the controversy concerning the link between ASD and MMR and the clear public health importance of vaccinations, we hypothesized that some vaccines could have an impact in a subset of susceptible individuals and aimed to investigate whether there is a temporal association between the antecedent administration of vaccines and the onset of several neuropsychiatric disorders, including OCD, AN, tic disorder, anxiety disorder, ADHD, major depressive disorder, and bipolar disorder using a case–control population-based pediatric sample (children aged 6–15 years). To assess the specificity of any statistical associations, we also determined whether or not there were any temporal associations between antecedent vaccine administration and the occurrence of broken bones or open wounds.

One can’t help but note that the disorders listed that occur in the CNS after vaccines are actually quite rare, particularly Guillain-Barré. The authors of the article referenced found only 71 cases between 1979 and 2013. That’s 71 cases out of billions of doses of vaccines administered over 34 years. Remember, what this paper is claiming to look at is not serious demyelinating reactions to vaccines, which are very rare, but the relationship between vaccination and common conditions, like obsessive-compulsive disorder (OCD), anorexia nervosa (AN), anxiety disorder, chronic tic disorder, attention deficit hyperactivity disorder, major depressive disorder, and bipolar disorder. One notes that the authors didn’t look at autism, and they didn’t really explain why, other than to note that the vaccine-autism link has been refuted by multiple studies, to which I respond: Then autism would have made an excellent negative control, to check the validity of the model, now, wouldn’t it? One also notes that antivaxers flogging this paper are annoyed that the authors didn’t look at autism, even though the idea that vaccines cause autism is the central myth of the antivaccine movement.

As for using the association of the H1N1 influenza vaccine with narcolepsy as a justification, it’s important to note that this is a strange case. The association was observed only in specific countries and not in others, and the vaccine does not appear to be a consistent or unique risk factor for narcolepsy across populations. Overall, the data were too confusing to derive any clear picture of whether the H1N1 vaccine was a true risk factor. On the other hand, there are data suggesting that Pandemrix triggers antibodies that can also bind to a receptor in brain cells that helps regulate sleepiness in genetically susceptible people. Be that as it may, the narcolepsy result is thin gruel to justify a study like this.

Come to think of it, so are all the other justifications listed.

A truly remarkably bad study

But what about the study itself? Basically, it’s a case control study. As you recall, a case control study is a form of epidemiological study that looks at risk factors in those who are diagnosed with a condition (cases) compared to those who are not (controls). For instance, a case control study might find that people with lung cancer (cases) are far more likely to have a significant smoking history than those without (controls). One thing about a case control study is that the selection of controls is critical, because the subjects cannot be randomized. Thus, the controls must be chosen to be as similar as possible to the cases in everything other than the condition being examined. This is not as easy as it sounds.
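
To make the case-control logic concrete, here’s a minimal sketch of the arithmetic with a hypothetical smoking-and-lung-cancer 2x2 table; the counts below are invented purely for illustration and come from no real study:

```python
# Minimal sketch of case-control arithmetic with hypothetical counts.
# Cases = people with the condition; controls = comparable people without it.
# The exposure odds ratio compares the odds of prior exposure in each group.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Exposure odds ratio from a 2x2 case-control table."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical example: 80 of 100 lung cancer cases smoked vs. 30 of 100 controls.
or_smoking = odds_ratio(80, 20, 30, 70)
print(f"Odds ratio: {or_smoking:.1f}")  # ~9.3: smoking history far more common in cases
```

Because the controls stand in for the population that produced the cases, everything hinges on choosing controls that differ from the cases only in the condition under study.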

This particular case control study used data from 2002 to 2007 from the MarketScan® Commercial Claims and Encounters database, which is constructed and maintained by Truven Health Analytics. MarketScan consists of de-identified reimbursed health-care claims for employees, retirees, and their dependents of over 250 medium and large employers and health plans. Individuals included in the database are covered under private insurance plans; no Medicaid or Medicare data are included. The database includes claims information describing the health-care experiences for approximately 56 million covered patients per year and is divided into subsections, including inpatient claims, outpatient claims, outpatient prescription drug claims, and enrollment information. Claims data in each of the subsections contain a unique patient identifier and information on patient age, gender, geographic location (including state and three-digit zip code), and type of health plan.

You can see one thing right away. This is a select population, only patients with private insurance belonging to these plans. Another issue is that this is what we in the biz call administrative data. Administrative data are data collected not for research purposes, but for administrative purposes, primarily registration, transaction, and record keeping, usually during the delivery of a service. In this case, the authors used a health insurance administrative database. That means it’s just diagnoses, procedures and interventions, some demographic data, and, above all, billing codes. Now, there are advantages and disadvantages to using administrative data. On the one hand, administrative data allow for huge numbers, are unobtrusive given that these are data that are collected anyway, and can uncover information that a study subject might not provide in an interview. However, there are many disadvantages as well, and they’re not small. One very common drawback to using administrative data is that a lot of potentially relevant clinical and demographic data aren’t captured. In other words, the data are restricted to just what is needed for administrative purposes, and therefore the amount of data and the definitions of conditions are often insufficiently granular. Another problem with administrative data is that there’s no way of knowing whether individuals with a given diagnosis have a severe or mild form of the condition coded for, particularly with ICD-9 data, which is what was used because ICD-10 coding was only mandated a year and a half ago. Sometimes there are coding games that are played to maximize reimbursement as well. So it’s difficult to tell whether the subjects in the database have, according to strict diagnostic criteria, the diagnoses attributed to them, or how severely they are affected. It’s thus not surprising that the use of administrative data can frequently provide an incorrect estimate for various conditions and risk factors, as has been described for sickle cell disease, where administrative data grossly underestimated the rate of transfusion. (Discussions of the advantages and disadvantages of using administrative data can be found here and here.)

Now here’s how the authors did the study:

The study sample consisted of children aged 6–15 with a diagnosis of one of the following conditions (ICD-9 codes in parentheses): OCD (300.3), AN (307.1), anxiety disorder (300.0–300.2), tic disorder (307.20 or 307.22), ADHD (314), major depression (296.2–296.3), and bipolar disorder (296.0–296.2, 296.4–296.8). To test the specificity of the models, we also included children with broken bones (800–829) and open wounds (870–897). To identify new cases, we further limited the sample in each diagnostic group to children who were continuously enrolled for at least 1 year prior to their first diagnosis for the condition (the index date). Next, a matched one-to-one control group was constructed for each diagnostic group consisting of children who did not have the condition of interest and were matched with their corresponding case on age, gender, date of the start of continuous enrollment, and three-digit zip code. Because vaccines tend to occur during certain times of year (such as before summer camps or the beginning of school), controls were also required to have an outpatient visit at which they did not receive a vaccine within 15 days of the date that the corresponding case was first diagnosed with the condition, in an effort to control for seasonality. The date of this visit was the index date for children in the control group.

Can you see some problems already? First, in a case control study it is often desirable to use more controls than cases; that wasn’t done here. That isn’t a horrible flaw, just one that I question, given that one of the key advantages of an administrative database is its large number of subjects. More important is how few descriptors were used to match the controls: age, gender, start date of insurance enrollment, and zip code. I’ll be honest and say that I’m not sure whether the way they tried to control for seasonality of vaccine administration is valid or not. I will assume for now that it was, as that’s not necessary for my conclusion that this paper is pretty crappy.
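
For illustration, here is roughly what one-to-one matching on a handful of variables looks like; the column names below are hypothetical stand-ins for the kind of fields described above, not the actual MarketScan schema:

```python
import pandas as pd

# Hypothetical columns standing in for the matching fields described above
# (age, gender, 3-digit zip, enrollment start); not the real MarketScan layout.
cases = pd.DataFrame({
    "case_id": [1, 2],
    "age": [10, 12],
    "gender": ["F", "M"],
    "zip3": ["100", "941"],
    "enroll_start": ["2003-01", "2004-06"],
})
pool = pd.DataFrame({
    "control_id": [101, 102, 103],
    "age": [10, 12, 12],
    "gender": ["F", "M", "M"],
    "zip3": ["100", "941", "941"],
    "enroll_start": ["2003-01", "2004-06", "2004-06"],
})

# Join cases to all eligible controls on the matching variables, then keep one
# control per case (first eligible match here; real studies typically sample
# controls at random and without replacement).
matched = (
    cases.merge(pool, on=["age", "gender", "zip3", "enroll_start"])
         .drop_duplicates(subset="case_id")
)
print(matched[["case_id", "control_id"]])
```

Note that matching on only these four variables does nothing about differences in, say, how often a family seeks health care, which is exactly the confounder discussed further below.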

Now let’s look at how the analysis was done:

The analyses were performed for each diagnostic group (and their controls) separately. Children with multiple conditions (e.g., ADHD and tic disorder) were included in each of the corresponding analytic groups. First, the proportion of children who were exposed to vaccines in the period before the index date was compared across the case and control groups. Next, bivariate conditional logistic regression models were estimated to determine the hazard ratios (HRs) and 95% confidence intervals (95% CIs) associated with the effect of vaccine exposure on having the condition of interest. Separate models were run for the 3-, 6-, and 12-month periods preceding the index date for each diagnostic group.
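
For readers curious what a conditional (matched) logistic regression actually looks like mechanically, here is a minimal sketch on simulated data, assuming a reasonably recent statsmodels (which provides ConditionalLogit); the group structure, exposure variable, and all numbers are invented for illustration and have nothing to do with the MarketScan data or the authors’ actual code:

```python
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)

# Simulated matched pairs: each pair (group) has one case (y=1) and one control (y=0).
n_pairs = 500
groups = np.repeat(np.arange(n_pairs), 2)
y = np.tile([1, 0], n_pairs)

# Hypothetical binary exposure ("any vaccine in the prior 3 months"),
# generated here with no true case/control difference, so the OR should be ~1.
exposure = rng.binomial(1, 0.4, size=2 * n_pairs).astype(float)

model = ConditionalLogit(y, exposure[:, None], groups=groups)
result = model.fit()

# Exponentiated coefficient = matched odds ratio, with its 95% confidence interval.
print("OR:", np.exp(result.params[0]))
print("95% CI:", np.exp(result.conf_int()[0]))
```

Nothing about such a model is objectionable in itself; the trouble comes from running many of these models across vaccines, conditions, and time windows without correcting for multiple comparisons, as discussed below.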

This leads to the results as reported:

Subjects with newly diagnosed AN were more likely than controls to have had any vaccination in the previous 3 months [hazard ratio (HR) 1.80, 95% confidence interval 1.21–2.68]. Influenza vaccinations during the prior 3, 6, and 12 months were also associated with incident diagnoses of AN, OCD, and an anxiety disorder. Several other associations were also significant with HRs greater than 1.40 (hepatitis A with OCD and AN; hepatitis B with AN; and meningitis with AN and chronic tic disorder).

So the authors found some associations, to which I say: Whoopee. RFK Jr. characterizes this paper as a “well-designed, tightly controlled study.” No it wasn’t. Not at all. RFK Jr. wouldn’t know a well-designed study if it bit him in the posterior. The only reason he thinks this study is “well designed” and “tightly controlled” is because it provides results that he likes. In fact, what the authors did is the same thing the authors of a recent acupuncture study I noted did: p-hacking. They did a whole bunch of comparisons using a nominal p<0.05 and didn’t correct for multiple comparisons. In other words, it’s almost certainly statistical noise, given that most of the associations are modest and that the associations are all over the place.
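
To see why dozens of uncorrected comparisons at a nominal p<0.05 are expected to “find” associations even in pure noise, here is a small simulation; the number of tests is a rough stand-in for a table like the paper’s, not an exact count:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 45        # rough stand-in for vaccines x conditions x time windows
alpha = 0.05
n_per_group = 200

false_positives = []
for _ in range(500):          # repeat the whole "study" 500 times
    hits = 0
    for _ in range(n_tests):
        # Two groups drawn from the SAME distribution: any "significant"
        # difference is a false positive by construction.
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    false_positives.append(hits)

print("Mean false positives per study:", np.mean(false_positives))  # ~45 * 0.05 = 2.25
print("Bonferroni-corrected threshold:", alpha / n_tests)           # ~0.0011
```

With roughly 45 tests, two or three “significant” hits per study are expected from noise alone, which is why a correction such as Bonferroni, or a single pre-specified primary comparison, matters.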

Check out Table 2. It is the very definition of a p-hacking table. All the bold results are “statistically significant.” Just peruse the table. You don’t even have to look that closely to see that the receipt of any vaccine within 3, 6, and 12 months is correlated, both negatively and positively, with almost every condition examined, including broken bones and open wounds. (Yes, if you accept this study’s results you have to conclude just as much that vaccines are a risk factor for broken bones! A modest one, true, but a risk factor nonetheless. Yes, that’s sarcasm.) You can look down the list at individual vaccines and see that the influenza vaccine is associated with almost as many conditions. In fact, one thing the authors mention in passing is that administration of any vaccine seems to modestly decrease the risk of major depression and bipolar disorder. Again, if you accept that vaccines increase the risk of, say, anorexia nervosa, there’s no a priori reason not to accept the result that they also decrease the risk of major depression and bipolar disorder. Either that, or you have to accept that what you’re looking at is statistical noise, which is the far more likely explanation.

Beyond p-hacking: Bias

There also seems to be a significant bias in the results, given that more of the associations were positive than negative and given how many of them there were. One problem with a study of this type is that it can’t control for health-seeking behavior very well, if at all, and no attempt was even made here. For instance, parents who seek regular conventional medical care for their children would be more likely to keep them up to date on their vaccines. Those same parents would be more likely to have their children seen regularly enough to detect the neurological and psychiatric conditions associated with vaccines in the study. We see this confounder in studies of autism epidemiology all the time. Parents who take their children to physicians more often are more likely to have a child diagnosed with autistic disorder because screening always turns up more cases of a disorder. It could also explain why broken bones and open wounds correlate with vaccination. It’s not because vaccines cause these conditions, but rather because, for example, parents who don’t regularly take their children to the doctor might be less likely to vaccinate and less likely to take them to the emergency room or doctor’s office for cuts that might need a couple of stitches. Ditto for fractures that might not be so clinically apparent, such as a “greenstick” fracture, which is easily mistaken for a sprain and can heal on its own. I must emphasize again that administrative data often don’t give an indication of the severity of a condition, particularly since this study used ICD-9 data, which are far less detailed than ICD-10 data; again, ICD-10 coding was only made mandatory less than a year and a half ago.

Here’s another way health-seeking behavior could influence results. According to the CDC recommendations, everyone should get a flu shot every year. Consider this hypothetical situation: A child’s parents take him in for a flu shot. During that visit, the doctor or nurse notices behaviors that might indicate OCD, tics, or another neurological condition, and refers the child to a neurologist or a mental health professional, as appropriate. That takes a while to set up. Then maybe the workup takes more than one appointment and/or a series of tests. Three to twelve months after the visit for the flu shot, the child has one of the diagnoses that appeared to be correlated with the flu vaccine, even though there is no causation at all.
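
As a toy illustration of how this confounding plays out statistically, here is a simulation in which vaccination and diagnosis share no causal link at all but both depend on how often a family sees the doctor; every probability below is invented purely to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Latent "health-seeking" trait: how often the family actually goes to the doctor.
visits_often = rng.binomial(1, 0.5, n)

# Vaccination and diagnosis both become more likely with frequent visits,
# but neither causes the other (all probabilities invented for illustration).
p_vacc = np.where(visits_often == 1, 0.80, 0.40)
p_dx = np.where(visits_often == 1, 0.06, 0.02)
vaccinated = rng.binomial(1, p_vacc)
diagnosed = rng.binomial(1, p_dx)

def crude_or(exposure, outcome):
    """Crude odds ratio from a 2x2 table of exposure vs. outcome."""
    a = np.sum((exposure == 1) & (outcome == 1))
    b = np.sum((exposure == 1) & (outcome == 0))
    c = np.sum((exposure == 0) & (outcome == 1))
    d = np.sum((exposure == 0) & (outcome == 0))
    return (a * d) / (b * c)

# Ignoring health-seeking behavior, the OR is spuriously elevated (~1.6 here).
print("Crude OR:", crude_or(vaccinated, diagnosed))

# Stratifying on the confounder makes the association vanish (OR ~ 1 in each stratum).
for v in (0, 1):
    mask = visits_often == v
    print(f"OR within visits_often={v}:", crude_or(vaccinated[mask], diagnosed[mask]))
```

The crude odds ratio comes out well above 1 even though, by construction, vaccination has no effect on diagnosis; stratifying on (or properly adjusting for) health-care utilization makes the association disappear.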

I’d be willing to bet that all of the associations discovered are spurious, the result of a combination of not controlling for multiple comparisons and not controlling for a very big confounder, health-seeking behavior.

But it gets curiouser and curiouser

Unfortunately, I haven’t described all the problems with the study yet. As they say, but wait, there’s more! Let’s take a look at the funding sources:

This research was funded by donations from RK, BR, and Linda Richmand.

Now this is bizarre in the extreme: all but one of the authors appears unqualified to undertake a study of this nature, and even the pediatrician is not an epidemiologist or statistician. For example, one of the authors, Douglas L. Leslie, is a health economist. Two of the other authors (Robert Kobre and Brian Richmand) self-funded the work. Linda Richmand, who also funded the work, is almost certainly Brian Richmand’s wife. This is all very strange until you realize that, like Leslie, Brian Richmand is also not an epidemiologist, physician, scientist, or health researcher. He is a lawyer. Oddly enough, on his Stanford Law School page, there is this blurb:

Although Brian’s formal training has been largely in law and finance, he is most proud of his scientific work in a pediatric autoimmune disorder commonly known as PANDAS. Brian has been instrumental in designing and organizing 5 completed and ongoing immunology research studies at Yale Medical School, Oklahoma University Health Sciences Center, and Schneider Children’s Medical Center (Tel Aviv, Israel). Brian has also authored and co-authorized multiple papers published in peer-reviewed medical journals.

PANDAS. It had to be PANDAS. And guess what? Out of the four publications he authored or co-authored that I could find on PubMed—surprise! surprise!—one of them is a paper promoting the vaccine-autism link! Guess what journal it appeared in? If you guessed Medical Hypotheses, you’ve learned much. A certain friend of this blog even wrote about his awful paper when it came out!

It gets worse though. Robert Kobre is not a scientist either. He is the Managing Director at Credit Suisse Securities (USA) LLC and also chairman of the board of directors of the Global Lyme Alliance, which from its website appears to be very much into chronic Lyme disease woo. One wonders whether that colors his views of vaccines, one does.

So how is it that a lawyer from Stanford and an investment banker at Credit Suisse are listed as being affiliated with the Yale Child Study Center when neither of their names appears in an online list of faculty there? Of the other authors, James F. Leckman is Yale faculty, and Selin Aktan Guloksuz appears not to be faculty but could well be a student or postdoc. However, as far as I can tell, neither Richmand nor Kobre is formally affiliated with Yale. Inquiring minds want to know how this can be. Yes, precious, they do. In particular, those inquiring minds want to know why an academic pediatrician as distinguished as Dr. Leckman allowed his name to be put on such a shoddy paper. Actually, on second thought, check out the contribution section:

DL, RK, BR, and JL designed the study and wrote the protocol. SG commented on the protocol. DL undertook the statistical analysis. BR, DL, and JL wrote the first draft of the manuscript. All the authors commented on the manuscript. All the authors contributed to and have approved the final manuscript.

So Dr. Leckman was involved in designing the study, but the health economist (Leslie) alone did the statistical analysis. This is a great example of why it is mandatory for a statistician to be involved in the design of an epidemiological study like this from the very beginning and to do the statistical analysis. I don’t see any indication that any of the authors of this article were qualified to design and analyze a case control study like this, and it shows. It’s possible that Selin Aktan Guloksuz could be an epidemiologist, but in reality I’m having trouble finding much about Guloksuz other than publication lists. Nor do the reviewers look particularly qualified to review a paper like this. They’re all psychiatrists, and all from Indian universities I’ve never heard of. From my perspective, any epidemiological study needs to be reviewed by an epidemiologist or a statistician, preferably both.

The wrap-up: No, this study isn’t evidence mandating further investigation into vaccines and autism

There are so many dodgy things about this paper that I could continue to go on, but for purposes of a wrap-up, what you need to know is that, no, it doesn’t show that vaccines cause anorexia nervosa or tics, or the other neuropsychiatric disorders linked to them; that it isn’t even good evidence of a correlation between vaccines and these conditions; that it was funded by two of the authors and the wife of one of the authors; that one of those authors has a history of writing antivaccine articles for Medical Hypotheses; and, finally, that the other is chairman of the board of directors of a Lyme disease charity that appears to be heavily into chronic Lyme disease woo. Basically, it’s bad epidemiology and statistics carried out mostly by non-epidemiologists and non-statisticians. Indeed, it’s so bad that I was surprised not to see someone like Andrew Wakefield, Mark Geier, or Christopher Shaw associated with it. How something this bad could be published by Yale faculty (plus non-Yale faculty listed as affiliated with Yale) is beyond me. On the other hand, Yale is very big into “integrative medicine,” which could easily be producing the same pernicious effect there that it has caused at the Cleveland Clinic.

Whatever the case, when I decided to look at this paper, I hit the jackpot in terms of—shall we say?—teaching opportunities in critical thinking. Thanks, RFK, Jr.!
