Medicine is impossible. Really. The amount of information that flows out of the interwebs is amazing, and the time to absorb it is comparatively tiny.

If you work, sleep and have a family, once those responsibilities are complete there is remarkably little time to keep up with the primary literature. I have made two of my hobbies (blogging and podcasting) dovetail with my professional need to keep up to date, but most health care providers lack the DSM-IV diagnoses to consistently keep up.

So we all rely on shortcuts. People rely on me to put new infectious disease information into context, and there are those I rely upon to help me understand information both in my specialty and in fields that are unrelated to ID.

Up and down the medical hierarchies we trust that others are doing their best to understand the too numerous to count aspects of medicine that no single person could ever comprehend.

If I want to know about the state of the art in the treatment of atypical mycobacteria, or how best to treat Waldenström's, or who knows the most about diagnosing sarcoid, there is always someone who can distill their expertise on a topic to the benefit of the patient and my knowledge.

Trusting others is the biggest shortcut we routinely take in medicine to wade through the Brobdingnagian amounts of information that flood into medical practice. We have to trust the other clinicians, the researchers and the journals that all the information is gathered and interpreted honestly and accurately.

I understand that the world is a tricky and confusing place and that even under the best of circumstances the literature has ample opportunity to be wrong. But in the end the truth, or some approximation of it, will out.

Trust is a fragile foundation upon which to build an edifice, but the practice of medicine would be impossible without it. It is one of the reasons medical fraud is particularly heinous; it strikes to the heart of the practice of medicine.

One of the other shortcuts we use is statistics. It is a quick and dirty way to check the validity of results, and there is nothing like a good p-value to make a result believable. The smaller the p-value, the better. That is a simple, and sometimes misleading, approach. Except for ID and SCAMs, I rarely have the luxury of time to read a study closely, so I look at the p-value instead.

I have long had a mental block with statistics. I took, and dropped, statistics once a year for 4 years in college. Once they got past the bell-shaped curve and the coin flip they would lose me. So I have to trust that the statistics are correct when I read a paper, and that bothers me. I would feel better if I knew that at one time I had been able to crank out the results with a pencil and a piece of paper.

Otherwise statistics are like the old New Yorker cartoon.

It was nice to be reminded for the umpteenth time that statistics can be tricky, with an article in Vaccine called 5 ways statistics can fool you—Tips for practicing clinicians. I write this, and my other blog, first to educate and entertain myself. As a side effect I hope to educate and entertain others, but I long ago realized that not everyone shares my aesthetic. There are those of you reading this for whom the 5 ways are old hat, part of your critical thinking skills. For me it is yet another attempt to understand statistical concepts with the depth I have for MRSA, rather than, say, the loop of Henle; my understanding of the latter lasts only as long as I am reading about it. If that. I still suspect the loop has as much validity as homeopathy.

Their opening statement is a masterpiece of understatement:

However, compounding the problem of finding and effectively using the medical literature is the fact that many, if not most, physicians lack core skills in epidemiology and statistics to allow them to properly and efficiently evaluate new research. This may limit their abilities to provide the best evidence-based care to patients.

No kidding. And this article is meant to be applied to reality/science-based treatment. As I mentioned in my last blog entry, it is even more problematic when statistics are applied to fantasy interventions like acupuncture or homeopathy. Then, not only do most physicians lack the core competencies to evaluate the paper, most are not able to recognize the subtle biases that allow magic to be perceived as real.

It is like having real scientists evaluate ESP, when a magician would be a better-qualified observer. I suspect that many editors and reviewers of SCAM papers are untrained in the skills required to evaluate SCAM research. They apply the rules of science where those rules do not apply.

The tips, with examples from the vaccine literature, are:

Tip #1: statistical significance does not equate to clinical significance

Tip #2: absolute risk rather than relative risk informs clinical significance

Tip #3: confidence intervals offer more information than p-values

Tip #4: beware multiple testing and the isolated significant p-value

Tip #5: absence of evidence is not evidence of absence
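Tips #2 and #4 are the ones that most often let a weak result masquerade as an impressive one, so here is a minimal Python sketch. The event rates and the 20-comparison count are made-up numbers for illustration, not figures from the Vaccine article:

```python
# Tip #2: absolute vs. relative risk, with hypothetical event rates.
# Suppose an outcome occurs in 2 of 1000 controls and 1 of 1000 treated patients.
control_risk = 2 / 1000
treated_risk = 1 / 1000

relative_risk_reduction = 1 - treated_risk / control_risk  # 0.50 -- "cuts risk in half!"
absolute_risk_reduction = control_risk - treated_risk      # 0.001 -- one patient per thousand
number_needed_to_treat = 1 / absolute_risk_reduction       # 1000 treated to prevent one event

print(f"RRR = {relative_risk_reduction:.0%}, "
      f"ARR = {absolute_risk_reduction:.3f}, "
      f"NNT = {number_needed_to_treat:.0f}")

# Tip #4: multiple testing. If the null hypothesis is true for every one of
# 20 independent comparisons, each tested at alpha = 0.05, the chance that
# at least one comes up "significant" by luck alone is:
alpha, n_tests = 0.05, 20
p_false_positive = 1 - (1 - alpha) ** n_tests
print(f"P(at least one spurious 'significant' result) = {p_false_positive:.2f}")  # ~0.64
```

A 50% relative risk reduction and a one-in-a-thousand absolute benefit can describe the same trial, and a fishing expedition of 20 subgroup analyses will land a "significant" p-value nearly two times out of three.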

It is remarkable how these tips can be applied to SCAM-related articles, and the articles are found wanting. For example, the acupuncture article I reviewed last week: barely significant p-values (not true, that's why the line through), large confidence intervals, and misleading multiple tests; the sine qua non of positive SCAM studies.

This, of course, is applying statistics to reality. Add a touch of prior plausibility and there is no reason to suspect that the barely positive effects noted in some SCAM studies are due to anything but bias. Since most of the interventions we discuss in this blog have essentially zero probability of a real effect, there must be an alternative explanation for what appear to be beneficial outcomes. In the case of SCAM, absence of evidence is very likely evidence of absence.
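The prior plausibility point can be made concrete with a back-of-the-envelope Bayes calculation. The numbers below are assumptions chosen for illustration: a prior probability of 0.001 that the intervention works, a study with 80% power, and the usual alpha of 0.05:

```python
# All three inputs are illustrative assumptions, not data from any trial.
prior = 0.001   # pre-study probability the intervention has a real effect
power = 0.80    # P(positive study | real effect)
alpha = 0.05    # P(positive study | no effect), the false-positive rate

# Bayes' theorem: P(real effect | positive study)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Posterior probability the effect is real: {posterior:.1%}")  # ~1.6%
```

Even a "statistically significant" positive study moves a 1-in-1000 prior to only about 1.6%; and with a prior of effectively zero, as with homeopathy, no single p-value can rescue the claim.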

Recently there were two other articles that looked at the effect of bias on outcomes in clinical trials: "Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors" and "Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors."

Almost the same article twice over.

In the two analyses they looked at the outcomes of clinical trials where the person determining the outcome (for example, the number of wrinkles after a therapy) was either blinded or not blinded to the therapy. No surprise: when people knew what the intervention was, they assessed it as more effective than when they were blinded.

Nonblinded assessors of subjective measurement scale outcomes in randomized clinical trials tended to generate substantially biased effect sizes.

And this is for relatively tightly controlled clinical trials. In real-world practice the tendency to overestimate the effect of a therapy must be even greater. It shows up most often in the phrase "I use X on my patients and that is how I know it works."

The strong effect of bias, seeing what you want to see regardless of what is actually there, is a common human characteristic. Being blind to that characteristic is a Dunning-Kruger variant common to all true believers.

Statistics makes my brain hurt. But if I remember some basic principles, the medical literature becomes a bit more comprehensible.

Posted by Mark Crislip

Mark Crislip, MD has been a practicing Infectious Disease specialist in Portland, Oregon, since 1990. He is a founder and  the President of the Society for Science-Based Medicine where he blogs under the name sbmsdictator. He has been voted a US News and World Report best US doctor, best ID doctor in Portland Magazine multiple times, has multiple teaching awards and, most importantly,  the ‘Attending Most Likely To Tell It Like It Is’ by the medical residents at his hospital. His growing multi-media empire can be found at