All scientists should be skeptics. Serious problems arise when a less-than-skeptical approach is taken to the task of discovery. Typically the result is flawed science, and for those significantly lacking in skepticism this can descend into pseudoscience and crankery. With the applied sciences, such as the clinical sciences of medicine and mental therapy, there are potentially immediate and practical implications as well.
Clinical decision making is not easy, and it is subject to a wide range of fallacies and cognitive pitfalls. Clinicians can make the kinds of mental errors that we all make in our everyday lives, but with serious implications for the health of their patients. It is therefore especially important for clinicians to understand these pitfalls and avoid them – in other words, to be skeptics.
It is best to understand the clinical interaction as an investigation, at least in part. When evaluating a new patient, for example, there is a standard format to the “history of present illness,” past medical history, and the exam. But within this format the clinician is engaged in a scientific investigation, of sorts. Right from the beginning, when their patient tells them what problem they are having, they should be generating hypotheses. Most of the history taking will actually be geared toward testing those diagnostic hypotheses.
For example, if a patient is being seen for a headache, they may relate that the pain is most severe in the front of the head and extends down into the face. This could be a migraine headache, or a sinus headache, or, less likely, the result of an underlying serious pathology. The physician may then ask if the patient has nasal congestion – this is not a random or screening question, but one designed specifically to test the hypothesis that the headache may be a sinus headache. During the exam the physician may also apply pressure to the sinuses to see if they are sensitive, and perhaps this even reproduces the patient’s headache.
This seems rather straightforward, but actually all of the potential pitfalls of scientific investigation are in play. For example, while it is necessary to ask specific questions to test diagnostic hypotheses, this can easily lead to confirmation bias. Let’s say a clinician asks patients with frontal headaches if they have nasal congestion, and 40% of them (to use a hypothetical figure just for illustration) say that they do. This might lead the clinician to conclude that this is an important finding, and that it confirms that many patients with frontal headaches probably have sinus headaches. What we do not tend to do, however, is institute a control – in this case, ask patients with posterior headaches if they have nasal congestion, or patients without headaches at all. Perhaps 40% of all patients report nasal congestion.
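To make that concrete, here is a minimal sketch in Python of what the missing control reveals (the counts are invented purely for illustration): if patients report nasal congestion at roughly the same rate whether or not they have a frontal headache, the 40% figure carries no diagnostic weight, however compelling it feels in isolation.

```python
# Hypothetical counts -- purely illustrative, not real clinical data.
groups = {
    "frontal headache": {"congestion": 40, "no_congestion": 60},
    "no headache (control)": {"congestion": 40, "no_congestion": 60},
}

for name, counts in groups.items():
    total = counts["congestion"] + counts["no_congestion"]
    rate = counts["congestion"] / total
    print(f"{name}: {rate:.0%} report nasal congestion")

# Both groups report congestion at the same rate, so congestion by itself
# tells us nothing about whether a frontal headache is a sinus headache --
# the apparent confirmation comes from never asking the control group.
```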
This fallacy is similar to the toupee fallacy – believing that one can always recognize a toupee. That belief may persist, however, simply because you are not aware of (and therefore never count) the instances when you fail to recognize a toupee. In diagnostic terms, signs and symptoms may seem to correlate strongly with a certain clinical presentation, but that can be an illusion created by confirmation bias – only looking for the correlations that confirm your suspicions.
This is exactly why we need controlled data to guide our clinical assessments. What the clinician really needs to know, for example, is the predictive value of nasal congestion in someone who presents with a frontal headache.
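As a rough illustration of what “predictive value” means here, the sketch below uses Bayes’ theorem with hypothetical sensitivity, specificity, and prevalence figures (all three numbers are assumptions for illustration, not clinical estimates): even a reasonably sensitive and specific symptom can have a modest positive predictive value when the underlying condition is uncommon in the patients being seen.

```python
# All figures are hypothetical, for illustration only.
prevalence = 0.10    # fraction of frontal-headache patients who truly have sinus disease
sensitivity = 0.80   # P(nasal congestion | sinus headache)
specificity = 0.70   # P(no congestion | not a sinus headache)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)

# Positive predictive value: P(sinus headache | nasal congestion), via Bayes' theorem.
ppv = true_positives / (true_positives + false_positives)
print(f"PPV of nasal congestion for sinus headache: {ppv:.0%}")  # roughly 23%
```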
Another way to look at this is that anecdotal experience can be misleading, while carefully collected scientific data is reliable and predictive. This applies not only to questions about which treatments are safe and effective, but also to the application of those treatments to individual patients.
Clinicians are also prone to correlation fallacies – assuming causation from correlation and temporal sequence (post hoc ergo propter hoc). This may be especially true for apparently good responses to our treatments. If I give a patient a treatment for their headaches and the headaches get better, it is very tempting to take credit for the improvement. We need to recognize, however, that the improvement may have been entirely coincidental, or due to placebo effects. This has practical implications going forward. If the headache recurs, will the same treatment also be effective? If the treatment has side effects, are they worth it? There may also be diagnostic implications – if I gave the patient a migraine-specific treatment and they improved, does that mean they have migraines?
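A toy simulation can show why a good response, by itself, is weak evidence. This is my own illustrative sketch with made-up natural-history assumptions, not anything from the clinical literature: if headache episodes tend to resolve on their own within a few days, a completely inert treatment given at peak severity will still be “followed by” improvement much of the time.

```python
import random

random.seed(0)

# Toy assumption: each headache episode resolves on its own 1-5 days after
# its peak, regardless of any treatment.
def appears_to_respond():
    days_to_natural_resolution = random.randint(1, 5)
    # An inert "treatment" is given at peak severity; we (wrongly) count it a
    # success if the headache resolves within 2 days of treatment.
    return days_to_natural_resolution <= 2

trials = 10_000
responders = sum(appears_to_respond() for _ in range(trials))
print(f"Apparent response rate to an inert treatment: {responders / trials:.0%}")
# A substantial fraction "respond" even though the treatment did nothing,
# which is why improvement after treatment cannot, on its own, establish causation.
```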
There are other aspects of gathering the patient’s medical history that require the constant application of skepticism. First, the history is being filtered through a person (usually the patient, but perhaps a family member or other caregiver). This means the history is subject to all the flaws of human memory. Memories can be confused, fused, and altered over time. Every time you remember something you are actually reconstructing, and potentially changing, the memory. This process is exacerbated with a medical history, because a patient may have told their history to many individuals. There is therefore a large potential for contamination. Every time an interviewer asks a question they are potentially contaminating the patient’s memory with the content of that question.
There are also phenomena known as source amnesia and truth amnesia – we are better at remembering facts than we are at remembering where they came from (the source) and whether or not they are even true. So patients may relate a lot of information about their prior history, workup, treatment, and diagnosis while being confused about where the information came from, or even whether it is true.
Clinicians can also make the mistake of assuming that because a bit of patient history is written down in the chart it is more likely to be true. This is not necessarily the case, however. There is what we call “chart lore” – facts passed on from one clinician to the next as part of the history but never traced to the original source. Each person just assumes the last person checked it out before they wrote it down. I frequently see patients who are walking around with medical diagnoses they don’t have. One clinician told them they might have a certain diagnosis, or even that the chance they have it is very low, or perhaps that they don’t have it at all. They, however, leave with the impression that they do have the diagnosis, and they pass that on to the next clinician they see, who writes it down as part of their history.
Further, we need to recognize that each patient has their own narrative – the way in which they understand and perceive their own health and illness. This narrative then affects all the information they have about their illness – what they choose to reveal, what they don’t reveal, the timing of events, the chain of cause and effect, and which labels they accept. The clinician has to simultaneously understand this narrative and investigate what is objectively going on that has led to it. The narrative is important because it will influence how we communicate with the patient and make treatment decisions with them.
But we should not confuse this narrative with reality. The job of the clinician is to be detached, to step back and try to look through the narrative, the chart lore, the logical fallacies, imperfect memory, and all the rest to figure out what is really going on. In other words, the clinician has to be skeptical – skeptical of every piece of information and how it is interpreted, and skeptical of themselves and their own biases and desires.
This, of course, is an idealized goal, something we need to continuously strive for but will probably never fully attain. Perhaps the most pernicious aspect of the current infatuation with unconventional treatments is that it overtly promotes gullibility rather than skepticism on the part of the clinician (and everyone else). This is partly why unscientific medicine cannot be “integrated” into science-based medicine – they are fundamentally philosophically incompatible. Practicing science-based medicine requires rigor and skepticism, while the “alternative” requires a surrender to gullibility and naivete.