This is perhaps the most “meta” post I have written for SBM. One core approach of scientific skepticism, of which SBM is part, is to use metacognition – thinking about thinking – to explore information and our understanding of that information. Sometimes, however, we need to think about how we think about thinking.
Specifically, one of our goals is to directly counter misinformation about science and medicine, misinformation that is often the product of marketing campaigns funded by multi-billion dollar industries. We definitely perceive this as a David and Goliath situation, and probably spend too much time debating whether we are having a significant effect. Regardless, we are concerned with being as effective and efficient as possible in countering misinformation.
Fortunately, there is science to guide our promotion of science. How does misinformation change attitudes and how well do various strategies to counter misinformation work? A new study published in PLOS One (“Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence“) explores this issue further and supports the basic strategy that we generally take at SBM.
There are two experiments in this study by Cook, Lewandowsky, and Ecker. The first is actually quite negative. They used as their subject matter the acceptance of anthropogenic global warming (AGW). The exact topic probably doesn’t matter and the principles should apply to medical topics as well. They exposed subjects to false balance about AGW, implying that there is controversy and no real consensus on whether or not AGW is real. In one group they just presented the misinformation and false balance. In a second group they first exposed them to a statement about the fact that there is a strong consensus for AGW. In a third group they first exposed subjects to information about the misleading effects of false balance, and in a final group they exposed subjects to both the consensus information and the false balance “inoculation” information. A control group was presented with no misinformation.
They also surveyed the subjects regarding their acceptance of free market values and various attitudes regarding AGW. After exposure to all the information in the various groups, they surveyed them again regarding AGW. Overall the results were negative. There was no statistically significant effect on acceptance of AGW, on attribution of warming to human activity, on support for policies to address AGW, on trust in climate scientists, or on trust in contrarian scientists.
There was a small but statistically significant effect on the perceived consensus. This is a weak effect – essentially, telling people there is a strong consensus slightly countered later implying that there isn’t a strong consensus. Prior studies in which subjects were exposed to direct information showing a lack of consensus did not show this effect.
Attitudes predictably corresponded to free market values, again showing that ideology is the main determining factor with respect to accepting the science of AGW. Further, information about AGW had a polarizing effect, with strong free market supporters showing a “backfire” effect and rejecting AGW even more.
All of this is pretty disheartening. Misinformation + correcting information mainly had a polarizing effect on the attitudes of subjects, predicted by their free market ideology. I was not impressed by the small inoculating effect that was seen regarding the consensus specifically because the misinformation was only implied and not direct.
Fortunately there was a second experiment. This had a similar design, except that there were four groups: a control group with no intervention, a misinformation-only group, an inoculation group, and an inoculation + misinformation group. More importantly, the inoculation took a different form. Rather than just telling people correct information, or what false balance is, they explained to them how fake experts can be used to deliberately create the impression of false balance. In other words, they exposed the deceptive tactics used to sow distrust in the scientific consensus.
In this experiment, too, misinformation had a negative and polarizing effect on acceptance of AGW. However, in the group first inoculated with information about this deceptive tactic, the polarizing effects of misinformation were completely neutralized.
What does this mean?
I freely admit my bias here, because this study seems to support what I have been saying for more than two decades, and essentially my entire approach to science communication. At least when it comes to controversial topics, you can’t just give people information. You have to teach them metacognition – how to think about how they think about things. You need to teach them about cognitive biases, mechanisms of deception, logical fallacies, and the deceptive tactics of pseudoscience.
That is the niche we fill. There are plenty of outlets out there providing medical information. Mainstream medical academic groups do a fine job of analyzing, synthesizing, and communicating medical science about non-controversial topics. They do a generally horrible job, however, when it comes to pseudoscience, fraud, and misinformation.
This is because pseudoscience is a specialty unto itself – it is an expertise and knowledge base we possess that mainstream outlets lack. We specifically analyze the logic and tactics of those promoting pseudoscience in medicine and try to “inoculate” our readers by explaining how the tactics work and why their logic is flawed.
For example, it is not enough to hear the claim that the evidence shows acupuncture and homeopathy don’t work. It is more important to understand that proponents of acupuncture fudge their definitions to make negative studies seem positive, or muddy the waters with mixed variables, such as “electroacupuncture.” It further helps to understand how proponents of both try to use pragmatic studies (not blinded and placebo controlled) to make efficacy claims, and why this is not valid (all of which were discussed by Dr. Gorski on Monday). We need to expose the strategy of presenting placebo effects as if they are real effects that support a specific intervention (they don’t).
We have done our job if our readers learn how to see a study and figure out for themselves why it doesn’t support the claims the authors make for it.
It’s nice to have some objective evidence that this strategy can actually work.