Engaging on social media about health issues can be daunting. I know this is not news, but it is important to understand what is happening phenomenologically. I also think it’s a microcosm of what’s happening to our society in general because of social media – we no longer have a shared fact-based reality as people increasingly live in different universes of information.
There is, of course, also good information on social media, from academic, expert, and professional sources, for example, but that is not the center of gravity. I think one way to look at the effect of social media (an extension of mass media) is that it has amplified narrative-based thinking at the expense of fact-based thinking.
Here are two examples that reflect different aspects of this phenomenon. First, here is a TikTok video where the creator claims that cancer is not bad and doesn’t kill people, it’s just a reaction of your body to try to isolate toxins, so don’t interfere with it and let it do its thing. What kills people is cancer treatment (chemo, radiation, and surgery), not the cancer itself. This narrative will be familiar to regular readers here – it is a standard alternative medicine bit of nonsense.
This position is presented as if it is simply wisdom and truth, the creator’s “opinion” supported by nothing but naked assertion. Usually, they will throw in some anecdotes, but you will never get an analysis of actual scientific information. All of this is wrapped in a giant conspiracy theory, which is just assumed to be true, and is part of the narrative. “They” make money treating cancer, they are corrupt, and everything they say is a lie.
Another frequent part of the narrative is that “they” don’t look for root causes of disease. Of course, the presenter just knows what the true causes are, always something lifestyle-based (not that lifestyle isn’t one of many contributing factors). This blames the patient for their disease, puts the burden entirely on them, and then advises them to go against all evidence. The comments then fill with people telling their own anecdotes, occasionally with conflicting but equally nonsensical narratives (wait, I thought all cancer was caused by parasites).
The entire format of social media lends itself to this narrative-based approach to reality. Information is often shared as short, punchy videos, optimized for their emotional impact. The only real feedback loop is what drives clicks and engagement. The comments are no place for substantive debate, especially on platforms that allow very limited comment length. At best you get into dueling citations. Moderating the comments is a bit of a catch-22 – if you lightly moderate the comments, the trolls take over, and if you heavily moderate then the comments become an echo chamber of one perspective. Either way, we again lose the shared reality of a common source of reliable information.
A second example shows how far down the rabbit hole people can go. Anti-vaccine commenters have “infiltrated” (the terminology we use almost unavoidably reflects the echo chamber phenomenon) an unrelated post on my other blog. I will sometimes engage to see if the person has any ability to look at information objectively, but mostly for the benefit of onlookers.
This commenter takes a slightly different approach than the “cancer is just fine” presenter. Of course, they are steeped in the same conspiracy narrative – they immediately begin with smug pronouncements that of course anyone who disagrees with them is not looking at the evidence, is spreading propaganda, and/or is a shill. The difference is, they do think the evidence is on their side, and they have the citations to back it up.
The problem is their process. They are engaged in pseudoscience more than anti-science – they somehow became convinced of the anti-vaccine narrative, and then confirmation bias kicks in to an epic degree. They seek out information that supports their narrative, and summarily reject information that contradicts it. Then they accuse anyone who disagrees with them of doing just that. (I believe David has dubbed this the “I know you are but what am I” gambit.)
Therefore, the problem is not that they are not evidence-based, it’s that they are systematically defending their position with the worst evidence available. They also focus heavily on the biases in the system, while being completely unaware that there is an anti-vaccine movement (they deny they are anti-vaccine – they are pro health and safety) curating and even creating low-grade evidence to support their narrative.
There is no simple way to review all the relevant evidence. Just search here for the hundreds and hundreds of articles on vaccines and anti-vaccine propaganda. But let me give you one example to show you how bad it gets. They repeated the false claim that vaccines cause SIDS. They used AI to find a few terrible studies, then cited them, the first one being:
“1. Vaccinated vs. Unvaccinated: A study published in the journal “Frontiers in Pediatrics” found that unvaccinated children had a significantly lower risk of SIDS compared to those who received vaccinations. This study, titled “Infant Mortality Rates Revisited – Is There a Role for Vaccines?” suggests that the risk of SIDS is 5-10 times higher in infants who receive vaccinations.”
The problem with this study is that it appears not to exist. I searched for the exact title in Google, PubMed, and the Frontiers in Pediatrics journal with nothing found. This is not unusual, as LLMs are known to hallucinate and even fabricate journal references out of whole cloth (remember, the first MAHA report also contained citations to non-existent studies). So if you use an LLM to help with your searches and research, you have to confirm whatever information it produces. It is also clear that LLMs have a tendency to back up whatever narrative the user is operating under.
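This kind of verification can even be partly automated. As a minimal sketch: NCBI exposes a public E-utilities search endpoint for PubMed, and a few lines of Python can check whether an exact article title is actually indexed there (the function names here are mine, and a “not found” result is only a red flag, not proof – a real paper may be indexed under a slightly different title, so always follow up by hand):

```python
import json
import urllib.parse
import urllib.request

# NCBI's public E-utilities search endpoint for PubMed.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query_url(title: str) -> str:
    """Build an esearch URL restricted to the article Title field."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f'"{title}"[Title]',  # exact-phrase search on the title field
        "retmode": "json",
    })
    return f"{EUTILS}?{params}"

def pubmed_title_exists(title: str) -> bool:
    """Return True if PubMed indexes at least one article with this title."""
    with urllib.request.urlopen(build_query_url(title), timeout=10) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"]) > 0
```

A citation whose title returns zero hits from a check like this deserves immediate suspicion, which is exactly what a manual search showed for the “study” quoted above.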
When several commenters pointed this out and challenged him to provide a link, the response was amazing:
“As far as the link goes for the 58% of SIDS deaths happening within 48 hours after vaccination, AI has the information available to it, however the site is not available. You knew this, which is why you asked for the link. So you can say that I am a conspiracy nut. Do you have any idea why AI has access to the information on this site and the general public does not? “
So – AI has secret access to information that is otherwise not obtainable on the internet, because of a conspiracy to hide that information? First off, no. LLMs are not trained on double-secret information otherwise not accessible online. And no, there is no conspiracy to suppress or hide information (which contradicts his narrative that he can find this information online).
This may, unfortunately, be reflecting a dangerous trend – just trusting LLMs as if they are some unassailable oracles. This might set off a new round of information wars, like what happened with Wikipedia and also fact-check sites. Any important source of apparently objective information becomes a war zone for those who want to control information and promote their narrative. Good-faith efforts at quality control are then portrayed as just another competing narrative.
This is what is ultimately lost – the ability to objectively vet and analyze information to arrive at a reliable conclusion. That is mostly what we discuss here at SBM, and what SBM is – a process for evaluating information. Without some kind of quality-based objective process, we just have dueling narratives. And the narratives which drive engagement and are psychologically appealing will tend to win out – not facts, logic, and reason. I would argue that this captures where we are generally as a society. We need to find a way out, or far worse awaits us.
Health Narratives on Social Media