Last week, I noted the publication of an article in BMJ Open, the BMJ's open-access journal, by John Ioannidis, Alangoya Tezel, and Reshma Jagsi that caught my interest. Titled "Overall and COVID-19-specific citation impact of highly visible COVID-19 media experts: bibliometric analysis", the paper, boiled down to its essence, examines the citation impact in the scientific literature of "highly visible COVID-19 media experts" in the US, Denmark, Greece, and Switzerland and concludes that most were not highly cited overall and few had published much on COVID-19 in particular. It's a terrible analysis for the simple reason that its premise is flawed to the point where the results are, in essence, meaningless, as I will explain. However, I did see this article as a good launching point, a "teachable moment" if you will, to discuss science communication in the age of the deadliest global pandemic in over a century. Ioannidis was once one of my scientific heroes, but his conduct since COVID-19 hit has disabused me of any previous hero worship, likely forever, although, truth be told, I had intermittently been unimpressed with his takes dating back years. In any event, this paper, published in late October but seemingly only finding an audience on social media last week (which is how I became aware of it), illustrates a problem faced by all of us who try to communicate science and medicine to the public.
The Carl Sagan Effect
To understand the flaw in the premise behind Ioannidis' paper, I have to look back at a bit of history. There has long been a disconnect between those who actually do science and those who try to communicate science and medicine to the general public. The prototypical example cited by those of us who do science communication, either as a sideline (as I do) or professionally, is Carl Sagan. There is even a term for it: the Sagan effect, which is basically the perception that popular, famous scientists who engage in public discourse on TV and in the media—Sagan died too young, just as the Internet was rising in the mid-1990s and long before social media consumed our information landscape—are not as good at science as those scientists who do not engage in science communication with the public. One example of the "Sagan effect" that is frequently discussed is the failure of his nomination to the National Academy of Sciences (NAS) in 1991, despite his being backed by prominent Nobel laureates and despite analyses showing that his academic output compared favorably with that of some of the most productive contemporaneous NAS members.
Basically, historically, the vast majority of scientists have looked down their noses at scientists who try to engage with the public in popular media. I can cite my own personal experience over a blogging "career" that is approaching two decades. Many are the times over the years that I've been subjected to sarcastic little digs—and worse!—from scientists questioning why I bother. Indeed, when I first started blogging, I used a pseudonym. True, on my not-so-super-secret other blog, I still use that pseudonym (mainly because I like it), but my real name is right there in the "About" section. The reason back then was that I was worried about how my newfound hobby would be perceived by my bosses and fellow surgeon-scientists. When Steve Novella asked me to join this blog as one of the founding members after I had been blogging on my own for nearly three years, I jumped at the chance because (1) it was Steve Novella asking and (2) I had lost my fear of writing under my real name. True, I kept my other blog going under the pseudonym, but my real name is so easy to find—it's on the blog!—that I sometimes use it as an "intelligence test" for trolls who try to dismiss me as just an "anonymous blogger". If they can't find my real name in the "About" section of my blog, they're either dishonest or stupid—or both.
It's not just scientists, either. You might recall that a few months before the pandemic, a rising star in medical oncology, Dr. Vinay Prasad, launched a broadside against physician-skeptics who analyzed and debunked pseudoscience, quackery, and bad science for public consumption, and then doubled down on it almost exactly a year ago today. His contempt for the exercise of science communication in this sphere was palpable. Basically, his idea was that if we weren't doing what he was doing (clinical trial analysis), we were wasting our intellects. I won't belabor his criticisms and why they were so off base, as Steve and I both responded. I will also point out that much of Dr. Prasad's criticism of science communication has been topic-specific. Ironically, he himself has built up a rather prolific public communication infrastructure of podcasts, social media, and legacy media appearances. Unfortunately, he has also devolved into spreading misinformation and accusing supporters of mask and vaccine mandates of starting us down the slippery slope to Nazi-style fascism.
It is undoubtedly true that this dismissive attitude towards science communication has been changing over the 17(!) years that I have been blogging regularly, even more so in the last decade or so since the rise of social media platforms like Facebook, Twitter, Instagram, and, more recently, TikTok. Lots of scientists, particularly younger scientists, are engaging in science communication on social media, sometimes quite entertainingly, and the stigma against such activity—and, make no mistake, it has been a stigma—has been fading. However, Ioannidis demonstrates that it is not yet gone. I also can't help but wonder if this entire exercise is a response to some of the harsh criticism he's received for his numerous bad takes on the COVID-19 pandemic.
The Carl Sagan Effect, as a study by John Ioannidis
Ioannidis’s study is an example of the Carl Sagan effect manifested in the unwritten but obvious assumptions behind the study. In brief, Ioannidis seems to assume that if you are communicating science to the public through the media, you are not likely to be as good a scientist, with his study being, in essence, a plea for more “high-powered” scientists with lots of relevant publications and citations on COVID-19 to make themselves available to the media and for the media to recruit more such scientists to comment on COVID-19. There are…a lot of flaws. Indeed, Ioannidis’ bibliometrics study is pretty bad and seems to have an agenda behind it that readers might not be aware of if they are unaware of Ioannidis’ history since the pandemic started. Let’s circle back and see what I mean.
The COVID-19 pandemic has been accompanied by an unprecedented infodemic in the news and social media.1 2 Media coverage has been intensive, continuous, massive and heated and has involved a very large number of alleged experts. The involvement of knowledgeable scholars in the public discussion and dissemination of information on such a monumental crisis is clearly welcome and indispensable. However, how knowledgeable are the experts recruited by media?
Knowledge and expertise are difficult to appraise with full objectivity. Weinstein3 argued that there are two kinds of experts, those who are recognised as experts based on what they know (epistemic expertise) and those who are worthy of being called experts based on what they do (performative expertise). According to this classification, an epistemic expert is a person who is capable of providing strong justification for a range of claims in a domain, while performative expertise characterises a person who is able to perform a skill well according to the rules and virtues of a practice.3 Performative experts may not necessarily be contributors to the scientific literature themselves, but may still know their job well and have extensive practical experience. It is very difficult, however, to appraise in a standardised manner and with consistency and quantitative metrics such performative expertise. Conversely, epistemic experts are likely to be contributors to the scientific literature and their level of contribution and impact in the science of their field is a key hallmark of their expertise. What can be readily appraised in a non-subjective fashion is the publication and citation track record of scientists who appear in news media as experts. One can use objective data to quantify the citation impact of the published work of these scientists across science throughout their career, as well as the specific impact that they are having with their scientific publications about COVID-19. While publications and citations do have limitations (as all bibliometric metrics), they are objective, readily quantifiable and offer useful information about scientific impact. Here we aimed to evaluate the overall and COVID-19-specific citation impact of the most highly visible COVID-19 experts in the USA, Denmark, Greece and Switzerland. 
We also paid particular attention to probing the representation of women among highly visible COVID-19 experts, as it has been previously suggested that women are under-represented among COVID-19 experts in the USA.4
I found it odd that Ioannidis starts out by citing someone named B. D. Weinstein. I briefly did a double-take, thinking that he might have cited Bret Weinstein, who, as you likely know, is an evolutionary biologist who has become one of the most prolific promoters of COVID-19 misinformation, particularly in his credulous cheerleading for the repurposed anthelmintic drug ivermectin to treat COVID-19, a drug that almost certainly does not work and whose medical literature is hopelessly contaminated with fraudulent studies. Then I figured out that this was Bruce Weinstein. (I realize I digress, but given the mental image, I couldn't help myself.) My weird brain trick about the name Weinstein aside, Ioannidis seems to be grudgingly admitting that it's possible to be a "performative expert" with experience extensive enough to be a reliable source of information on a topic without having contributed to the scientific literature, but you can tell from what comes next that he's not enthusiastically endorsing such expertise. So what does he do next? He quickly pivots to his analysis, which in essence tries to tear down media "COVID-19 experts" by showing that most of them aren't what he would consider high-powered contributors to the scientific literature on COVID-19. Reading this introduction, I can't help but wonder whether, in Ioannidis' construct, critical care and emergency room doctors on the actual front lines aren't "expert" enough for the media to contact.
Much of their work is also never published, or if it is it's not in an academic format, so not only are they not in the media but they don't appear in the scientific literature either. And yet, they're the people on the ground who have done most of the actual work on COVID-19
— Health Nerd (@GidMK) December 8, 2021
I will also point out that a handful of such doctors have been among the most prolific promoters of COVID-19 disinformation, but I also note that, by Ioannidis’ metric, an utter crank like Peter McCullough would seem very credible just by the number of COVID-19 publications he boasts of.
Another huge problem rapidly rears its ugly head in the methods section. One question I immediately asked when I first read the abstract was: Why did Ioannidis choose these particular countries besides the US? I can understand choosing the US if you're based in the US (as Ioannidis is) and because it's the biggest producer of scientific studies and media. I can understand choosing Greece, given Ioannidis' Greek ancestry (although one wonders if Ioannidis chose Greece because he has served as a media expert appearing in interviews there). But why Denmark and Switzerland? Why not the UK, France, China, Japan, Australia, or other such countries? The whole methodology seems…odd, particularly the lack of any definition of "visible" as applied to media scientists and the acceptance of seemingly any old definition from the sources tapped:
We examined bibliometric indicators of top media experts in the USA, Switzerland, Greece and Denmark. These are countries for which we could identify pre-existing lists of experts who had prominent visibility in media. These lists are typically not published in the peer-reviewed scientific literature (with the exception of the US list that was previously generated and published by members of our team),4 but in media news items in different countries, thus defying the possibility for efficient systematic searches. We therefore asked our colleagues at the Meta-Research Innovation Center at Stanford (and affiliated colleagues who come from different countries) if they were aware of any such publicised lists. We accepted these lists regardless of how visibility had been defined in these surveys.
For the USA, we examined the scientific citation impact of all scientists who had appeared between 18 May and 19 June 2020 during prime-time programming on three popular American cable news networks: Fox News Network, CNN and MSNBC. Details on the data collection and selection for the US list and features of the sample have been previously described.4 Of the 220 people who appeared during these programmes, 76 were scientists (47 physicians and 29 PhDs).
For European countries, searches for visible experts were made by local organisations in each country and they pertain to national media visibility. For Denmark, we found a news article that listed the 50 experts who had the highest number of appearances in media during 2020 (television, radio, newspapers).5 For Greece, we found a news article that listed the 12 COVID-19 experts who had the highest television exposure based on measured time of television appearances.6 For Switzerland, the Swiss Media Database (SMD) captures appearances in media in Switzerland. We could find information from a news article7 on the two most commonly appearing names of COVID-19 experts in SMD between mid-January and June 2020.
Also odd are the very specific time periods examined, all early in the pandemic. For instance, for the US, why did Ioannidis choose to look at a one-month period early in the pandemic? Also odd was the choice of the “top 2%” cutoff when examining whether scientists were at the top of their fields, but then I realized that Ioannidis was just mining a database he had already created. I’m with Carl Bergstrom here, when he noted:
How long would it take to look up citation data on 140 researchers in Scopus? In any case, the lead author has the full citation data; he is also the lead author on paper listing the top 2% of researchers that they draw from, and in that paper asserts access to the full dataset.
— Carl T. Bergstrom (@CT_Bergstrom) December 8, 2021
It is indeed a stunningly lazy exercise in bibliometrics, given the small sample size and the self-referential repurposing of a database that Ioannidis' group created for a different reason before the pandemic. Before I realized that, I kept asking as I read: Why not the top 5%? Or the top 10%? Or even the top 25%? Ioannidis' handwaving justification in the discussion just betrays the laziness that Dr. Bergstrom noted:
First, we should acknowledge that citation metrics are far from being perfect measures of epistemic expertise. Moreover, we focused on using already existing data on the 2% top-cited scientists across each scientific discipline, and we could not examine whether scientists who were not in the top 2% of these pre-existing lists might be in the top 3% or in the bottom 5% of citation impact. Obviously, many scientists may still have considerable epistemic expertise even if they are not strictly in the top 2% of citation indicators.
Could not? No, Ioannidis chose not to go beyond his database, because looking at a larger sample would have required additional work beyond just mining an existing database created by his own research group. Lazy indeed.
There's another issue beyond the misguided assumption that being in the top 2% of researchers, as measured by publications in the peer-reviewed literature and citations of those publications, somehow correlates with effective science communication, knowledge of public policy, and practical experience. Ioannidis' analysis is very, very simplistic with respect to COVID-19 expertise, breaking publication records down into "COVID-19-related" and everything else. However, a molecular virologist working on the mechanisms of SARS-CoV-2 infection would not be a good choice to comment on mask or vaccine mandates, just as a public health expert would not be a good choice to comment on the finer points of the immune response to the disease or to explain, for instance, why the spike protein made by mRNA-based COVID-19 vaccines is not "deadly" and does not shed, as antivaxxers frequently claim. The list goes on.
Moving on, let’s look at Ioannidis’ findings:
We assessed 76 COVID-19 experts who were highly visible in US prime-time cable news, and 50, 12 and 2 highly visible experts in media in Denmark, Greece and Switzerland, respectively. Of those, 23/76, 10/50, 2/12 and 0/2 were among the top 2% of overall citation impact among scientists in the same discipline worldwide. Moreover, 37/76, 15/50, 7/12 and 2/2 had published anything on COVID-19 that was indexed in Scopus as of 30 August 2021. Only 18/76, 6/50, 2/12 and 0/2 of the highly visible COVID-19 media experts were women. 55 scientists in the USA, 5 in Denmark, 64 in Greece and 56 in Switzerland had a higher citation impact for their COVID-19 work than any of the evaluated highly visible media COVID-19 experts in the respective country; 10/55, 2/5, 22/64 and 14/56 of them were women.
I will, right here and right now, point out the one useful observation in Ioannidis' paper, namely the paucity of scientists who are women among those prominent in the media. That's the single useful piece of information he has provided, and he's right that the media and scientists need to do something to diversify the pool of experts. The rest? Not so much. He is also correct to note that racial minorities are likely to have been underrepresented, although I have a hard time believing his claim that he "could not assess the racial background of media experts". Seriously? Has Ioannidis never heard of Google? Many of these experts likely have Wikipedia entries, and, if they don't, by his very design we know that they have appeared in media reports on multiple occasions. He could have had his minions look up those media reports and see whether they could glean the racial background of these scientists. Certainly, it might not have been possible in some cases, but in most it likely would have been. Bergstrom is correct. This analysis is lazy.
Here’s the funny part. Ioannidis seems to recognize the problem at the heart of his analysis:
The best or the most cited scientists should not necessarily be the ones who appear the most frequently in media. Some of the experts whom we analysed have accumulated a track record of massive media engagements that require an enormous commitment of time and psyche. Many highly competitive, excellent scientists would find it difficult or even impossible to pursue their scholarly work and have an intense media presence at the same time. Moreover, especially for COVID-19, polarisation, politics and an environment of conspiracy, mistrust, and public unrest and rage may have disincentivised many leading scientists from engaging with media. Women and minorities may feel even more disincentivised in this environment.
Nevertheless, communication with the wider public is an important mission of science, medicine and public health. Information on COVID-19 in media has been shown to be of questionable quality.1 2 Its quantity is clearly immense. The vast majority is produced and disseminated by people without any scientific training and with little or no self-reflection on their inadequacy to judge complex and rapidly evolving scientific concepts. It may be impossible to diminish the bulk of information, but at a minimum its quality should be improved. Engaging qualified experts may be critical in this regard.
While Ioannidis is correct to note that political polarization, conspiracy theories, and harsh rhetoric have likely disincentivized some scientists from engaging with the media, and that such engagement can take a massive commitment of time, effort, and psyche that detracts from the time they can spend doing science (just ask Dr. Peter Hotez, if you don't believe me), I couldn't help but wonder: Why, then, is he seemingly arguing that more scientists with high citation metrics should engage with the media and implying that the situation now is so bad because so many of the scientists who do media appearances aren't in the top 2% as measured by those metrics? It makes no sense. I would also argue that some of what he found was actually quite reassuring. 23/76 media experts in the US during that one-month time period were in the top 2% of their fields? That's over 30%, roughly fifteen times what you'd expect if the media were picking scientists at random, and I'd characterize it as awesome! Ten out of 50 in Denmark? That's pretty darned impressive, too! Two of twelve in Greece? I'd say that's fairly impressive as well. In all of those countries, a highly disproportionate percentage of scientists engaging with the media were, by Ioannidis' metrics, in the top 2% of their fields!
The assumption at the heart of this paper, namely that a high rank in terms of scientific publications and citation scores should correlate with more effective public communication is entirely off-base, as is the very simplistic breakdown of expertise into “COVID-19” compared to everything else. It also blew up multiple irony meters to see Ioannidis urge science communicators to engage “qualified experts” to improve the quality of science communication, when he seems not to have done so on so many occasions.
I hesitated to suggest this, but upon reflection decided that I had to. There appears to be another subtext here, one that is suggested by Ioannidis’ competing interest statement at the end:
JPI has given some COVID-19 media interviews (a hundredfold less compared with some of the listed experts) that have resulted in smearing, hate emails, threats, censoring, hostile behaviour, harassment and a life-threatening experience for a family member. He is among the top-cited scientists in both the overall and COVID-19-specific citation databases used in the presented analyses. In his Stanford web page, he admits that despite being ‘among the 10 scientists worldwide who are currently the most commonly cited; when contrasted against my vast ignorance, these values offer excellent proof that citation metrics can be horribly unreliable’. RJ is also listed among the top-cited scientists in the overall citation database.
Let me just preface my comments here by saying quite unequivocally that, having experienced harassment, death threats, and the like myself, I will always say that such behavior is not acceptable, no matter how much I might disagree with the person experiencing it, period. That being said, I have to address the part of his statement not relating to threats of violence, noting how Ioannidis has strategically placed it in a list that includes hate email, threats, harassment, and the like. No one—and I mean no one—is "censoring" Ioannidis. No one. Harsh criticism of bad takes is not "censorship". After all, he is one of the most famous and most widely published and cited scientists in the world (if not the most), and media outlets are likely clamoring for him to give interviews, even as Scientific American published a defense of him against the "overreaction" to his work and predictions. Early in the pandemic, he had the ear of the Trump administration and lobbied to get it to listen to his ideas on the pandemic:
Flipside: if you attempt to meet with the US President to steer policy based on your recklessly incorrect conclusions, harsh criticism is not "smearing". Similarly, if a private company chooses not to propagate obvious misinformation, that is not censorship.
— Carl T. Bergstrom (@CT_Bergstrom) December 8, 2021
His self-deprecating self-characterization aside, notice how Ioannidis makes damned sure that you know that he is in the top 2% and included in the analysis. In context, that self-deprecation comes across as a front for an assertion of his authority on COVID-19.
A hidden agenda?
Dr. Ioannidis has clearly been very much stung by the harsh criticism of his statements made to the media during the pandemic, and, make no mistake, he has laid down some doozies, beginning very early in the pandemic with his infamous lowball estimate for deaths from COVID-19 that he said could be “buried within the noise of the estimate of deaths from ‘influenza-like illness'” or how “at most we might have casually noted that flu this season seems to be a bit worse than average”.
As we here at SBM have said time and time again, there's no shame in making such an off-base estimate if you acknowledge your error and correct it as new information comes in. Ioannidis has been remarkably resistant to doing that. Worse, in an article in a peer-reviewed journal for which he had recently served as editor, Ioannidis lashed out at a graduate student who had had the temerity to publish results that conflicted with his and to criticize him on social media. It was the most egregious example of "punching down" that I had seen in a long time. Then there was Ioannidis' repeated credulous repetition of conspiracy theories, such as the one, based on a misinterpretation of how death certificates are filled out, claiming that people die "with COVID-19" far more often than they die "of COVID-19":
I mean, even the Indian paper that is cited to support this point argues that the main issue is MISSING COVID-19 deaths, not improperly recording more deaths than actually occurred pic.twitter.com/vhY9CAxIrA
— Health Nerd (@GidMK) December 8, 2021
Worse, he repeated as fact a claim from early in the pandemic that doctors were killing people by being too fast to intubate them. It was, in essence, a conspiracy theory. Let’s just put it this way. When a scientist as famous as John Ioannidis starts repeating misinformation like this, people are going to believe it, and he’s going to face harsh criticism that is much deserved (and not “censorship”).
The Sagan Effect still exists
In the quarter century since Carl Sagan's untimely passing, a great deal of progress has been made toward erasing the stigma against scientists who engage with the public to communicate science and its findings. However, the unspoken assumption behind Ioannidis' publication tells us that there's still a long way to go. The problem appears to be an attitude that science communication is something so easy and trivial that anyone can do it.
Also there is no measure of the "claim to expertise" – just being willing to communicate via the media is not a claim of expertise.
— Matt Nurse (@matt_nurse) December 8, 2021
This is the assumption in the paper that grated me the most. Communication isn’t a throw-away thing anyone can just “do” when they feel like it. It’s a discipline & set of skills unto itself. Most can learn the skills, but they aren’t trivial.
— Tara Haelle (@tarahaelle) December 8, 2021
I can attest that the skills needed for science communication aren't trivial. I didn't do a formal training program and picked the skills up as I went along, and I'm always aware of that and of my shortcomings. (Sometimes I cringe when I read some of my old writing from early in my blogging "career".) There's a catchphrase from the second Dirty Harry movie, Magnum Force, in which Dirty Harry observes, "A man's got to know his limitations." I know mine. I'm very good at blogging and Twitter, pretty decent at public speaking and podcast/radio appearances, and not so great at TV or anything involving video. (I really need to work to get better at that.) What I'm saying is that science communication is a set of skills that most scientists can learn but that is not trivial to master, and that there are subsets of science communication for different media. I'd have to say that Ioannidis either hasn't heard, has ignored, or doesn't care about that saying about knowing one's limitations.
I haven’t even gotten into the other mistaken assumptions behind Ioannidis’ analysis:
There are other problematic assumptions here. 1) being a great scientist doesn’t make one a great science communicator. 2) citation impact is a problematic measure of expertise. 3) bandwidth is finite: spend it on the science or on cable TV? The former is a reasonable choice. Etc
— Tara Haelle (@tarahaelle) December 8, 2021
In the end, as much as those of us who devote time to science communication would like to think otherwise, the Sagan effect still exists. True, it’s not nearly as harsh as it was 30 years ago, but it’s still there, and papers like this one by Ioannidis are evidence of that. The ironic and depressing thing is that the “antiscience” communicators, such as Dr. Mehmet Oz, COVID-19 contrarians, and antivaxxers like Del Bigtree, are often very effective communicators, and we really need more skilled science communicators if we’re to have a chance of countering their disinformation. Once again, Ioannidis isn’t helping.
Worse, there is an ironic aspect to this whole affair:
he is now contributing to the problem he identified….full circle.
— R Marcucio (@mcfunny) December 8, 2021
Sadly, Ioannidis has indeed come full circle. He first became famous for his 2005 paper arguing that most published research findings are false, and now he's contributing to the very problem that he identified.