It’s always preferable to inform an opinion with objective empirical evidence rather than subjective impressions alone. Confirmation bias will make it seem as if the facts support your opinion, even when they don’t. Of course, when objective evidence (such as published studies) does seem to support your position, you still have to keep your critical shields up. Confirmation bias can still kick in, leading you to cherry-pick favorable evidence, find fault with studies whose conclusions you don’t like, and too easily accept those that confirm your position.
I therefore had to be careful in evaluating the following study from the BMJ, because it nicely confirms what I and many others here at SBM have been saying for years – recommendations made by TV doctors, particularly Dr. Oz, are unreliable and insufficiently based on evidence.
This was a prospective study that:
…randomly selected 40 episodes of each of The Dr Oz Show and The Doctors from early 2013 and identified and evaluated all recommendations made on each program.
They had a team of experts review the scientific research to see if there was any support for the recommendations. They found that, for The Dr. Oz Show, there was some evidence to support the advice 46% of the time, the evidence contradicted the advice 15% of the time, and there was no evidence either way 39% of the time. The Doctors fared a little better, with 63%, 14%, and 24% respectively.
This is actually worse than I suspected. My previous impression was that Dr. Oz gave out mostly standard medical advice, peppered with pseudoscientific or overhyped nonsense. I guess I have not been watching carefully recently, and his descent into snake oil has been steeper than I feared. Less than half of his recommendations were based on any evidence.
Keep in mind also how low a standard this is. Even a single case report, the lowest form of scientific medical evidence, counted as “some evidence.” This means that the percentage of his recommendations that reach a reasonable science-based threshold is likely much lower. The Doctors fared only a little better, at 63%. When the authors applied a slightly higher threshold of “believable or somewhat believable evidence,” The Dr. Oz Show’s advice fell to 33% and The Doctors’ to 53%.
Also keep in mind that for both, about one in seven recommendations was contradicted by the evidence.
Further, “The magnitude of benefit was described for 17% of the recommendations on The Dr Oz Show and 11% on The Doctors.” This means that even when there was some evidence to support a benefit, that benefit might be minuscule or clinically insignificant, and the shows rarely say. The percentage of recommendations with reasonable evidence for a clinically significant effect is therefore lower still.
This fits one main criticism of Dr. Oz: that he promotes new supplements as “miraculous” when they have either no effect or, at best, a small one.
Using the stricter “believable evidence” threshold, you are twice as likely to get unsupported advice as supported advice from The Dr. Oz Show, and with The Doctors, it’s a coin flip. Using SBM criteria, the recommendations given on either show are overwhelmingly likely to be misleading, overhyped, or simply wrong.
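As a quick sanity check, the ratios implied by the study’s numbers can be worked out directly. The sketch below simply plugs in the support percentages quoted above and computes the ratio of unsupported to supported recommendations at each evidence threshold:

```python
# Evidence-support rates from the BMJ study, as percentages of
# recommendations: "any" = any supporting evidence at all;
# "believable" = believable or somewhat believable evidence.
support = {
    "The Dr Oz Show": {"any": 46, "believable": 33},
    "The Doctors": {"any": 63, "believable": 53},
}

for show, rates in support.items():
    for threshold, good in rates.items():
        bad = 100 - good  # recommendations lacking that level of support
        print(f"{show} ({threshold} evidence): "
              f"{bad / good:.1f} unsupported per supported recommendation")
```

Under the “believable” threshold, the ratio for The Dr. Oz Show works out to about 2:1 unsupported to supported, while for The Doctors it is roughly 0.9:1 — essentially a coin flip.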
What you are apparently not getting from either of these doctor-hosted programs is an expert synthesis of the latest available scientific evidence, one that puts basic science and clinical science into reasonable perspective in order to make best-practice recommendations. That is the standard of care within medicine, which is supposed to be a scientific profession.
It’s difficult to argue with the authors’ conclusions:
The public should be skeptical about recommendations made on medical talk shows.