I started this blog not only to create a useful resource but to have a conversation. As our tagline suggests, we want to explore (not dictate) the relationship between science and the practice of medicine. As it turns out, that relationship is very complicated. That conversation takes place in the comments, but also by responding to other outlets who are, in turn, responding to us.

No doubt, we are promoting a specific approach to medicine that we call science-based. Briefly, we feel that medical interventions that are safe and effective are inherently superior to interventions that are unsafe or ineffective (hopefully a non-controversial premise). We also feel, as the name implies, that the best way to determine which interventions are safe and effective is by looking critically at all the available science. This includes examining the processes and institutions of medical science themselves, how information is reported and consumed, the role of various types of bias, the different types of evidence, and thinking carefully about where we should place the threshold of evidence when informing clinical practice and health consumer decisions.

Further, we feel strongly that there should be one thoughtful, science-based standard that applies uniformly to all of health care.

We spend a considerable amount of time pushing back against those who explicitly disagree with this approach, and who are promoting other approaches to medicine and health. Many of these non-SBM approaches have been marketed under the umbrella brand of “alternative,” “complementary,” or “integrative” medicine.

We know and expect that doing what we do will attract critics, and we are happy to have them. I tend to respond directly to critics when either I think they have a point, or I think they are misrepresenting our position and I want to use a response as an opportunity to set the record straight. Sometimes the best way to explain a concept is to respond directly to someone who has a misconception about that concept.

Responding to SelfHacked

In that spirit, I want to address some of the points made in a blog post on the website SelfHacked. The post is from a few years ago (although we just noticed it), but it is still attracting comments. I also checked, and there do not appear to be any updated posts on the site about SBM, so this still stands as their latest opinion. Because the post is a few years old, some of their criticisms may already have been addressed by subsequent posts on SBM; to be fair, I cannot hold them to account for articles we wrote after their post.

Here is how the editors of SelfHacked describe their philosophy:

At Selfhacked, we plan on dissecting every facet of human biology and looking at potential remedies or ways to reverse our biological weaknesses – a first principle’s [sic] approach to health and medicine. This will all be in a format that will be accessible to everyone.
People need to take control of their own health and not rely on anyone. And the best weapon for that is solid information, combined with a spirit of self-experimentation.

With all of that in mind, let’s take a look at some of their specific criticisms. They do not get off to a good start:

With regard to health, true skepticism means to be doubtful in a belief that some treatment works, but at the same time also be doubtful in forming opinions that something doesn’t work.
This means you shouldn’t form beliefs one way or the other. Your attitude should be one of not knowing if you want to be truly skeptical.
If you read SBM posts, you will see they are skeptical of ‘alternative’ remedies, but they believe that these remedies don’t work.
If you ask them about any remedy that isn’t proven by science, they will tell you the chance is close to zero that it works. They believe the people who feel an effect from them are just experiencing the placebo effect.

This is both a straw man and a false premise. Being skeptical does not mean you do not form opinions. That definition is closer to the philosophical skeptics who argue against all knowledge, but that is not what modern skepticism promotes. We are scientific skeptics. That means you use science and reason to inform tentative conclusions about what is likely to be true.

Further, medicine is applied science. This means you have to make decisions. That is the whole point of SBM: to discuss how science informs medical decisions. You can’t make decisions if your position is endlessly one of not knowing. Rather, you should ask: what is the best decision we can make at the current time, considering all the scientific evidence in the context of risk vs. benefit and personal preference?

But of course people don’t like to be told that their preferred treatment probably doesn’t work, especially if they have tried it and feel that it does. Some go as far as claiming that we can never know that something does not work, which is convenient if you want to promote dubious treatments.

Now for the straw man – it is demonstrably not our position that anything not proven by science has a close to zero chance of working. We have never taken such a position explicitly, nor can you infer that position from what we write. Our stated position is that plausibility and clinical evidence are two distinct considerations. An untested treatment may be quite plausible, or it may be highly implausible, and we certainly treat those claims differently.

Any fair read of SBM articles will show that we are very careful not to confuse unproven with disproven, and lack of evidence for efficacy with evidence for lack of efficacy. Just yesterday Harriet wrote:

Vital Stem is a dietary supplement mixture that supposedly reverses the changes of normal aging by increasing the body’s production of stem cells. We can’t know if it works, because it hasn’t been tested.

In my experience this criticism comes from a careless reading of what we actually write, through a filter biased by a specific outlook. There is also often a specific agenda: to support the position that SBM has an unreasonably high standard of evidence, and that we therefore reject treatments that might actually work. More on this point below.

Next they make an odd point about plausibility. They argue that, sure, some things like homeopathy are incredibly implausible, but herbs and supplements are not.

All 300 or so supplements/interventions that I’ve tried had a noticeable effect (except grounding). The question isn’t if they have an effect, but what that effect is and if it benefits you more than it harms you.
I know supplements have an effect because I keep upping the dosage until I’m certain I feel an effect.

This is another straw man, implying we argue that herbs as a category are implausible. We have explicitly stated that herbs and supplements, because they are actual chemicals with potential pharmacological effects, are toward the plausible end of the spectrum when it comes to unproven therapies. Our criticisms, rather, are that herbs are drugs – they are dirty, unpurified, unregulated, and understudied drugs. Most fail due to poor bioavailability.

I agree with the authors that the important question is – what effects do specific herbs have, and can they be used in a way that the benefits outweigh the harms? I strongly disagree with their method for answering this question: self-experimentation (i.e. anecdotal evidence). I would not, by the way, just keep upping the dose of a substance until you feel something. You might not feel anything until after your kidneys shut down. This is, in fact, reckless and horrible advice.

The authors at SelfHacked take the explicit position that the best way to know if something works is to take it. They brush off criticisms about placebo effects and the difficulty of determining efficacy from subjective assessments and an N of 1. In typical fashion they partly push back against SBM’s criticism of this position by making another straw man out of our position. We do not say that the patient’s experience is unimportant. Of course, if a drug does not work for a specific patient we would not advocate using it. You have to individualize treatment. Anecdotal evidence also can be useful in forming hypotheses to be tested.

But you cannot determine efficacy from anecdotes. This has been clearly established over the last century of scientific medicine. Anecdotes were used to support radioactive tonics, worthless radionics machines, and countless other inert treatments that we now know, with a high degree of confidence, do not work. Anecdotes are much more likely to mislead you than inform you. This is clear, and you have to be in abject denial or ignorance of history to disagree.

More straw men coming (you see a pattern here):

Here’s how to know SBMs opinion before you read any article. If you find that even one of their thousands of articles don’t fit this paradigm, please post it.

  • If it’s natural, it’s not effective.
  • If a study has shown a natural product effective, it’s flawed.
  • If the study isn’t flawed, it’s clinically insignificant.
  • If the FDA claims something is safe, SBM will never disagree.
  • If a drug is FDA approved, it’s effective. SBM will never question its efficacy or ‘clinical significance’.

This is a clear ad hominem fallacy, or at least an attempt at poisoning the well. The implication is that our opinions are ideological and biased. This dismissive approach replaces a detailed analysis of our specific opinions, which we always back up with a detailed analysis of the basic and clinical science. If you can’t argue against the details of someone’s position, it is easier to brush it aside with a broad dismissal.

None of what is stated above is an actual position of ours, philosophically or in practice. If there is a kernel of truth to some of these characterizations, it is not for the reason implied. While SBM advocates for one standard across all of medicine, we do spend a disproportionate amount of our time dealing with the more fringe end of the spectrum. This is a deliberate editorial choice; it reflects our niche, not our philosophy, and we have stated this numerous times.

In addition to our clinical specialties, we all share a common specialty in critically analyzing pseudoscience. We don’t need to spend our time doing systematic reviews of mainstream treatments, because other experts are fully capable of doing so (and probably better equipped). Our efforts would be redundant there. This does not mean we don’t think it’s important. It is just not our niche.

Where EBM and mainstream medicine fall down is in putting pseudoscience into a thoughtful context, because they are not familiar with it. We are.

One common form of pseudoscience in today’s culture is to promote a treatment as “natural.” The explicit claim is that something which is natural is better and more likely to be safe and effective than something which is synthetic. First, this is a false dichotomy, and there is no operational definition of what is “natural.” Second, this is simply not true. Being derived from a natural source says nothing about safety or efficacy. Many natural substances are terrible poisons. Therefore, by necessity, we spend a lot of time pushing back against the appeal to nature fallacy.

This does not mean we argue a “natural” substance cannot be effective. That is simply absurd. We never said it or implied it. In fact we have used the argument that there are many substances derived from natural sources that are part of mainstream medicine, including many of our modern pharmaceuticals. We have directly endorsed pharmacognosy, which is the scientific study of natural substances.

The next two points are similarly silly. First, as we have explicitly stated many times, most studies are flawed, and one common flaw is that effect sizes are small and not clinically significant. SBM is about promoting a reasonably high standard of evidence, and it is very common for cherry-picked or fatally flawed studies to be used to promote treatments that are not adequately supported by evidence.

Regarding the FDA, they have a very high standard for safety. It is not perfect, but it is reasonable. This does not mean drugs are side-effect free or risk free, and some do slip through the cracks (as the Vioxx scandal shows). It is not our position that the FDA is always correct, just that their safety standards are sufficiently high. If the authors think we have taken the wrong position on a specific drug, then they should point it out. You may notice, by the way, that their article is extremely light on specific examples. That should serve as a big clue. In fact, they state that sometimes they are criticizing our apparent “attitude” rather than our stated position. This is a bit convenient, as it allows them to offer their subjective opinion without backing it up with specific examples.

Regarding efficacy, again, the FDA has a pretty high standard for drugs. But some things still slip through the cracks, and we will criticize this. For example, I argued that the Cefaly device for migraines was prematurely approved based on unconvincing evidence. We have had frank discussions of the efficacy of the flu vaccine, and of antidepressants for mild to moderate depression. This claim is simply at odds with the evidence.

Next comes their biggest whopper:

The type of evidence that SBM advocates is a random threshold of evidence. There’s no reason it can’t be lower.

This claim shows that they have not read SBM to any serious degree. We spend a lot of time exploring the exact question of where the threshold of evidence should be and why. To say this threshold is “random” is to ignore literally hundreds of articles we have written on the subject. In fact, I wrote an article called “Evidence Thresholds” in 2013, before their criticism.

There are multiple independent lines of evidence that we have specifically examined and used to support our position regarding evidence thresholds. We have discussed evidence from Ioannidis on the predictive value of positive studies (in fact, most are wrong), evidence from reproducibility attempts, from the Simmons et al. article on p-hacking, from research on medical reversals and on the predictive value of promising basic science, and from examining the history of individual treatments. We have examined the relative value of p-values vs. Bayesian analysis. We have deliberately built what we feel is a very strong case, based on a long and growing list of published studies, for where the threshold of evidence should be. To dismiss this as “random” is beyond laughable.
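
To make concrete why prior plausibility and inflated false-positive rates matter so much to that threshold, here is a minimal sketch of the Ioannidis-style arithmetic. This is my illustration, not code from any SBM article, and every number in it (the priors, alpha, and power) is an assumption chosen for demonstration:

```python
# Sketch: positive predictive value (PPV) of a single "statistically
# significant" study, in the spirit of Ioannidis (2005). Illustrative
# only; the priors, alpha, and power below are assumptions.

def ppv(prior: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a positive result reflects a true effect."""
    true_positives = power * prior           # real effects that get detected
    false_positives = alpha * (1 - prior)    # null effects that "reach significance"
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    honest = ppv(prior)                  # well-conducted study, alpha = 0.05
    hacked = ppv(prior, alpha=0.30)      # effective alpha inflated by p-hacking
    print(f"prior {prior:4.2f}: PPV {honest:.2f} (honest), {hacked:.2f} (p-hacked)")
```

The pattern is the point: for a highly implausible hypothesis (prior around 1%), a lone “significant” study is most likely a false positive, and p-hacking makes it far worse. That is why a single fixed p-value cannot serve as a universal evidence threshold.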

I see such criticism as part of a broader agenda – to specifically lower the standards of evidence in medicine to admit whatever it is that someone wants to promote that is not backed by adequate evidence.

There is a lot more nonsense, but here is one final point:

I really don’t like the term science-based medicine, because it paints an inaccurate picture.
Much of the scientific knowledge we have is based on animal studies, yet we call it science.
However, SBM is not referring to this type of science with regard to medicine. They are referring to large, replicated, double-blind, placebo-controlled trials, published in the top 3 journals.

This is actually the opposite of our position. One of our specific criticisms of EBM is that it relies too heavily on placebo-controlled clinical trials and ignores other types of evidence, such as basic science. We have specifically stated this numerous times. We call ourselves science-based because we think all science should be considered, not just clinical evidence.

Where the authors are profoundly confused is that we also discuss the context of each type of evidence. We don’t dismiss evidence, we just put it into its proper context.

Preclinical studies are useful for determining plausibility and, to some degree, safety. They are not adequate for determining efficacy. A tiny percentage, less than 1%, of promising treatments from animal or other basic science studies turn out to be useful in humans. That is a fact.

So, basic science is good for plausibility and for preliminary safety. Preliminary clinical trials are unreliable but inform later, more definitive clinical trials. Only rigorous, replicable clinical trials have any real predictive value when it comes to efficacy – but these studies need to be put into the context of prior plausibility based on basic and other clinical science. Pragmatic studies are useful for informing real-world use of treatments, but cannot be used to determine efficacy.
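
As a rough illustration of how these pieces fit together (again my sketch with assumed numbers, not an SBM model), you can treat each independent, rigorous positive trial as a Bayesian update on a prior set by the basic science:

```python
# Sketch: each independent positive trial multiplies the odds that a
# treatment works by a likelihood ratio of roughly power/alpha.
# The prior and trial characteristics below are assumptions.

def after_positive_trial(prob: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Posterior probability after one rigorous positive trial (simplified)."""
    odds = prob / (1 - prob)
    odds *= power / alpha  # likelihood ratio of a positive result
    return odds / (1 + odds)

prob = 0.01  # low prior plausibility, e.g. a typical preclinical lead
for trial in (1, 2, 3):
    prob = after_positive_trial(prob)
    print(f"after positive trial {trial}: P(treatment works) = {prob:.2f}")
```

This is a deliberately simplified model (real trials are not binary and not perfectly independent), but it shows the direction of the logic: a single positive trial barely moves an implausible treatment, while independent replication can.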

Again – these are all noncontroversial positions generally accepted by actual scientists who know what they are talking about. We are not saying anything new or radical, just pointing this out and making sure it is not forgotten when considering individual questions in medicine.

It is actually our critics who are taking a new and radical position, one in opposition to the published evidence. They are the ones trying to change (specifically reduce) the usual standards of evidence that have evolved over the last century. If anything we are just trying to tweak those standards in light of recent research which informs how predictive and reliable current medical research is.

Conclusion: The need for charity

I invite the authors of the SelfHacked criticism to respond to this response, and also to update their article after actually reading some relevant SBM posts on the topics of their criticism.

I will also point out that there is a general lesson to be learned from the hack job offered by the SelfHacked authors. If you are going to criticize someone else’s position, you should make a sincere effort to understand what that position actually is, and to be fair to it (the principle of charity). Also, back up your claims with specific examples. If you can’t, there may be a reason for that.


Posted by Steven Novella