
I say this at the beginning of nearly every post that I write on this topic, but it bears repeating. It is an unquestioned belief among believers in alternative medicine, and even among many people who simply do not trust conventional medicine, that conventional medicine kills. Not only does exaggerating the number of people who die due to medical complications or errors fit the worldview of people like Gary Null, Mike Adams, and Joe Mercola, but it’s good for business. After all, if conventional medicine is as dangerous as claimed, then the quackery peddled by the likes of Null, Adams, and Mercola starts looking better in comparison. Unfortunately, there are a number of academics more than willing to provide quacks with inflated estimates of deaths due to medical error. The most famous of these is Dr. Martin Makary of Johns Hopkins University, who published a review (not an original study, as those citing his estimates like to claim) estimating that the number of preventable deaths due to medical error is between 250,000 and 400,000 a year. That review cemented the common (and false) trope that “medical error is the third leading cause of death in the US” into the public consciousness, doing untold damage to public confidence in medicine. As I pointed out at the time, if this estimate were correct, it would mean that between 35% and 56% of all in-hospital deaths are due to medical error and that medical error causes between 10% and 15% of all deaths in the US. The innumeracy required to believe such estimates beggars belief.
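To see just how implausible those numbers are, it’s worth running the arithmetic. Here’s a minimal sketch of the sanity check, assuming roughly 715,000 annual in-hospital deaths and roughly 2.6 million total annual deaths in the US; these denominators are my assumptions (approximate figures for the period), not numbers taken from the review itself:

```python
# Sanity-check the claim of 250,000-400,000 annual deaths from medical error.
# Denominators are approximate US figures circa 2016 (my assumptions):
IN_HOSPITAL_DEATHS = 715_000    # annual US in-hospital deaths (approximate)
TOTAL_DEATHS = 2_600_000        # annual US deaths from all causes (approximate)

for claimed in (250_000, 400_000):
    pct_hospital = 100 * claimed / IN_HOSPITAL_DEATHS
    pct_total = 100 * claimed / TOTAL_DEATHS
    print(f"{claimed:,} deaths -> {pct_hospital:.0f}% of in-hospital deaths, "
          f"{pct_total:.0f}% of all US deaths")

# Output:
# 250,000 deaths -> 35% of in-hospital deaths, 10% of all US deaths
# 400,000 deaths -> 56% of in-hospital deaths, 15% of all US deaths
```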

Of course, even with academics providing them with hugely inflated estimates of deaths due to medical error, quacks remain unsatisfied. Perhaps the most famous estimate written by quacks is Gary Null’s Death by Medicine, each new version of which increases the estimate of the number of people who die because of medical errors and “conventional medicine,” to the point where his estimate approaches 800,000 deaths per year, or more than one third of all deaths in the US. (I strongly suspect that Null will find a way to get that estimate up over one million before too long.) That’s why it was refreshing to read a new meta-analysis (PDF) published last week by investigators at Yale University. It provides an estimate that’s significantly larger than the one in the last paper on this topic that I discussed, but more than ten-fold lower than the inflated “third leading cause of death” numbers.

Before I discuss the new Yale paper, I will, as I always do, provide a bit of history. The attempt to quantify how many deaths are attributable to medical error began in earnest in 1999 with the Institute of Medicine’s To Err Is Human, which estimated that the annual death toll due to medical error was 44,000 to 98,000, roughly one to two times the annual death toll from motor vehicle crashes. (This is the estimate to which the Yale investigators, led by Craig Gunderson with first author Benjamin Rodwin, compare their own.) In response to the report, the quality improvement (QI) revolution began, and every hospital started implementing QI initiatives. Indeed, I was co-director of a statewide QI effort for breast cancer patients for three years. Also, as I mentioned above, the estimates for “death by medicine” seemingly never do anything but increase. They went from 100,000 to 200,000 and now as high as 400,000. Basically, when it comes to these estimates, it seems as though everyone is in a race to see who can blame the most deaths on medical errors, and each time a larger estimate is published, the press gobbles it up uncritically. In contrast, each time a study publishes a more reasonable estimate, all we hear are crickets.

How did we get here? As Mark Hoofnagle has pointed out, the answer lies largely in methodology.

Mark was referring to the use of the Institute for Healthcare Improvement’s Global Trigger Tool, which is arguably far too sensitive. Also, as I explained in my deconstruction of the Johns Hopkins paper, the authors conflated unavoidable complications with medical errors, did not adequately assess whether the deaths were actually preventable, and extrapolated from small numbers. Many of these studies also relied on administrative databases, which are designed primarily for insurance billing and are thus poorly suited to measuring preventable deaths.

The Yale paper

If the estimates of 200,000 to 400,000 deaths are way too high, what is the real number of deaths that can be attributed to medical error? Here’s where the meta-analysis by Rodwin et al comes in, estimating the number of preventable deaths at just over 22,000 per year. That wouldn’t even crack the top ten leading causes of death in the US. In fact, preventable deaths due to medical error represent less than 1% of all deaths. That number is, of course, still too high, and efforts to decrease it should and will continue. (It can never be zero, given that medicine is a system run by human beings, who are inherently imperfect and sometimes make mistakes.) However, medical error is nowhere near the third leading cause of death in the US.

How did Rodwin et al derive their estimate? First, here’s their rationale:

In 1999, the Institute of Medicine (IOM) published its seminal report on medical errors, To Err Is Human: Building a Safer Health System.1 This widely cited analysis extrapolated from two studies of adverse events in hospitals and concluded that between 44,000 and 98,000 Americans die annually due to preventable medical error. The two referenced studies evaluated deaths from medical error by first determining the frequency of adverse events in hospitals and then separately deciding whether the adverse event was preventable and whether the adverse event caused harm.2, 3 More recently, a report including several additional studies concluded that medical error causes more than 250,000 inpatient deaths per year in the USA, making it the third leading cause of death behind only cancer and heart disease.4

Studies that review series of admissions and determine whether adverse events occurred, whether the events were preventable, and what harms resulted have been criticized for indirectness when used to estimate the number of deaths due to medical error.5, 6 In contrast, studies of inpatient deaths offer a more direct way of estimating the rate of preventable deaths. We undertook a systematic review and meta-analysis of studies that reviewed case series of inpatient deaths and used physician review to determine the proportion of preventable deaths.

To examine the question of how many deaths per year are preventable and possibly due to medical error, the authors carried out a systematic review and meta-analysis and took care to make separate estimates for patients with less than a three-month life expectancy and those with more than a three-month life expectancy. (Spoiler alert: They found that the vast majority of preventable deaths occur in patients with less than a three-month life expectancy.) They also included only studies in which the cases were reviewed by physicians to determine if the death was preventable:

All studies of case series of adult patients who died in the hospital and were reviewed by physicians to determine if the death was preventable were included. Non-English studies were included and translated using Google Translate, which has been shown to be a viable tool for the purpose of abstracting data for systematic reviews.10 Studies which evaluated a series of inpatient admissions to determine if there was a preventable adverse event, and then determined if that adverse event contributed to death, such as those included in the 1999 Institute of Medicine report, were excluded. We primarily searched for studies of consecutive or randomly selected inpatient deaths, but also included studies that used cohorts with selection criteria but analyzed these separately. Studies limited to specific populations such as pediatric, trauma, or maternity patients were excluded because our primary research question was to determine the overall rate of preventable mortality in hospitalized patients and these populations are less generalizable.

The winnowing process resulted in sixteen studies from a variety of countries that fit the inclusion criteria: eight of random or consecutive groups of patients and eight of cohorts with selection criteria, the latter of which were analyzed separately. Four of the studies examined data from multiple hospitals. Of the eight studies that could be included in the quantitative meta-analysis (the ones analyzing random or consecutive groups of patients), all defined preventable deaths as those rated as having a greater than 50% chance of having been preventable; seven used a Likert scale to rate preventability, while one used a scale of 0–100%. Five studies used multiple reviewers: three used consensus to arbitrate differences of opinion, one used a third reviewer, and one used latent class analysis. Six of the studies included adverse events prior to admission.

The results were as follows for the percentages of hospital deaths deemed more likely than not to have been preventable:

The overall pooled rate was 3.1% (95% CI 2.2–4.1%). Individual studies ranged from 1.4 to 4.4% preventable mortality with statistically significant evidence for heterogeneity (I2 = 84%, p < 0.001). The eight studies with selection criteria reported rates of preventable mortality ranging from 0.5 to 26.9%. One study from 1988 reported that 26.9% of 182 deaths for myocardial infarction, stroke, or pneumonia were > 50% likely to have been preventable.23 A study which evaluated 124 patients from the Emergency Department who died within 24 h of admission found that 25.8% of these deaths could have been prevented.29 Another study from 1994 reported that 21.6% of 22 deaths from certain diagnostic groups were at least “somewhat likely” to have been preventable.28 A large recent study from the Netherlands reported 9.4% of 2182 deaths as “potentially preventable.” The remaining studies with selection criteria reported rates of 0.5–6.2% preventable deaths.
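For readers unfamiliar with the statistics, here’s a minimal sketch of what “pooled rate,” random-effects weighting, and that I² heterogeneity statistic mean in practice, using the classic DerSimonian-Laird method. The study-level counts below are invented for illustration and are not the actual data from Rodwin et al; real meta-analyses of proportions also typically pool on a transformed (e.g., logit) scale rather than raw proportions:

```python
import numpy as np

# Invented study-level data for illustration: (preventable deaths, deaths reviewed).
studies = [(14, 1000), (31, 950), (44, 1400), (25, 820),
           (60, 2100), (38, 1300), (29, 1150), (90, 2500)]

p = np.array([d / n for d, n in studies])                            # per-study proportion
v = np.array([pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)])  # its sampling variance

w = 1 / v                                       # fixed-effect (inverse-variance) weights
p_fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - p_fixed) ** 2)              # Cochran's Q (heterogeneity statistic)
k = len(studies)
# DerSimonian-Laird estimate of between-study variance (tau^2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (v + tau2)                           # random-effects weights
p_re = np.sum(w_re * p) / np.sum(w_re)          # pooled proportion
se = np.sqrt(1 / np.sum(w_re))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100          # I^2: % of variability from heterogeneity

print(f"Pooled rate: {100 * p_re:.1f}% "
      f"(95% CI {100 * (p_re - 1.96 * se):.1f}-{100 * (p_re + 1.96 * se):.1f}%), "
      f"I2 = {i2:.0f}%")
```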

And here is the paper’s overall conclusion:

Overall, our systematic review found eight studies of hospitalized patients that reviewed case series of consecutive or randomly selected inpatient deaths and found that 3.1% of 12,503 deaths were judged to have been preventable. Additionally, two studies reported rates of preventable deaths for patients with at least 3 months life expectancy and reported that between 0.5 and 1.0% of these deaths were preventable. If these rates are multiplied by the number of annual deaths of hospitalized patients in the USA, our estimates equate to approximately 22,165 preventable deaths annually and up to 7,150 preventable deaths among patients with greater than 3 months life expectancy.31
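The extrapolation itself is just multiplication. Working backwards from the paper’s figures, the implied denominator is roughly 715,000 annual US inpatient deaths (22,165 / 0.031 ≈ 715,000); note that this denominator is my inference from the arithmetic, not a number quoted above:

```python
INPATIENT_DEATHS = 715_000  # implied annual US inpatient deaths (inferred from 22,165/0.031)

preventable_all = 0.031 * INPATIENT_DEATHS  # pooled preventable rate, all inpatient deaths
preventable_3mo = 0.010 * INPATIENT_DEATHS  # upper-bound rate, >3-month life expectancy

print(f"{preventable_all:,.0f} preventable deaths per year")                   # 22,165
print(f"up to {preventable_3mo:,.0f} among patients with >3 months to live")   # 7,150
```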

I note that the latter estimate of ~7,000 deaths a year in patients with more than three months’ life expectancy is pretty close to the estimate of ~5,000 preventable deaths per year noted in a study from last year that I discussed.

So what, specifically, were the errors that led to preventable hospital deaths? I don’t know why the authors buried the table in the supplemental materials, but I dug it out and examined the main causes. (The numbers in parentheses are the ranges of percentages of preventable deaths across the studies examined.) The main causes are:

  • Clinical monitoring or management (6–53%)
  • Diagnostic error (13–47%)
  • Surgery/procedure (4–38%)
  • Drug- or fluid-related (4–21%)
  • Other clinical (4–50%)
  • Infection or antibiotic related (2–14%)
  • Supervision (24%; only one study cited this cause)
  • Technical problem (6–9%)
  • Inpatient fall (6.5%; again, only one study)
  • Transition of care (3.2%; again, only one study)

Clearly, the ranges are wide, depending on the hospital and country. The top three don’t surprise me, although, as I’ve pointed out before, for surgery it’s not always easy to tell whether the cause of death was a surgical mistake or a known complication of the procedure. Even when carried out by expert hands, surgical procedures can cause significant complications (such as bleeding) in some patients, and even death in a handful; this is true even for seemingly very low-risk procedures. Similarly, diagnostic errors are tricky, as the error often becomes apparent only in retrospect. Nonetheless, this analysis does provide an idea of the sorts of medical errors that can result in potentially preventable deaths. Moreover, because the standard was simply that a death was more likely than not to have been due to medical error (and thus preventable), the figure of ~22,000 deaths/year is likely an overestimate, given that it includes deaths whose cause might well not have been medical error.

So how do Rodwin et al account for the huge difference between their estimate and the Institute of Medicine’s estimate of 44,000-98,000 preventable deaths due to medical error per year and, in particular, the ludicrously inflated estimates of greater than a quarter of a million deaths that produced the “third leading cause of death” trope? It’s mainly because the studies they included didn’t use trigger tools to look for complications and then estimate how likely those complications were to have been preventable and to have caused the patient’s death:

These results contrast with earlier estimates of medical error which reported higher rates of preventable mortality. The IOM report as well as similar subsequent reviews has reported much higher estimates.4 Numerous authors have criticized these prior estimates for varied methodologic reasons,5, 6 including poorly described methods for determining preventability and causality for death, as well as for indirectness—these studies have in common that they primarily attempt to define the incidence of adverse events in series of hospitalized patients and then secondarily estimate the likelihood that the adverse event was preventable and the likelihood that the adverse event, rather than underlying disease, caused the patient’s death. The studies we reviewed have the advantage of both using as their denominator a series of inpatient deaths rather than admissions and directly assessing the deaths for preventability.

This study is not without limitations, however. For one thing, the included studies rely solely on physician judgment to determine whether a given death was preventable. Given that there is no agreed-upon standard for determining whether a death was preventable, this methodology introduces potential biases, such as hindsight bias after poor outcomes. This particular bias, sometimes called the “knew-it-all-along” phenomenon, is very common after traumatic events or poor outcomes and describes the human tendency, when examining an event after the fact, to view the outcome as more predictable than it actually was to the people who were making the decisions at the time. Also, all determinations were made by retrospective chart review, and anyone who’s ever taken care of patients in a hospital knows that the medical record often lacks important information regarding management and death. Perhaps that’s why the inter-rater reliability between the doctors reviewing these charts was consistently only in the fair to moderate range in these studies. In any event, hindsight bias would tend to inflate the estimate of preventable deaths, because doctors reviewing a chart while knowing the outcome are prone to overestimate how predictable that outcome was.
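As an aside, “fair to moderate” refers to conventional benchmarks for chance-corrected agreement statistics such as Cohen’s kappa (on the widely used Landis-Koch scale, roughly 0.21-0.40 is “fair” and 0.41-0.60 “moderate”). Here’s a minimal sketch of how kappa would be computed for two physician reviewers rating the same charts; the ratings below are invented for illustration:

```python
from collections import Counter

# Invented preventability calls (1 = preventable, 0 = not) from two reviewers
# on the same 20 charts; illustration only.
reviewer_a = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
reviewer_b = [0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n  # raw agreement

# Agreement expected by chance alone, from each reviewer's marginal rates
counts_a, counts_b = Counter(reviewer_a), Counter(reviewer_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in (0, 1))

kappa = (observed - expected) / (1 - expected)
print(f"observed {observed:.2f}, chance {expected:.2f}, kappa {kappa:.2f}")
# kappa ≈ 0.47 here: "moderate" agreement on the Landis-Koch scale
```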

Another factor that tends to inflate the estimates is that six of the eight studies included medical errors from prior admissions or outpatient care in their analyses, which could lead to an overestimate of the number of preventable deaths attributable to care during the hospitalization itself. Only one study tried to separate the two, and it found that 25% of preventable deaths were related to prior outpatient events. On the other hand, I’d argue that a medical error is a medical error, regardless of when it happened. If a doctor made an error that harmed the patient in the outpatient setting and the patient died in the hospital after being admitted for the harm caused by that error, that’s still a death due to medical error.

There was also an interesting quirk:

A limitation of our study is also the limited geographic representation due to a lack of studies from the USA. The eight studies included in the meta-analysis are from Europe and Canada. The three studies from the USA were not included in the meta-analysis since they used selected cohorts of patients with an oversampling of specific conditions, and thus per protocol were not pooled with studies of consecutive or randomly selected cohorts.

Why do American studies use a selected-cohort methodology that oversamples specific conditions, instead of an approach more directly applicable to estimating preventable hospital mortality? Who knows? (Maybe someone out there does.)

Implications of a lower estimate of medical errors

The bottom line is that, if this study is an accurate reflection of the true number of preventable deaths due to medical error (and I think it’s very good), only around 7,150 patients with a life expectancy of more than three months die preventable deaths from medical error each year, and the vast majority of such deaths occur in people not expected to live more than three months. We’re talking estimates more than an order of magnitude smaller than the “one third of all deaths” trope. This has implications. For instance:

“We still have work to do, but statements like ‘the number of people who die unnecessarily in hospitals is equal to a jumbo jet crash every day’ are clearly exaggerated,” said corresponding author Benjamin Rodwin, assistant professor of internal medicine at Yale.

More importantly, after noting that recent high estimates of preventable deaths are not plausible given that only a small fraction of hospital deaths are preventable, and that such exaggerated estimates undermine the credibility of the patient safety movement and divert attention from other important patient safety priorities, Rodwin et al write:

Another important implication of our study relates to the use of hospital mortality rates as quality measures. Overall hospital mortality rates and disease-specific mortality rates continue to be reported in many countries in Europe and the USA.32, 33 In the USA, overall hospital mortality rates are reported by the Veterans Health Administration and disease and procedure-specific mortality rates are used by the Centers for Medicare and Medicaid Services (CMS). Disease-specific mortality rates are also used to determine hospital reimbursement as part of CMS’ Hospital Value-Based Purchasing Program. Our results show that the large majority of inpatient deaths are not due to preventable medical error. Given this finding, variation in hospital mortality rates is more likely due to variation in disease severity and non-disease-related factors that affect the location of a patient’s death. Although disease severity is taken into account through the reporting of adjusted mortality rates, numerous critiques have pointed out the limitations of this approach.34,35,36,37

Even if disease patterns and severity were uniform, however, there would likely be variation in hospital mortality rates because of variation in the use of hospitals at the end of life.28, 37 If it is assumed that the vast majority of hospital deaths are unavoidable, then variation in inpatient mortality should be seen as a measure of where patients die, rather than whether they die. Numerous studies have found that many non-disease-related factors affect location of death, including referral to palliative care, home support, living situation, functional status, and patient and family preferences.38
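To make the point about adjusted mortality concrete: risk-adjusted hospital mortality is typically summarized as a standardized mortality ratio (SMR), observed deaths divided by the deaths a risk model predicts. Here’s a minimal sketch, with all numbers invented, showing how end-of-life admission patterns alone can make one hospital look worse than another even when quality of care is identical:

```python
# Two hypothetical hospitals with identical care quality. Hospital B admits
# more dying patients for end-of-life care (no hospice alternative), so its
# observed deaths exceed what the risk model predicts. All numbers invented.

def smr(observed_deaths, predicted_risks):
    """Standardized mortality ratio: observed / expected deaths."""
    expected = sum(predicted_risks)
    return observed_deaths / expected

# 1,000 admissions each; the risk model assigns each patient a death probability.
hospital_a_risks = [0.02] * 950 + [0.50] * 50    # few end-of-life admissions
hospital_b_risks = [0.02] * 900 + [0.50] * 100   # more end-of-life admissions

# Suppose the model under-predicts risk for end-of-life patients (true risk 0.9).
deaths_a = round(0.02 * 950 + 0.90 * 50)    # 64 observed deaths
deaths_b = round(0.02 * 900 + 0.90 * 100)   # 108 observed deaths

print(f"Hospital A SMR: {smr(deaths_a, hospital_a_risks):.2f}")  # ~1.45
print(f"Hospital B SMR: {smr(deaths_b, hospital_b_risks):.2f}")  # ~1.59
```

Hospital B’s higher SMR here reflects nothing but its admission mix and imperfect risk adjustment, which is exactly the kind of non-quality variation the authors describe.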

Elsewhere, the authors note that in Norway there is no hospice system and therefore patients are often admitted to the hospital for end-of-life care, an observation that surprised me. They further point out that this could be why the study from Norway included in their meta-analysis reported the lowest rate of preventable mortality: patients admitted for end-of-life care were counted as unpreventable deaths, which diluted the percentage of preventable deaths relative to hospitals in countries with hospice systems. In other words—surprise! surprise!—hospital mortality rates are a poor measure of the quality of inpatient hospital care.

More importantly, if we’re truly going to improve quality of care and patient safety, it’s important to focus our efforts where they will do the most good. To do that, we need accurate data. Innumerate and highly implausible estimates that result in the “third leading cause of death” trope credulously bandied about by the press and amplified by quacks are actually antithetical to improving quality of care.

