As I’ve mentioned before, the single biggest difference between science-based medicine (SBM) and what I like to call pseudoscience-based medicine, namely the vast majority of what passes for “complementary and alternative medicine” (CAM) or “integrative medicine” is that SBM makes an active effort to improve. It seeks to improve efficacy of care by doing basic and clinical research. Then it seeks to improve the quality of care by applying the results of that research to patient care. Yes, the process is complicated and messy, and it frequently doesn’t progress as fast as we would like it to. Sometimes it goes down blind alleys or takes wrong turns, such as when a treatment is adopted too rapidly and determined later to be ineffective. Overall, however, improvement does occur, and it continues to occur. New treatments that work better are discovered. Old treatments that don’t work as well (or that don’t work at all) are abandoned.

There is, however, a blurry line between what constitutes medical research and what constitutes quality improvement (QI). A couple of years ago, in one of those unexpected turns that a career can take, an opportunity presented itself for me to become co-director of a statewide quality improvement consortium for breast cancer care. As I’ve alluded to before, it was a case of unexpectedly being in the right place at the right time, of seeing an opportunity and being willing to take it. How I ended up making quality improvement a large part of my career is unimportant. What is important is that it puts me in a unique position among the SBM contributors to discuss the interface between science and quality. (It’s also important that I lay down a disclaimer here: this post represents my opinion and my opinion alone; it does not represent the views of the QI consortium with which I’m affiliated, my cancer center, or my university.) In particular, there are ethical considerations that are not obvious, apparently even to someone as brilliant as Steven Pinker, who tweeted yesterday:

I was puzzled by this particular Tweet, which pointed to an article published over the weekend in the New York Times about a new $300 million outpatient surgery center at Memorial Sloan Kettering Cancer Center, the Josie Robertson Surgery Center, that is touted as being the height of innovation. The article describes a wide variety of practices being tested, ranging from practices designed to produce a better patient experience, such as tracking badges that allow coordinators to find patients and their families wherever they are, to changes in procedures, such as trying to send patients home earlier, omitting the use of drains, and using those same tracking badges to monitor patient activity and quantify the distances patients walk after surgery as a means of predicting who can go home sooner. It’s a perfect example of how the line between research and quality improvement can be a bit fuzzy. This appears to be the passage that irritated Pinker:

But this race to innovation, bioethicists say, has created a gray area. While federal regulations require researchers to obtain patient consent for participation in clinical trials for novel drugs and devices, hospitals can freely enact internal quality improvement exercises without consent — even if there might be consequences for patient care. Medical centers typically do not inform patients every time they use them to test some new health app, or nursing staff reduction, or data analysis technique — changes that may or may not ultimately benefit the patient’s health. “It is clearly a blurry space,” says Nancy Kass, a bioethics professor who is the deputy director for public health at the Johns Hopkins Berman Institute of Bioethics in Baltimore. “It doesn’t matter if it’s quality improvement or research. The questions we should be asking are: Should we be talking about it? What should we be telling patients about it? What do we know about it that makes us think that it works? What do we know about it to suggest that it is safe, or might be risky, or have some uncertainties?”

My first thought upon reading the passage that provoked Pinker’s criticism was: What does Pinker have against bioethicists? After all, there is nothing in Kass’ statements above that strikes me as unreasonable or onerous, which is why I couldn’t for the life of me figure out how Pinker could bristle at this actually rather bland statement of caution, which merely points out that quality improvement efforts can sometimes cross over into a realm where they are not easily distinguishable from human subjects research (HSR) of the sort that requires approval by an institutional review board (IRB). It is a question that all of us involved in quality improvement efforts grapple with not infrequently. Indeed, Pinker’s Tweet reveals a shocking ignorance of some very basic aspects of medicine, so basic that a blog friend of mine pointed out the issue with Pinker’s Tweet:

And:

Why, indeed? Pinker seems to view annoying ethical considerations about what does and doesn’t constitute medical research, along with the requirement to obtain informed consent, as being in “opposition” to “innovation” (whatever “innovation” means in this context, given that everyone and his grandmother claims what they’re doing is “innovation”). There has been and always will be a tension between testing new treatments and procedures and protecting patients from harm, but to look at all “innovation” of the sort being tested in the “ideas laboratory” of MSKCC’s outpatient surgical center as somehow inherently good, helping everyone and hurting no one, is unbelievably naïve. That’s even leaving aside the privacy issues presented by systems such as the patient tracking system currently being tested. So what is the difference between research and quality improvement, and what issues do we frequently run into? Let’s take a look.

What is the difference between research and quality improvement?

“Research.” It’s a simple word. We all think we know what it means. However, it has a very specific meaning when we are discussing human subjects research, which is regulated by federal law under the Common Rule; that law determines which activities are subject to IRB approval and supervision. Given that most human subjects research in medicine falls under the purview of the Department of Health and Human Services (HHS), it’s instructive to head over to its website and look under 45 CFR §46.102:

(d) Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge. Activities which meet this definition constitute research for purposes of this policy, whether or not they are conducted or supported under a program which is considered research for other purposes. For example, some demonstration and service programs may include research activities.

And a human subject is:

(f) Human subject means a living individual about whom an investigator (whether professional or student) conducting research obtains

  1. Data through intervention or interaction with the individual, or
  2. Identifiable private information.

This is (or at least was) a fairly reasonable definition of human subjects research: an activity designed to develop or contribute to generalizable knowledge. It’s also, unfortunately, almost by necessity somewhat vague, in that it’s not always easy to tell which activities are or are not designed to produce generalizable knowledge. Testing a new drug or surgical procedure in a randomized clinical trial is undeniably research under this definition. But what about looking at the outcomes of patients treated with different procedures? That’s probably research, but it might not be.

Perhaps it will help to contrast quality improvement with research. The American College of Surgeons published a guide with a reasonable differentiation between the two, noting succinctly (which is why I’m quoting it) that research “is carried out to add to the profession’s understanding of surgical conditions and disease in contrast to altering or comparing established or already validated treatment options.” It also notes that QI “is more often designed to study whether an accepted norm or behavior is being conducted locally and, if not, to modify the actions of the personnel involved so as to approach these norms.” In other words, QI is designed to test whether accepted standards of care are being followed, often at a local, single hospital level, and then to initiate processes to decrease the amount of care that is delivered that is not concordant with evidence-based guidelines:

Although QI and HSR often overlap, there are several fundamental differences between the two. First, as already noted, HSR involves generating or contributing to generalizable knowledge, whereas QI attempts to improve a program or service or align current treatment with established best practices and evidence-based medicine. HSR often involves randomization of patients, whereas QI projects typically do not randomize to various treatment arms and more often subject the entire population to a system or policy change, often tracked over time. Whereas HSR generates findings that might affect future policies, QI findings intentionally address current standards or protocols and attempt to change the policies and standards for subsequent patients or encounters. HSR is rooted in identifying a subset of patients to study in the most controlled manner, often using strict inclusion and exclusion criteria to define the population of interest. Conversely, QI is typically all-encompassing for all patients who may have a disease, undergo a procedure, or interact with a specific aspect of the health care system. Exclusion of specific populations or creating exceptions from the algorithm may subvert the QI effort.

Another difference is that human subjects participating in HSR are not guaranteed to benefit. Indeed, they might be harmed, hence the need for truly informed consent that includes a detailed discussion of the risks and potential benefits. In contrast, QI efforts are designed to produce a clear benefit to subsequent patients in terms of safety, efficacy, efficiency, patient satisfaction, quality, or some other concretely measurable outcome in the near term. Also, unlike HSR, which usually has a rigid protocol, pre-defined accrual goals, and a defined time frame, QI can be (and should be) continuous. In addition, HSR is basically designed to be published in the scientific literature. QI, on the other hand, frequently remains unpublished and is used internally. That doesn’t mean QI work isn’t published, but when it is, it’s usually examples of protocols or measurements of how well a hospital or group of hospitals adheres to evidence-based guidelines. Finally, QI generally looks only at routine patient care information and does not manipulate patient care by testing one method against another. Here’s a useful decision tree:

Decision tree
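The screening questions behind a decision tree like this can be sketched as a toy function. To be clear, this is purely illustrative, based only on the distinctions discussed above (intent to generalize, randomization, deviation from accepted standard of care); the function name and flags are my own invention, not any IRB’s or institution’s actual criteria, and no such sketch substitutes for the worksheets linked below or for a formal IRB determination:

```python
# Toy sketch of the QI-vs-HSR screening questions discussed in the text.
# Illustrative only -- NOT a regulatory tool or an actual IRB checklist.

def needs_irb_review(
    intends_generalizable_knowledge: bool,
    randomizes_patients: bool,
    deviates_from_standard_of_care: bool,
) -> bool:
    """Return True if a project screens as human subjects research (HSR).

    The questions mirror the distinctions above: HSR aims at generalizable
    knowledge, often randomizes patients, and tests something other than
    the accepted standard of care; QI checks and improves local adherence
    to already-validated practices.
    """
    return (
        intends_generalizable_knowledge
        or randomizes_patients
        or deviates_from_standard_of_care
    )

# A badge-based study of walking distance as a predictor of discharge
# readiness aims at generalizable knowledge from a new measurement:
print(needs_irb_review(True, False, True))    # True -> looks like HSR
# A local audit of adherence to an established guideline does not:
print(needs_irb_review(False, False, False))  # False -> looks like QI
```

Any “yes” answer pushes a project toward the HSR side of the line and, with it, toward informed consent and IRB oversight, which is exactly why the gray areas described below matter.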

Children’s Hospital of Philadelphia also publishes a useful worksheet for differentiating between the two, as do Stanford University and the University of Wisconsin.

So what’s the problem?

The main distinguishing characteristic of QI compared to HSR is its emphasis on standards: testing whether what we as physicians and other health care professionals do when caring for our patients is concordant with currently accepted evidence-based practices, then continuously changing our practices to encourage concordance with science-based guidelines (as measured by concrete metrics) and to discourage practices clearly contrary to those guidelines. The devil, as always, is in the details, particularly how adherence to science- and evidence-based guidelines is assessed and how QI corrective measures are implemented in response to findings. When I first signed on to the QI consortium of which I’m now co-director, it was as the clinical champion for my institution, one of 25 member institutions; I represented no more than my institution. Yet I had to spearhead putting together an IRB application in order for our cancer center to be a part of the QI, because the consortium, early on and “just to be safe,” required this of all its member institutions. That requirement has since been dropped. In any case, requiring IRB approval for every QI project would definitely inhibit QI activities. On the other hand, it’s definitely a mistake to assume that all QI processes are benign, producing only benefit and having no potential to cause harm.

It’s also a mistake to assume that what is going on at MSKCC is only QI. Let’s just put it this way. “Innovation” is not synonymous with quality improvement, contrary to Pinker’s seeming assumption in his reaction to this NYT article. Indeed, almost by definition, “innovation” implies some form of research to test whether that innovation produces better outcomes or better measurable metrics than the old practices did. Certainly, some of what MSKCC is doing doesn’t sound like research. For instance, on its surface the hospital check-in process in and of itself does not sound as though it is HSR in that it’s not really being implemented to improve outcomes or adherence to evidence-based guidelines but rather to make things easier for patients and their families. That, of course, is not the sort of issue that concerns me.

This is:

For instance, the surgery center has done away with some standard medical practices — such as having a designated postoperative recovery unit where specialized nurses monitor patients coming out of anesthesia. Instead, patients will go directly from surgery to private rooms where cross-trained nurses will monitor their recovery from anesthesia.

“The leadership saw this as an opportunity to be a little bit distant from the big-box academic medical center, to test out new work flows, to test out new technology,” Dr. Simon says. “It’s a learning lab for new systems.”

So why is this not a QI project? Because it’s not standard of care. It’s something new, something different from what is normally done. While there are evidence-based guidelines for recovery from anesthesia, there are no agreed-upon evidence-based guidelines regarding such a facility. Moreover, it is not at all unreasonable to be concerned that such a facility could cause harm to some patients. In a private room, unless nurses are assigned to one-on-one coverage (which would be very expensive), it’s more difficult to monitor a patient, and one does have to wonder whether cross-trained nurses will be as adept as post-anesthesia care unit (PACU) nurses at recognizing subtle warning signs of trouble in recovering patients. Indeed, the leadership even appears implicitly to acknowledge that this is research, testing out “new work flows,” “new technology,” and “new systems.” From what Dr. Simon says, it almost sounds as though the intent of this new outpatient surgery center is to be free from such pesky considerations as whether their tinkering constitutes research and the possibility that someone will demand IRB approval.

Here’s another example:

For instance, Memorial Sloan Kettering administrators have data indicating that elderly patients in the quality improvement program had a higher chance of not being ready to go home the day after their surgeries. At the new center, doctors will study whether additional measures, such as geriatric consults for patients over 75, will improve their chances of a shorter stay.

I’m sorry. This is HSR, plain and simple, by the HHS definition. Clearly, MSKCC is seeking to develop generalizable knowledge about a specific question here. It’s not even really a gray area. Similarly, circling back to those tracking badges, so is this:

For one thing, administrators intend to update the traditional practice of asking patients to walk around soon after surgery. They say they plan to use patients’ locator badges as activity monitors, allowing medical teams to quantify and analyze the distances patients walk. It is a step that may make some patients feel more in control of their recovery — while others may feel more burdened by the added surveillance.

“We don’t know what the data means, because no one has ever measured it before,” Dr. Brett A. Simon, an anesthesiologist who is the director of the surgery center, told me in an interview this month at the new building. Still, he hopes the novel data might eventually be used as a benchmark to help distinguish patients who are recovering on schedule from those who have pain or other symptoms that need to be managed.

“Maybe there’s a predictive value,” Dr. Simon says.

Or maybe, like billions of other data points collected by devices, the distance measurements will prove to be mere noise.

How anyone can view the question of whether the data collected by these tracking badges have predictive value for health outcomes, such as length of stay, as anything other than a research question is beyond me. Dr. Simon is talking about producing generalizable knowledge based on measurements from a new piece of technology, yet the practice is justified thus:

So far, about 10,000 patients have gone through the program. But doctors typically do not tell patients that they have been selected for a more streamlined approach to surgical recovery, Dr. Simon says. That is because the actual surgery and medical treatments patients receive have not changed, just related practices.

If you’re going to determine whether a metric measured by new technology can predict whether a patient needs to stay in the hospital longer, that’s not just a “related practice.” That’s part of medical practice. Similarly, whether or not a drain is left in is not a “related practice.” It’s part of surgical practice, because the presence or absence of a drain can influence the rate of abscess formation after many surgical procedures.

The most important issue of all

The cardinal rules for any form of HSR include respect for subject autonomy; clinical equipoise, namely the requirement that there be genuine uncertainty over whether the options being tested are better or worse; and procedures at every step of the process to minimize risk to the human research subject. These principles are embodied in regulations requiring informed consent, oversight by an ethics board (the IRB), and research designs that minimize the risk of harm to human research subjects. The reason QI projects are exempt from IRB approval and these other standards is that they involve trying to bring every doctor and hospital up to the science- and evidence-based standard of care, rather than testing new interventions.

What it does not involve is this:

The Josie Robertson center is clearly introducing novel health care approaches that may significantly enhance its patients’ experiences. But given the fact that Memorial Sloan Kettering is a world-renowned institution that many other medical centers tend to follow, it seems remarkable that administrators there have yet to pioneer an equally innovative system for transparently communicating their improvement endeavors to patients.

Dr. Simon says he is working on it.

“We do want to communicate the things we are doing that are new and that improve the patient experience without making them feel alarmed or that they are experimental animals,” Dr. Simon said. “I’m not sure we have 100 percent figured out how to do that.”

This is what I call putting the cart before the horse. This is the sort of thing that should be figured out before implementation, particularly when these sorts of initiatives are coming from a health care institution as revered as MSKCC.

It also doesn’t help that marketing intrudes in a big way in these sorts of programs. As the NYT article clearly shows, much of the reason for the existence of MSKCC’s outpatient surgical center and its high tech initiatives stems not from a medical need, but rather from a need to outcompete its rivals for attracting patients. Given how hypercompetitive an environment medicine in Manhattan has become, this is not surprising. The same thing is happening all over the country, albeit perhaps with not quite the same intensity.

Process improvement is important. Quality improvement in health care is also important. However, protection of patients, including any patient who becomes, whether he knows it or not, a human subject in human subjects research, trumps both. Clearly Steven Pinker does not understand this simple concept. In the meantime, efforts are under way to update the Common Rule for the 21st century. Unfortunately for purposes of this discussion, the proposed changes deal more with streamlining approvals for minimal-risk research and the like than with more carefully delineating the difference between QI and HSR. Be that as it may, no one wants to hinder QI efforts unduly, but it has to be remembered that real people can suffer real harm from QI efforts that backfire. This is particularly true given how much efforts like the outpatient surgery center at MSKCC are driven more by marketing than by actual QI or science.

The line between quality improvement and medical research might be a fine one, but it’s not so fine that “innovation” should ever trump protecting patient autonomy and safety. Pinker’s attitude is an all too common one that naïvely views all “innovation” as good for patients and seems oblivious to the fact that innovation means little if it isn’t tested in medical research to see whether it actually improves patient care.


Author

Posted by David Gorski

David H. Gorski, MD, PhD, FACS is a surgical oncologist at the Barbara Ann Karmanos Cancer Institute specializing in breast cancer surgery, where he also serves as the American College of Surgeons Committee on Cancer Liaison Physician, as well as an Associate Professor of Surgery and member of the faculty of the Graduate Program in Cancer Biology at Wayne State University.