The topic of conflicts of interest among medical researchers has recently bubbled up to the public consciousness more than usual. The catalyst for this most recent round of criticism by the press and navel-gazing by researchers is Senator Charles Grassley's (R-IA) investigation of nine psychiatric researchers, one of whom held $6 million in stock in a company formed to bring a drug for depression to market but had allegedly concealed this, even though he was an investigator on an NIH grant to study the very drug he was developing. From my perspective, there is more than a little politics going on in this story, given that for the last decade federal law and policy, specifically the Bayh-Dole Act, have actually encouraged investigators and universities to co-develop drugs and treatments with industry. Even so, the story does bring into focus the issue of conflicts of interest, in particular undisclosed conflicts of interest. Two articles of note recently appeared in the scientific literature discussing this issue: one in Science in July (about the Grassley investigation) and an editorial in the Journal of Psychiatry and Neuroscience by Simon N. Young, PhD, Co-Editor-in-Chief of the journal and a faculty member at McGill University. I was more interested in the latter article because it takes a much broader view of the issue. Science-based medicine (SBM) depends upon the integrity of the science being done to justify treatments, so it's useful to discuss how conflicts of interest intersect with medical research.

In most public discussions of conflicts of interest (COIs), Young notes, the primary focus is on payments by pharmaceutical companies to investigators. Make no mistake, this is a big issue, but COIs are not just payments from drug companies. Indeed, I've written about just such COIs that have arguably impacted patient care negatively right here on this very blog, for example seeding trials (clinical trials designed by the marketing divisions of pharmaceutical companies) and a case of fraud that appeared to have been motivated by COIs. What needs to be understood is that every single scientific and medical investigator has COIs of one sort or another, and many are not financial. That's why I like Young's introduction to what COIs are:

A COI occurs when individuals’ personal interests are in conflict with their professional obligations. Often this means that someone will profit personally from decisions made in his or her professional role. The personal profit is not necessarily monetary; it could be progress toward the personal goals of the individual or organization, for example the success of a journal for a publisher or editor or the acceptance of ideas for a researcher. The concern is that a COI may bias behaviour, and it is the potential for bias that makes COIs so important. Before getting into the specifics of COIs, I will describe some of the research on the biases we all have, the evidence that we are not always aware of our own biases, how biases can be created by vested interests and how people behave in response to revelations of COIs. The idea that scientists are objective seekers of truth is a pleasing fiction, but counterproductive in so far as it can lessen vigilance against bias.

Oddly enough, financial COIs are probably the easiest to deal with as a practical matter. Here, transparency is key, because it's not necessarily COIs per se that can destroy the credibility of a study. Rather, it's the COIs that are not disclosed (henceforth referred to as undisclosed COIs, or UCOIs) that cause the most problems. If I read a paper in which it is disclosed that the first author received funding from the pharmaceutical company that makes the drug being studied, that in and of itself does not invalidate the study, the simplistically nonsensical arguments of, for example, Dr. Jay Gordon notwithstanding. (Dr. Gordon has of late been claiming that the promotion of H1N1 flu vaccination is driven far more by pharmaceutical companies' desire for profits than by scientific and public health considerations, and he likens vaccine manufacturers to tobacco companies promoting junk science.) Good science trumps COIs, and, in spite of how badly they are often castigated and some of their past misdeeds, pharmaceutical companies do fund a lot of good science. That being said, pharma funding of a study does make me look at it more skeptically than I would if it were funded by another source, mainly because there is a direct financial incentive, often with hundreds of millions of dollars invested in the development of a drug on the line, to have a good result. For these COIs, daylight is the best remedy. If readers don't know about a financial COI, they can't judge how important that COI is or is not, or how it might impact a study.

On the other hand, Young points out that, while disclosure is the primary means by which scientific organizations and journals deal with COIs, it is not a bullet-proof solution. True, there is evidence that disclosure can have the intended effect, and Young cites a study in which two manuscripts were given to two groups of reviewers, one manuscript to each group. The manuscripts' content was identical, but one listed COIs and the other did not. Those reading the manuscript with the disclosed COIs found the study it reported to be "less interesting and less important." However, it turns out that disclosure may not always be effective, due to paradoxical effects. First, authors who declare a COI may feel that their declaration "frees" them to be less objective and to argue their point more strongly. Also, authors may feel that the weight of the declared COI makes it necessary for them to exaggerate the significance of their results in order to overcome any additional skepticism the COI provokes in the reader. Finally, readers may actually be influenced by statements and information they know they should ignore, the COI disclosure notwithstanding.

Far more difficult to quantify are non-financial COIs. Let's start with one COI that each and every researcher can reasonably be assumed to have, and that's a strong desire to have one's ideas and hypotheses validated by science and accepted by the scientific community. I originally became a scientist, as well as a physician, in order to study cancer and to develop more effective therapies for cancer than what we currently have. To achieve that goal, I have to develop hypotheses that accurately describe a phenomenon and make useful predictions. I can see how easy it is to become emotionally attached to my preferred hypothesis, especially since I can see that if I'm right I will have a rare chance to improve significantly the care of breast cancer patients. If my hypothesis is wrong, it could well be back to the drawing board for one of the two projects my lab works on, with my career half over and not a lot of time left to make a mark. (Bringing an idea to fruition in terms of treatment can easily take 10 or 20 years, and I probably have about 20 years left in my career if I stay healthy and productive.) Add to that my desire to be perceived as a good researcher and for the strategy for treating breast cancer that I'm working on (but have not yet published on) to take hold, and it's easy to see that I have to be very, very careful not to let such considerations sway me. For a scientist, few things are more painful than to see a cherished hypothesis fall under the weight of opposing evidence.

I recently wrote about how a very common procedure for osteoporotic vertebral fractures, vertebroplasty, had recently been subjected to two significant clinical trials, one from the U.S. and one from Australia, both of which found vertebroplasty to be no more efficacious at relieving pain than placebo. Indeed, I even likened it to acupuncture and briefly recapped its history, in which small, lower quality studies (unblinded, no controls, or other methodological shortcomings), along with physicians’ anecdotal experience, led to the perception that vertebroplasty is efficacious. Consider this quote I included in my post from one of the early boosters of vertebroplasty, Dr. Joshua A. Hirsch, director of interventional neuroradiology at Massachusetts General Hospital in Boston:

“I adore my patients,” Dr. Hirsch added, “and it hurts me that they suffer, to the point that I come in on my days off to do these procedures.”

Never underestimate the desire of physician-scientists and physician-investigators to help their patients. Trust me, we get a rush when we can dramatically help a patient. Scientists get the same rush when their hypotheses are confirmed. It’s one of the rewards that motivate us. In Dr. Hirsch’s case, he even declared that he “believed in clinical trials” but that he really believed that vertebroplasty worked.

Here’s where basic human psychological considerations start to mix with more tangible COIs. Let me use myself as an example. If my hypothesis is not falsified, then there’s the potential for more publications, more grants, and much more prominence in the breast cancer research community than I currently have, which, admittedly, is relatively modest. More importantly, if my hypothesis is correct, the results of my research might improve the survival and ease the suffering of millions of women with breast cancer. Here, emotional attachment to a hypothesis can merge with other intangible COIs (fame, prominence, the respect of one’s peers) and tangible COIs (more grants, more publications, promotions, and awards). Any researcher who claims that these things don’t sway their decisions and thinking or don’t contribute to bias is either self-deluded or lying. Indeed, this pride of ownership of their own research can lead to overselling it or to downplaying its shortcomings:

This presumably is responsible for the fact that when authors were interviewed about their published papers “important weaknesses were often admitted on direct questioning but were not included in the published article.”46 Certainly editors are used to asking authors to mention the limitations of their studies and to be more cautious about the implications of the research. Another related factor is the desire for researchers to advance their careers and get recognition from their peers. Research suggests that social and monetary reward may work through both psychological47 and neuroanatomical processes48 that overlap to some extent. The big difference in relation to COIs is that social rewards, unlike monetary rewards, cannot be disclosed in any meaningful way.

Indeed they can’t, for scientists are human after all, and therefore subject to the same human needs as anyone else.

Another point that comes up in behavioral research about COIs is that human beings do not know their own minds very well. They think they do, but they really do not, which may account for just how vehemently so many researchers deny that financial support from drug companies or elsewhere affects their decisions. Ask yourself how many times you’ve heard a researcher claim that, yes, he has a COI, but he would never, ever be influenced by it; he can remain objective in spite of it. As Young summarizes:

A recent short review in Science asks how well people know their own minds and concludes the answer is not very well.3 This is because “In real life, people do not realize that their self-knowledge is a construction, and fail to recognize that they possess a vast adaptive unconscious that operates out of their conscious awareness.” Wilson and Brekke4 reviewed some of the unwanted influences on judgments and evaluations. They concluded that people find it difficult to avoid unwanted responses because of mental processing that is unconscious or uncontrollable. Moore and Loewenstein5 argue that “the automatic nature of self-interest gives it a primal power to influence judgment and makes it difficult for people to understand its influence on their judgment, let alone eradicate its influence.” They also point out that in contrast to self-interest, understanding one’s ethical and professional obligations involves a more thoughtful process. The involvement of different cognitive processes may make it difficult to reconcile self-interest and obligations. MacCoun,6 in an extensive review, examined the experimental evidence about bias in the interpretation and use of research results. He also discussed the evidence and theories concerning the cognitive and motivational mechanisms that produce bias. He concluded that people assume that their own views are objective and “that subjectivity (e.g., due to personal ideology) is the most likely explanation for their opponents’ conflicting perceptions.”

In other words, when it comes to COIs, we as human beings are in general very, very poor at judging how much we are being influenced by such considerations, because self-interest is primal and functions in very basic and largely unconscious areas of our minds. Again, this is why so many scientists will deny that they are being influenced by pharma funding and why physicians will vehemently deny that their prescribing choices are influenced by gifts received from drug companies. They really, really believe it, too. They’re not lying. Their self-image is that they are rational and can separate these COIs from their scientific and medical decision-making process, but behavioral research would argue otherwise.

So does all of this mean that the “complementary and alternative medicine” (CAM) crowd or the quacks of the world are right? Is science-based medicine so hopelessly compromised by COIs and bias that it can be discounted in favor of fairy dust like homeopathy, reiki, and other magic? Of course not. Reality is reality, science is science, and evidence trumps all, even COIs. Here, I think it’s instructive to contrast how science-based medicine deals with COIs now with how CAM advocates deal with them. However haltingly, messily, and at times inadequately science-based medicine deals with COIs, it is as nothing compared to how poorly CAM advocates deal with them. So how do CAM advocates and pseudoscientists deal with COIs? Basically, they don’t. To them, COIs only matter if the person with the COI is someone they don’t like. The aforementioned Dr. Gordon likes to state over and over again that if a study is funded by a drug company he doesn’t believe it. I’ve tried for five years to convince him that this is poppycock. Yet, when it comes to COIs, few can match CAM researchers.

Indeed, as long as the investigator or physician with a COI is on the “right” side, COIs can be completely overlooked. For example, the patron saint of the anti-vaccine and autism quackery movement, Dr. Andrew Wakefield, not only received large amounts of money from trial lawyers before doing his “research” in 1998 that sparked the MMR scare, but, as investigative journalist Brian Deer has reported, he had also filed for a patent on a vaccine to compete with the existing MMR. The result was incompetent and possibly even fraudulent research. Yet the anti-vaccine crank blog Age of Autism circles the wagons around Wakefield whenever he is criticized and even awarded him its “Galileo Award” for 2008. In announcing the award, Mark Blaxill likened the “inquisition” against Wakefield for his COIs and incompetent research to the Inquisition that ultimately forced Galileo to recant, calling the scientific complaints against Wakefield for his gross incompetence and undisclosed COIs a “religious war” and writing this gag-inducing bit of desecration of Galileo’s memory:

I wouldn’t in any way diminish the importance of Galileo, but in an interesting way, Wakefield’s steadfastness in the face of adversity outshines the man in whose name we honor him. For, although Galileo finally agreed to recant his support for heliocentrism, Wakefield has never buckled under the pressure. Instead he has stuck to his guns and continued to fight for families with autism.

I apologize for that one and ask for your forgiveness if you now find yourself praying to the porcelain god, disgorging the contents of your upper GI tract as an offering. Meanwhile, at the risk of causing a repeat trip to worship, I will point out that just last month the granddaddy of all anti-vaccine groups, with the wonderfully Orwellian name National Vaccine Information Center, gave Wakefield its Humanitarian Award. What can we conclude from this? Basically, in SBM, UCOIs, once discovered, usually get you castigated, particularly if there is even a whiff of fraud. In quackery land, they get you awards, regardless of whether there is fraud. There are numerous other examples of “alternative medicine” practitioners with nearly as massive COIs.

Clearly, there are at least two huge differences between pseudoscience and quackery versus SBM. In SBM, scientists try very hard to falsify their hypotheses in order to test their validity; in contrast, among pseudoscientists and quacks, rarely do “investigators” actually “test” anything. Rather, they look for confirmatory evidence to support their beliefs and view themselves as virtuous underdogs fighting for their patients, even as they subject those patients to useless or even harmful quackery. More telling is the reaction to COIs. This is where the hypocrisy of CAM supporters comes into full relief. AoA bloggers will blast, for instance, Paul Offit as Dr. PrOffit, and cranks like Robert F. Kennedy, Jr. will call him a “biostitute” for having received royalties from a drug company for the vaccine his lab invented. They’ll blast scientific journals for accepting too much advertising from drug companies. Yet, on AoA itself are numerous ads for compounding pharmacies, supplement sellers, gluten-free diets, and all manner of other unproven “treatments” for autism. Another example, Dr. Joseph Mercola, routinely castigates SBM for COIs while at the same time selling all manner of supplements and woo on his website. To the likes of J.B. Handley and his merry band of anti-vaccine cranks at Generation Rescue and AoA, or Dr. Mercola, it’s only a COI if they say it is. Indeed, the right kind of COI can even make you a hero in their eyes, and that’s not even counting the positive reinforcement given someone like Wakefield by legions of adoring anti-vaccinationists.

The bottom line is that COIs do matter. Because science is a human endeavor, it will never be perfectly pristine, because nothing humans do is perfectly pristine. Moreover, SBM hasn’t always understood or handled COIs, both disclosed and undisclosed, real or perceived, very well, to the point where new government regulations may well be necessary. In addition, there are always the more intangible COIs, such as pride of ownership of research, the desire to be proven right, and the respect of one’s peers, which, unlike financial COIs, can’t be quantified. As Young says in his article, the “objective of a literature relatively free of bias remains a pious but distant hope.”

Even so, I’ll paraphrase Winston Churchill’s (in)famous comment about democracy in describing SBM by saying that science-based medicine is the worst form of medicine, except for all those others that have been tried. In particular, that includes dogma-based medicine and anecdote-based medicine, two dominant forms of medicine that have been practiced from the days of ancient Egyptian physician-priests and shaman medicine men, through the days of barber-surgeons using bleeding as a treatment for almost everything, to the physicians of 200 years ago advocating purging and treatments with toxic metals like cadmium and antimony. Progress in medicine was glacial, with few advances over decades or even centuries, until science was applied to medical investigation in a serious and systematic way beginning in the 19th century and exploding in the 20th. The last 50 years have seen incredible advances in medical care, thanks to science.

True, SBM is not perfect. Financial interests, COIs, and the pride of individual practitioners undermine it, and there are a depressing number of ills for which it offers too little. But it’s so much better than any alternative we have tried before. It works, and, although it does so in fits and starts, sometimes all too slowly, it’s getting better all the time. Dealing more effectively with COIs will only help it to continue to do so.


Posted by David Gorski

David H. Gorski, MD, PhD, FACS is a surgical oncologist at the Barbara Ann Karmanos Cancer Institute specializing in breast cancer surgery, where he also serves as the American College of Surgeons Committee on Cancer Liaison Physician as well as an Associate Professor of Surgery and member of the faculty of the Graduate Program in Cancer Biology at Wayne State University.