I am formally requesting that Cancer retract an article claiming that psychotherapy delays recurrence and extends survival time for breast cancer patients. Regardless of whether I succeed in getting a retraction, I hope I will prompt other efforts to retract such articles. My letter appears later in this post.

In seeking retraction, I cite the standards of the Committee on Publication Ethics (COPE) for retraction. Claims in the article are not borne out in simple analyses that were not provided in the article, but should have been. The authors instead took refuge in inappropriate multivariate analyses that have a high likelihood of being spurious and of capitalizing on chance.

The article exemplifies a much larger problem. Claims about innovative cancer treatments are often unsubstantiated, hyped, lacking in a plausible mechanism, or are simply voodoo science. We don’t have to go to dubious websites to find evidence of this. All we have to do is search the peer-reviewed literature with Google Scholar or PubMed. Try looking up therapeutic touch (TT).


I uncovered unsubstantiated claims and implausible mechanisms that persisted after peer review in another blog post about the respected, high-impact-factor (JIF = 18.03) Journal of Clinical Oncology. We obviously cannot depend on the peer-review process to filter out this misinformation. The Science-Based Medicine blog provides tools and cultivates skepticism not only in laypersons but in professionals, including, one hopes, the reviewers who seem deficient in both. However, we need to be alert to opportunities not just to educate, but to directly challenge and remove bad science from the literature.

A brief history

You may recall my previously blogging about the article that I am now requesting be retracted. I described how the authors unsuccessfully attempted to block publication of a criticism of their study. They then refused to respond when the criticism was published. The article spun faulty analyses and avoided simple analyses that showed that giving cancer patients breathing exercises and encouragement of healthy behaviors does not forestall recurrence or extend their lives.

But earlier papers from the same project did some spinning as well. For instance, one paper tested the more reasonable hypothesis that psychological interventions can reduce emotional distress, improve health behaviors and dose-intensity, and enhance immune responses.

There is no evidence that any changes in the immune parameters that were studied would be clinically significant, or that there exists any plausible mechanism by which recurrence and survival might be affected. But why wouldn’t we expect a support group intervention to affect mood and health behavior in the way that was hypothesized?

The abstract claimed uniformly positive results in terms of effects on anxiety and improved dietary habits, smoking, and adherence, with no negative results mentioned. However — and here I draw on my past blog post — the authors cast a wide net, with the methods section revealing assessment of at least:

  • 9 measures of mood (evidence from other articles from the same project suggests that even more mood measures were assessed but went unreported here);
  • 8 measures of health behavior;
  • 4 measures of adherence;
  • 15 measures of immune function.

Turning to the actual results, only 1 of the 9 measures of mood was significant in time-by-treatment interactions. The intervention seemed to have a significant effect on “dietary behavior” (although it is unclear why the results for very different individual dietary behaviors were not individually provided) and smoking, but no effect on exercise. As is often the case with early breast cancer patients, rates of adherence to chemotherapy were too high to allow any differences between the intervention and control groups to emerge.

In terms of immune function, results were not significant for counts of CD3, CD4, or CD8 cells, or six assays of natural killer cell lysis.

These overall results suggest that what we were told in the abstract reflects gross confirmatory bias: the suppression of negative results and the highlighting of positive ones likely due to chance.
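To put a number on how easily “positive” findings arise from so many outcome measures, here is a minimal sketch. It is my own illustration, assuming for simplicity that the tests are independent (which correlated mood and immune measures are not), using the conventional .05 threshold:

```python
def chance_of_false_positive(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of at least one 'significant' result among
    n_tests independent tests of true null hypotheses at threshold alpha."""
    return 1 - (1 - alpha) ** n_tests

# The 9 mood measures alone
print(round(chance_of_false_positive(9), 2))   # ~0.37

# All ~36 assessed outcome measures (9 mood + 8 behavior + 4 adherence + 15 immune)
print(round(chance_of_false_positive(36), 2))  # ~0.84
```

Even under these simplifying assumptions, a study casting this wide a net is more likely than not to produce something “significant” by chance alone.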

Claims about biomedical outcomes such as recurrence and death need to be evaluated as such. It is unlikely that claims about a new form of chemotherapy or radiotherapy would be accepted if they were based on faulty post-hoc analyses of a study with such a small sample size, and with results contradicted by more straightforward, appropriate analyses. But for a variety of reasons, claims about psychological interventions improving survival are often given special consideration not only in the media, but in scientific journals.

Let us get real. Given the effectiveness of current treatments for early breast cancer, an effort to demonstrate an improvement over results obtained in guideline-consistent routine care would have to involve thousands of patients, not the 227 recruited for this study. In the U.S. the 5-year survival rate for women with localized breast cancer is now 98.5%. And then of course there is the lack of plausible biological mechanism by which group therapy could slow recurrence and extend survival.
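A rough power calculation illustrates the point. The sketch below is my own illustrative arithmetic, not from the article: it uses the standard two-proportion z-test approximation, with conventional choices of a two-sided alpha of .05 and 80% power, to estimate the per-arm sample size needed to detect even a half-percentage-point improvement in 5-year survival (98.5% to 99.0%):

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-sided two-proportion
    z-test (unpooled variance approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 98.5% -> 99.0% survival improvement
print(n_per_arm(0.985, 0.990))  # several thousand per arm, versus 227 total in the study
```

The required sample runs to thousands of patients per arm, more than an order of magnitude beyond what this study recruited in total.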

Mind over cancer

The idea that the power of mind can triumph over physical illness is deeply embedded in Western folk culture and has its roots in classical Greek and Roman thinking. Yet, as we showed in a systematic review of the literature, “No randomized trial designed with survival as a primary endpoint and in which psychotherapy was not confounded with medical care has yielded a positive effect.”
Investigators who had undertaken ambitious, well-designed trials to test the efficacy of psychosocial interventions but obtained null results echoed our assessment with reviews declaring “Letting Go of Hope” and “Time to Move on.”

I provided an extensive review of the literature concerning whether psychotherapy and support groups increased survival time in an earlier blog post. Hasn’t the issue of mind-over-cancer been laid to rest? I was recently contacted by a science journalist interested in writing an article about this controversy. After a long discussion, he concluded that the issue was settled — no effect had been found — and he would not succeed in pitching his idea for an article to a quality magazine.

But hold on.

Considering the question open is vital to continued federal funding of the field of psychoneuroimmunology (PNI). Without having established that psychological interventions actually improve the survival of cancer patients, PNI seeks to explain how these interventions succeed. It is classic tooth fairy science.


The NCI in particular provides financial support to efforts to promote psychoneuroimmunology in articles in special issues of journals and symposia that exclude anyone who might express skepticism. As an example, see an extraordinary 2013 article, “Psychoneuroimmunology and cancer: a decade of discovery, paradigm shifts, and methodological innovations,” published by McDonald, O’Connell & Lutgendorf in Brain, Behavior, and Immunity (30, S1–S9).
The article celebrated a decade of NCI support for psychoneuroimmunology, starting with a 2002 conference and a resulting 2003 monograph. The three authors were an NCI program officer, a science writer paid by the NCI, and a proponent of Therapeutic Touch. The 2013 article introduced a collection of carefully selected invited reviews:

This [2003] seminal volume captured state-of-the-science reviews and commentaries by leading experts in psychoneuroimmunology (PNI) and served as a catalyst for biobehavioral research conducted in a cancer context. In the decade prior to the NCI commissioned supplement, Brain, Behavior, and Immunity published only 12 cancer-relevant articles. Since the 2003 supplement, the journal has featured 128 cancer-relevant papers that have generated 3361 citations (data from SCOPUS, retrieved November 1, 2012), relative to 55 papers on PNI and cancer, published in other peer review journals during the same time period. These bibliometric data highlight Brain, Behavior, and Immunity as a leading scholarly outlet for research on the biology of psychological and social experiences and the integrated mechanisms associated with cancer as a complex disease process. The current volume celebrates the 10-year anniversary of the 2003 supplement. This collection of invited reviews and research articles captures important discoveries, paradigm shifts, and methodological innovations that have emerged in the past decade to advance mechanistic and translational understanding of biobehavioral influences on tumor biology, cancer treatment-related sequelae, and cancer outcomes.

This paragraph can be interpreted in different ways. Brain, Behavior, and Immunity’s editorial board is a tight club of PNI researchers, many of them with NIH funding. The journal rarely publishes an article without positive findings, and a skeptical word about PNI is seldom seen. The journal could represent the success of NIH funding for PNI research, particularly from the NCI, or the poor reception of PNI research in the larger scientific literature. Outside the incubator, PNI mostly does not survive peer review.

In 2014, an NCI program officer placed solicitations on a listserv for free training in Cancer to Health (C2H), supported by an NCI R-25 grant. The evidence mustered for the R-25 application was the project from which the Cancer article came.

At a 2014 meeting of the Society of Behavioral Medicine, the NCI supported a symposium with an NCI program officer as discussant. The author of the article for which I have requested retraction renewed her claims of the powers of her intervention and solicited applications from professionals interested in free training with support from the NCI R-25 training grant.

In August 2014, the author of the Cancer article will fly to Groningen, the Netherlands, with another NCI program officer and again present her claims at the International Congress of Behavioral Medicine.

For political reasons, not science, the NIH is interested in keeping this line of research alive, even if it is on life support. The idea is that behavioral interventions can extend life, not just improve its quality, and studying exactly how requires not only investigator-initiated R01s but larger program grants. When the author of the Cancer article refused to respond to our critique, the task was accepted by Peter Kaufmann, Deputy Chief of the Clinical Applications and Prevention Branch of the National Heart, Lung, and Blood Institute (NHLBI) and, at the time his commentary was written, President of the Society of Behavioral Medicine.

As I note in my earlier post:


Subtitling his commentary “To Light a Candle,” Kaufmann conceded that my colleagues and I had raised valid criticisms about the design and interpretation of the C2H intervention trial. However, he took issue with our recommendation that clinical trials of this kind be suspended until putative mechanisms could be established by which psychological variables could influence survival. Quoting our statement that an adequately powered trial would require “huge investments of time, money, and professional and patient resources,” he nonetheless called for dropping a “preoccupation with mechanisms and secondary aims,” and instead putting the resources toward increasing the sample size and quality of an intervention trial.

Wow, so we should ignore the lack of evidence for biologically-plausible mechanisms. Kaufmann suggests we should pour what would have to be millions of dollars into a trial to test whether psychotherapy and support groups extend the lives of cancer patients.

The call for retraction

May 7, 2014

Fadlo R. Khuri, MD, FACP
Editor-in-Chief, Cancer

Dear Dr. Khuri:

This open letter formally requests retraction of a 2008 Cancer article:

Andersen, B. L., Yang, H. C., Farrar, W. B., Golden-Kreutz, D. M., Emery, C. F., Thornton, L. M., … & Carson, W. E. (2008). Psychologic intervention improves survival for breast cancer patients. Cancer, 113(12), 3450–3458.

According to the Committee on Publication Ethics (COPE), journal editors should consider retracting an article if

they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)

I call your attention to the basic data reported in the flow chart in Figure 2 of the cited article. Analyses that can be readily performed, but that are not provided in the article, directly contradict the claims stated in the title, amplified in the abstract, and repeated throughout the text. When simple 2×2 chi-square calculations are performed on the raw recurrence and death events from the figure for the intervention versus control group, differences do not approach significance for the proportion of women experiencing a cancer recurrence in the intervention (25.4%) versus the control condition (29.2%; Odds Ratio = 0.83, CI = 0.46–1.48, p = .525). There is no difference between the proportion of women who died in the intervention group (21.1%) versus the control condition (26.5%; Odds Ratio = 0.74, CI = 0.40–1.36, p = .332). Similar results are obtained if one examines only those deaths due to breast cancer.
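These calculations can be reproduced in a few lines. In the sketch below, the cell counts are back-calculated from the reported recurrence percentages, assuming 114 intervention and 113 control patients out of the 227 recruited; that reconstruction is mine, not the authors’:

```python
import math

# Recurrence table back-calculated from the reported percentages:
# 29/114 intervention (25.4%) vs 33/113 control (29.2%)
a, b = 29, 114 - 29   # intervention: events, non-events
c, d = 33, 113 - 33   # control: events, non-events

odds_ratio = (a * d) / (b * c)

# Pearson chi-square (no continuity correction), 1 degree of freedom
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p = math.erfc(math.sqrt(chi2 / 2))  # chi-square(1 df) survival function

# 95% CI for the odds ratio (Woolf's log method)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se) for s in (-1, 1))

print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}, p = {p:.3f}")
# Close to the figures cited above: OR = 0.83, CI 0.46-1.48, p ≈ .52
```

Nothing here approaches significance; the same procedure applied to the death counts yields the second set of figures above.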

My colleagues and I previously published a commentary in Cancer arguing that the authors’ data failed to support their conclusions.

Stefanek, M. E., Palmer, S. C., Thombs, B. D., & Coyne, J. C. (2009). Finding what is not there. Cancer, 115(24), 5612–5616.

Our commentary was originally submitted as a briefer letter. It was first rejected, with a previous editor citing a standing policy of not accepting critical commentaries if authors refused to respond. This policy essentially allowed authors to suppress criticism of their work, regardless of the validity of criticism. We asked the editor to re-evaluate the policy and reconsider the rejection of our commentary. The editorial board reconsidered and invited the extended commentary that was published. However, the authors still refused to respond, a choice which many would consider extraordinary.

Our criticisms would have been directly addressed if the authors had simply provided a report of positive findings for standard, unadjusted outcomes, such as a Kaplan-Meier estimate of the survival function. We indicated strong reasons to expect that such analyses would not be significant. Regardless of their choice not to respond to our letter, the authors had a responsibility to provide these data.

Instead of results for unadjusted outcomes, the authors provided in the article dubious multivariate analyses with a high risk of spurious findings. Both unadjusted and adjusted analyses should have been provided. When positive results are obtained with unadjusted outcomes, readers’ confidence in them is increased when the findings persist after adjustment for possible confounds. However, when positive results emerge only after adjustment and contradict the unadjusted analyses, they warrant special scrutiny. Interpretation of adjusted outcomes assumes complete specification of possible confounds and measurement without error, assumptions that are not typically tested or met. Analyses with adjusted outcomes are not necessarily more generalizable than those with unadjusted outcomes. On the contrary, positive findings with adjusted outcomes may variously reflect overfitting, residual confounding, incomplete specification and imperfect measurement of covariates, and covariate selection procedures that capitalize on chance.

The multivariate analyses presented in Andersen et al.’s article control for initial group differences in a number of potential confounds. This is remarkable because the authors also reported ‘‘no significant differences between study arms in sociodemographics, disease, prognostic factors, type of surgery received, or adjuvant treatments.”

The authors relied on a backward elimination procedure to select control variables from a larger pool of at least 15 candidate variables. Eight to ten of these factors were retained as covariates in analyses predicting time to recurrence, death from breast cancer, or death from any cause. Aside from capitalizing on chance, the number of covariates was simply too high relative to the number of events being explained (i.e., recurrences, breast cancer deaths, other deaths). Andersen et al. consistently violated the general rule that predictors should not be added to the equation unless the ratio of outcome events to predictor variables is at least 10:1. For instance, the final model for recurrence-free survival included 11 predictors for 62 events: a ratio of 5.6:1.
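The arithmetic behind the events-per-variable objection can be checked in a line; a trivial sketch, using the event and predictor counts cited above:

```python
def events_per_variable(n_events: int, n_predictors: int) -> float:
    """Events-per-variable (EPV) ratio for a regression model;
    the conventional rule of thumb calls for at least 10 events per predictor."""
    return n_events / n_predictors

epv = events_per_variable(62, 11)  # final recurrence-free survival model
print(f"{epv:.1f}:1")              # 5.6:1, well below the 10:1 rule of thumb
```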

You may ask why this request for retraction is occurring at this time. First, the authors have continued to cite this claim in other publications without acknowledging objections to the analyses. Second, this paper continues to be widely cited, specifically as evidence that psychosocial intervention prolonged survival, a claim that does not otherwise have support. Third, this particular paper served as the basis for federal funding to disseminate training in this intervention. The paper is specifically cited in solicitations for training at Professor Barbara Andersen’s Training Institute For Empirically Supported Biobehavioral Interventions for Cancer Patients.

Claims about time to recurrence and cancer-specific and all-cause mortality are claims about biomedical outcomes. I seriously doubt that such claims would go unchallenged had they concerned chemotherapy or radiotherapy and been based on flawed analyses of a small sample in a study in which survival was not a predesignated outcome. I believe that these authors’ claims should be held to the same standards as those for other biomedical interventions.

The American Cancer Society website has posted a statement:

The research is clear that support groups can affect quality of life, but the available scientific evidence does not support the idea that support groups or other forms of mental health therapy can by themselves help people with cancer live longer.

This is in response to widespread beliefs among patients and their families that persons with cancer can somehow boost their immune system, and thus improve their chances of survival, if they attend such groups.

The Andersen et al. Cancer article that lends credibility to these groups stands in direct contradiction to that statement by the ACS. Either the statement should be revised or the finding should be retracted.

Thank you for your consideration. I await your response with interest.

James C Coyne PhD
Professor Emeritus of Psychology in Psychiatry
Perelman School of Medicine of the University of Pennsylvania
Professor of Health Psychology
University Medical Center, Groningen



  • Dr. Coyne is Emeritus Professor of Psychology in Psychiatry, Perelman School of Medicine at the University of Pennsylvania, where he was also Director of Behavioral Oncology, Abramson Cancer Center and Senior Fellow Leonard Davis Institute of Health Economics. He also served as Professor of Health Psychology at University Medical Center, Groningen, the Netherlands where he taught scientific writing and critical thinking and coached a number of successful ERC, international, and Dutch grants. He was the 2015 Carnegie Centenary Visiting Professor at Stirling University, Scotland. Jim has written over 400 papers and chapters. In 2001, he was designated by Clarivate Analytics as one of the most cited researchers in the world. More recently, he was designated one of the 200 most eminent psychologists of the second half of the 20th century. Besides blogging at Science-Based Medicine, he blogs at Mind the Brain at and teaches a popular workshop, How to Write High Impact Papers and What to Do When Your Manuscript Is Rejected.
