[Editor’s note: Dr. Gorski has, for the first time in a few years, managed to get the week between Christmas and New Year’s off from work at his day job. So he’s decided to take the week after Christmas off from blogging as well—although if something really egregious happens, he might not be able to resist writing about it as a “bonus post” here or on his not-so-super-secret other blog. (Probably not, though, as a break will recharge his blogging batteries, which are definitely depleted after a long and painful 2021.) This week, we’re fortunate to have a guest blogger whose work we’ve published before provide us with an update on dubious stem cell therapies. Dr. Gorski will, barring an inability to restrain himself, see you next year on January 3! Happy holidays and happy New Year!]
In many cases, flawed or misleading evidence is worse than no evidence at all. This is because the state of ignorance resulting from a lack of evidence is recognized as a state of ignorance, whereas the state of ignorance resulting from misleading evidence is not so recognized.
– Berger and Alperson, “A General Framework for the Evaluation of Clinical Trial Quality”
The premature promotion and sale of stem cell therapies by medical entrepreneurs has been a frequent topic on this site. I previously wrote a three-article series about US Stem Cell, a clinic in Florida that gained unwanted attention when reports emerged of patients being blinded after receiving stem cell therapy for macular degeneration: Part 1, Part 2, Part 3. Ultimately, the Food and Drug Administration (FDA), acting through the federal courts, put a stop to US Stem Cell’s particular stem cell concoction.
There is another stem cell operation in Florida worthy of a critical appraisal. Like US Stem Cell, they are using unproven stem cell therapies for diseases of the eye. Unlike US Stem Cell, this group is treating patients under the thin veneer of a clinical trial.
So-called “pay to play” clinical trials like SCOTS are funded by study subjects, who are required to pay thousands of dollars for the privilege of receiving “experimental” treatments. David Gorski of SBM deserves recognition for writing about this dubious practice, as does Jann Bellamy.
Bioethicist Leigh Turner has been quite vocal as well. His most comprehensive indictment, “ClinicalTrials.gov, stem cells and ‘pay-to-participate’ clinical studies”, is unfortunately behind a paywall. He has much to criticize regarding the investigators who run such studies, but he does not spare the governmental agencies responsible for the health and welfare of Americans.
Introducing SCOTS (Stem Cell Ophthalmology Treatment Study)
According to the listing on clinicaltrials.gov, SCOTS was launched in August 2012, with an estimated end date of 2020. It is listed as an interventional, parallel-assignment, non-randomized, open-label treatment study using autologous bone marrow derived stem cells to treat a variety of eye conditions (more on these details later). The target sample size is 300 total patients. Clinicaltrials.gov lists recruitment status as “unknown.”
The sponsor is MD Stem Cells, and there are 2 investigators: Steven Levy, MD is the Study Director, and Jeffrey Weiss, MD is the Principal Investigator. Both are listed as affiliates of MD Stem Cells.
Documentation elsewhere reveals that all study procedures, including harvesting of bone marrow (done by an orthopedic surgeon) and administration of the stem cells by Dr. Weiss, were done in Florida. Patients paid for the privilege of participating in this study, with a price tag of approximately $20,000.
High quality clinical trials are considered gold-standard evidence for the evaluation of clinical interventions. This article will focus on SCOTS, evaluating the strengths and weaknesses of the study, as well as the reliability of the data reported. I have read the clinicaltrials.gov entry, the MD Stem Cells website, and all of the publications from the SCOTS trial, and reviewed some stories in the lay press relevant to the study.
Is SCOTS a high quality clinical trial?
No, it is not a high quality trial!
The design of the SCOTS study abandons the qualities that confer gold-standard status on clinical trials. Missing are features like a coherent hypothesis, a well-defined study population, randomization, matched study groups, an appropriate control group, blinding, standardized measurement of critical variables, and adherence to pre-specified analysis methods. These design flaws make the study unsuitable to answer the questions it purports to address.
To make matters worse, the SCOTS investigators have chosen to publish their results in the form of case reports and small uncontrolled case series, representing curated subsets of patients, vacating the pretense of a clinical trial altogether (tricks used in other questionable, for-profit clinical trials).
Clinical trials usually limit enrollment to a single disease or a very narrowly focused category of conditions. In contrast, the SCOTS entry criteria cover an incredibly broad range of diseases. Virtually any condition of the retina or optic nerve that can cause mild to severe loss of visual function could be eligible. The criteria span broad spectra of genetic, degenerative, traumatic, vascular, and inflammatory conditions. The MD Stem Cells website currently says that they have treated over 47 different eye diseases (accessed 10/30/2021).
Such a broad set of eligible diseases raises concerns. It is optimistic to think that a specific treatment modality will be effective in such a broad spectrum of diseases. Distributing 47 different diseases in a trial of 300 patients makes it nearly impossible to sort out which conditions may be helped, harmed, or unaffected by a novel treatment.
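To put numbers on the dilution, a back-of-the-envelope calculation (assuming, optimistically, that patients are spread evenly across diseases and treatment arms, which real enrollment would not be) shows how thin the data get:

```python
# Illustrative arithmetic only: even spreading is an optimistic assumption.
patients = 300
diseases = 47
arms = 3

per_disease = patients / diseases          # ~6.4 patients per disease
per_disease_per_arm = per_disease / arms   # ~2.1 patients per disease per arm

print(f"{per_disease:.1f} patients per disease")
print(f"{per_disease_per_arm:.1f} patients per disease per treatment arm")
```

Roughly two patients per disease per arm is nowhere near enough to say anything about efficacy in any particular condition.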
If it were one’s intent to design a study whose results would be uninterpretable, it would be hard to do better than the SCOTS authors.
The SCOTS trial is identified as a parallel group design. This is what you probably imagine as a prototypical clinical trial: patients are enrolled and split into 2 or more groups (or “arms”), and members of each group are assigned to one of the experimental treatments or to a control group. At the end of the study the groups are compared. In order for the comparison to be valid, the groups should be as similar as possible, except for the assigned treatment. This is best achieved by randomly assigning patients to groups. Here the SCOTS authors have already deviated from best practice: in this trial the subjects are not randomly assigned to treatment groups. In fact, the groups are defined to be different from the start, with differing inclusion criteria, and there is no control group.
All the patients have stem cells harvested from their bone marrow. The cells are subsequently delivered back to the patient in 5 different ways:
- Retrobulbar (RB): stem cells are injected behind the eye.
- Subtenons (ST): stem cells are injected outside, but adjacent to the eye.
- Intravenous (IV): stem cells are given by intravenous infusion.
- Intravitreal (IVIT): stem cells are given by injection into the vitreous (inside the eye).
- Intraocular (IO): patients are given a vitrectomy (surgery to remove the vitreous humor) followed by the injection of stem cells into one or more of 3 locations:
- into the vitreous
- under the retina
- into the optic nerve
The IO group consists of 3 experimental treatment modalities, making for a grand total of 7 experimental interventions.
These 7 experimental stem cell treatments are distributed among three treatment arms in a non-random fashion. This method of assigning patients to treatment groups is described in the publication, but not disclosed on clinicaltrials.gov. Assignment is based on a variety of criteria, including vision, disease stage, visual field, and medical status. The criteria are vague and somewhat subjective. Because these groups are mismatched from the start, it will be nearly impossible to sort out the treatment effect from the baseline differences.
Here is how stem cells are administered among the 3 treatment groups. Note the absence of a control group.
Group 1: RB + ST + IV
Group 2: RB + ST + IV + IVIT
Group 3: RB + ST + IV + IO
So, in the end we have 300 patients with 47 different diseases subject to 7 experimental treatments among 3 treatment groups without a control group. What could possibly go right?
This is not good science. One can’t identify treatment effects when sorting 7 experimental treatments among 3 treatment groups. If, for instance, IV stem cells turned out to be a “magic bullet”, the effect would never be detected, because it is common to every group. The design would likewise preclude discovery of a harmful effect of RB, ST, or IV. Mixing in 47 different diseases just adds to the confusion. The various permutations of interventions and diseases plant a fertile orchard for the harvest of cherry-picked data.
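The “magic bullet” point can be made concrete with a toy simulation. All the numbers here are invented for illustration, as is the assumption that the other interventions are inert: a genuine benefit delivered to every arm leaves no trace in between-arm comparisons.

```python
import random

random.seed(0)

def simulate_arm(n, iv_effect, extra_effect):
    # Change in vision = IV effect (common to all arms) + arm-specific
    # extra effect + measurement noise. Purely hypothetical numbers.
    return [iv_effect + extra_effect + random.gauss(0, 1) for _ in range(n)]

iv_effect = 5.0  # invented: a large benefit from IV stem cells alone

g1 = simulate_arm(100, iv_effect, 0.0)  # RB+ST+IV
g2 = simulate_arm(100, iv_effect, 0.0)  # RB+ST+IV+IVIT (IVIT assumed inert)
g3 = simulate_arm(100, iv_effect, 0.0)  # RB+ST+IV+IO  (IO assumed inert)

mean = lambda xs: sum(xs) / len(xs)

# Every patient improved by ~5 units, yet the between-arm differences
# hover near zero, so the design cannot attribute the benefit to IV:
print(f"arm 1 vs arm 2: {mean(g1) - mean(g2):+.2f}")
print(f"arm 1 vs arm 3: {mean(g1) - mean(g3):+.2f}")
```

The same logic applies in reverse: a harm common to RB, ST, or IV would be equally invisible to comparisons among these three arms.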
Visual acuity in the SCOTS trial
SCOTS uses visual acuity as the primary measure of success or failure. This is an appropriate parameter for the diseases being studied, but vision is a deceptively difficult thing to measure. In prospective trials using visual acuity as a critical endpoint, investigators usually maximize efforts to standardize measurement. Standard charts are used at a standard distance under standardized lighting conditions. Patients are refracted to ensure that optimized vision is measured at each visit. Even with all these protections, visual acuity is a somewhat “noisy” variable. It depends on testing conditions as well as patient motivation and attention. The interaction of the subject and the tester can influence measured visual acuity. If at all possible, vision testers should be blinded to the treatment of the subject they are testing. In many trials, visual acuity testers follow a standard script to minimize bias.
The SCOTS study reports change in vision as the primary outcome parameter, a common practice in clinical trials. This requires valid vision measurement at 2 points in time. It is important to specify not just how but when vision will be tested. Is it the change from baseline to 6 months? Baseline to 12 months? If it is 12 months, what if the patient misses that visit? Will the data from that subject be excluded? Will the 6-month data be carried forward? All of these decisions should be pre-specified and applied consistently in the data analysis.
In the SCOTS study it was impossible to standardize the measurement of vision. Patients had baseline vision measured by the SCOTS investigator. For the follow-up visits, allow me to quote from one of their publications:
As the patients came from distant geographical locations, postoperative examinations at 1, 3, 6, and 12 months were performed by the patient’s local eye physician.
In other words, the measurement of post-treatment visual acuity was completely outside the control of the investigators. We will see later that they sometimes used preoperative vision that was also measured outside the investigators’ control. This is a major problem, since change in vision is the primary outcome variable in the study. It is also a recipe for lots of missing data and has great potential for bias. Patients who perceive that they are doing well may be more motivated to have their “local eye physician” send data back to the study investigators. Investigators might be more motivated to solicit data from patients whom they expect to be doing well.
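A toy simulation makes the selective follow-up concern concrete. Under the invented assumptions that the treatment does nothing at all and that patients who are doing better are more likely to return data, the reported subset shows an apparent benefit from an inert treatment:

```python
import random

random.seed(1)

# Hypothetical model: the treatment is inert, so each patient's true
# change in vision is pure noise around zero (units are arbitrary).
true_changes = [random.gauss(0, 10) for _ in range(300)]

# If every patient were followed up, the observed mean change is ~0.
all_mean = sum(true_changes) / len(true_changes)

# Invented selection model: the better a patient is doing, the more
# likely their data are sent back (a crude logistic weighting).
reported = [c for c in true_changes
            if random.random() < 1 / (1 + 2.718 ** (-c / 5))]
reported_mean = sum(reported) / len(reported)

print(f"mean change, all patients:      {all_mean:+.1f}")
print(f"mean change, reported patients: {reported_mean:+.1f}")
```

The gap between the two means is manufactured entirely by who reports back, not by any treatment effect.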
The main publication of a clinical trial of this design is generally a comprehensive report of the primary analysis of the between-group comparisons, focusing on the primary outcome variable. In SCOTS, that would be a comparison of the change from pre-treatment to post-treatment vision among the 3 treatment arms (approximately 100 patients in each group). A report like this should account for all the patients. How many were enrolled? How many completed the study? How many withdrew? How many missed the visit during which post-treatment vision was measured? How many died before completing the study? It would also give a detailed account of the analysis. How many patients contributed to the results? How were missing data addressed? To date, no such report has emerged from SCOTS.
Instead, the SCOTS investigators have made the unusual decision to publish results as ad hoc disease-specific case reports and case series. This mode of reporting completely undermines the pretext for performing a controlled clinical trial.
The SCOTS study is said to have begun August 2012. The planned enrollment was 300 patients with expected completion July 2020. The clinicaltrials.gov listing was last updated Oct 23, 2019 (accessed Oct 30, 2021), with recruitment status listed as “unknown”. Per email communication with the Study Director, the study is closed to enrollment, and they are now enrolling in SCOTS2.
The first publication from SCOTS, which appeared in 2015, was a case report of a single patient. The next two papers, appearing in 2015 and 2016, were also single-patient case reports. All were in open-access non-ophthalmology journals. It is worth mentioning that among the 3 case reports were numerous violations of the Health Insurance Portability and Accountability Act (HIPAA), including publication of a patient’s date of birth. This is a major breach of patient confidentiality, a violation of the standards demanded by every Institutional Review Board (IRB), and a contradiction of the standards the investigators asserted they followed.
In total I have found ten publications from SCOTS, reporting on a total of 79 patients. Each report covers a small set of patients with a specific disease or narrow spectrum of diseases. We don’t know how many subjects were actually enrolled, but the target was 300, so 79 patients is likely a small subset. In more recent reports, they have been combining patients from SCOTS (potentially 300 patients) and SCOTS2 (potentially 500 patients), so current reports account for selected patients from a pool of up to 800. No report has accounted for the missing patients.
Publishing single-case success stories and small case series from an ongoing comparative trial is antithetical to the purpose of a clinical trial. These case reports and case series are linked on clinicaltrials.gov and within the MD Stem Cells website. This is quite questionable from an ethical perspective. It could be misleading and unduly coercive for potential study participants to see highly curated successes cherry-picked from an ongoing clinical trial of experimental treatments.
Clinicaltrials.gov is thin on the details of the conduct and analysis of the study. We learn more from the publications, where numerous irregularities and inconsistencies are found. Many questions remain unanswered.
More visual acuity questions
The inclusion criteria for the study discuss vision as “best corrected” visual acuity. Best corrected means that a refraction was done; in other words, a tedious process of finding the optimal set of lenses for the patient to use when reading the eye chart. This is important because we want to know the level of the patients’ vision due to their retina or optic nerve condition, not because they forgot their glasses or have glasses that are not optimal. This is standard procedure for trials using vision as a primary endpoint. But in the publications, baseline vision is listed as “PH”, which is an abbreviation for “pinhole”. Pinhole vision is a quick and dirty way of approximating best-corrected vision, but it is not a suitable substitute when a protocol specifies best-corrected visual acuity. This contradiction of specifying best-corrected vision in the entry criteria and reporting PH vision in the results of the same paper is repeated in several publications. Sloppy measurement of baseline vision could allow inclusion of ineligible patients, and it biases results toward improved vision if vision is measured more carefully at follow-up visits.
This contradiction is later partly explained, but the explanation does not make the situation better. I assumed that baseline vision was what was measured by the investigator before the stem cell intervention. There would be no reason to think otherwise, except that in some of the papers they explicitly state that they used “historical” vision measured by some outside provider before the patient was enrolled. In these cases, measurement of the primary outcome variable was outside the investigators’ control at both critical timepoints: baseline and follow-up.
In one paper, they confessed to using different sources of baseline vision based on what made the data look better. This was in a six-patient case series. In the authors’ own words:
In the remaining patient (#5) the visions were potentially minimally decreased when compared to immediate preop vision in the Principal Investigator’s examination lane. However the post-operative visions which were obtained in his primary ophthalmologist’s examination lane were unchanged when compared to preoperative visions obtained in the same lane. Therefore the visions were ultimately judged to be unchanged.
They appear to use historical visual acuity data in some of their papers, PI-measured visual acuities in others, and a mix of both in at least one paper. This is p-hacking at the most granular level, and the quote above is a startling confession.
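The bias introduced by choosing whichever baseline flatters the result can also be sketched with a toy simulation. Everything here is invented: an inert treatment, and two equally noisy baseline measurements per patient (a “historical” one and an “investigator” one) drawn around the same true vision.

```python
import random

random.seed(2)

def patient():
    # Hypothetical patient: true vision change is zero; each measurement
    # (two baselines, one follow-up) adds independent noise.
    true = 0.0
    historical = true + random.gauss(0, 2)
    investigator = true + random.gauss(0, 2)
    followup = true + random.gauss(0, 2)
    return historical, investigator, followup

patients = [patient() for _ in range(100)]

# Honest, pre-specified analysis: always use the investigator baseline.
honest = [f - i for h, i, f in patients]

# Post hoc analysis: for each patient, pick whichever baseline makes
# the change look better (i.e. the lower of the two baselines).
flattering = [f - min(h, i) for h, i, f in patients]

mean = lambda xs: sum(xs) / len(xs)
print(f"honest mean change:     {mean(honest):+.2f}")
print(f"flattering mean change: {mean(flattering):+.2f}")
```

Picking the baseline after seeing the data systematically inflates the apparent improvement even when the treatment does nothing, which is why analysis rules must be pre-specified.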
There are many other irregularities in how visual acuities were reported and analyzed. These irregularities are too esoteric to discuss in detail, but they raise serious questions about the investigators’ ability to perform valid, unbiased analyses of their own data.
Conclusion: Incoherent design, haphazard conduct, selective reporting, uninterpretable results
Due to the incoherent design, the haphazard conduct, the flawed analysis, and selective reporting of results, SCOTS is not a reliable source of information about stem cell therapies for ocular disease.
Now that the investigators have completed a study with unreliable, but self-serving results, what do they do next?
Yes, just when you thought it was safe to seek medical care in Florida comes SCOTS2. Clinicaltrials.gov lists SCOTS2, featuring the same investigators, now with an additional site in Dubai. SCOTS2 is virtually identical to SCOTS, the main difference being a bit more elaboration about Group 3. This seems to be documentation of the unwritten practices of the original SCOTS. SCOTS had a sample size of 300 patients (at about $20,000 per subject); SCOTS2 has a sample size of 500 subjects (at approximately $20,000 per subject). Estimated starting and ending dates of SCOTS2 are 2016 and 2024, respectively.
There are many questions I would like to ask the SCOTS investigators. Here are just a few.
- Was SCOTS optimally designed to establish the safety and efficacy of any of the 7 investigational stem cell treatments for ocular diseases?
- Where are the data regarding the other 200+ SCOTS patients?
- Did SCOTS adequately establish the safety and efficacy of stem cell treatment for ocular disease?
- If “no,” how do you justify doing an additional clinical trial using a failed study design?
- If “yes,” how do you justify doing an additional clinical trial for a question that has already been answered?