
Dynamic Chiropractic – February 26, 2009, Vol. 27, Issue 05

It Ain't Over 'Til It's Over

Taking RCTs With a Grain of Salt

By Anthony Rosner, PhD, LLD [Hon.], LLC

The topic (and title) of today's sermon is drawn from the immortal quotation by baseball icon Yogi Berra. Back in the '60s, I had the fortunate opportunity to hear the famed science fiction writer Isaac Asimov, who offered an opinion as to why we do experiments to confirm imaginative and sometimes brilliant hypotheses: "To convince the idiots," he proposed. The same type of thinking might be applied to randomized clinical trials (RCTs), which sometimes display results in mid-course so dramatic that it becomes unethical to continue the investigation.

Such results may be either beneficial or adverse, telling us in either case that the trial no longer satisfies the rule of clinical equipoise and therefore must be halted. In other words, it is no longer an open question whether the experimental intervention confers benefit or harm.

This raises an intriguing ethical and logistical question: When exactly should one pull the plug on a clinical trial? And who exactly should be the one to call "game over"? In the past, authors have warned trial monitors against catching outcomes at a "random high" and snuffing out the party before a regression to the mean sets in with more conservative and possibly neutral outcomes.1 Had follow-up continued long enough, the true outcome could have proved more disastrous and costly than a simple slip on a banana peel.
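To see why a "random high" is so seductive, consider a minimal simulation sketch of my own (not drawn from the cited papers): trials with a modest true effect are peeked at during naive, unadjusted interim looks, and only the flattering peeks trigger an early stop. The effect size, look schedule and threshold below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    TRUE_EFFECT = 0.2            # assumed modest true standardized mean difference
    LOOKS = [50, 100, 150, 200]  # patients per arm at each look; the last is the planned end
    NAIVE_Z = 1.96               # unadjusted threshold applied at every interim look
    N_TRIALS = 5000

    early_estimates = []
    for _ in range(N_TRIALS):
        treatment = rng.normal(TRUE_EFFECT, 1.0, LOOKS[-1])
        control = rng.normal(0.0, 1.0, LOOKS[-1])
        for n in LOOKS[:-1]:                   # peek at the interim looks only
            diff = treatment[:n].mean() - control[:n].mean()
            z = diff / np.sqrt(2.0 / n)        # standard error of a difference in means
            if z > NAIVE_Z:                    # "random high": stop and report this estimate
                early_estimates.append(diff)
                break

    print(f"true effect:                      {TRUE_EFFECT:.2f}")
    print(f"mean estimate when stopped early: {np.mean(early_estimates):.2f}")
    print(f"fraction of trials stopped early: {len(early_estimates) / N_TRIALS:.2%}")

Under these assumptions, the average estimate among the early-stopped trials lands well above the true 0.2, which is precisely the inflation the early-stopping literature warns about.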

A thought-provoking treatise by Montori and colleagues, published within the past few years, brought this entire question up for discussion.2 If a trial ends early because of a perceived adverse effect of the experimental intervention, the intervention in question can be expected to fade away or be dropped altogether from further consideration. However, if the trial is capped because of a whopping beneficial effect, the intervention tends to find quick approval and dissemination. For a number of reasons, however, this so-called panacea may turn out to be fool's gold.

The Montori study found that of the 143 RCTs the investigators systematically reviewed, 92 appeared in five high-impact medical journals, many evaluating cardiovascular or cancer interventions and funded by for-profit agencies. Because the trials were stopped early, planned sample sizes and follow-up provisions went by the boards. The treatment effects were implausibly large and inversely related to the number of events reported. The most damning evidence was that only eight (5.6 percent) of these trials reported all four key methodological elements demanded of thorough investigations:

  1. planned sample size;
  2. interim analysis after which the RCT was stopped;
  3. stopping rules used to guide the decision (a sketch of such a rule appears after this list); and
  4. adjusted estimates for interim analysis and early stopping.
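For concreteness, here is a minimal sketch of what a pre-specified plan covering the four items above might look like. It is my own illustration, not the design of any trial in the Montori review; the sample sizes and thresholds are assumptions, and the conservative interim boundary follows a Haybittle-Peto-style rule (stop at an interim only for p < 0.001, keeping the final analysis near the conventional level).

    import numpy as np
    from scipy import stats

    # Hypothetical pre-specified monitoring plan -- the numbers are illustrative only.
    PLANNED_N_PER_ARM = 400            # item 1: planned sample size
    INTERIM_LOOKS = [100, 200, 300]    # item 2: scheduled interim analyses (patients per arm)
    INTERIM_BOUNDARY = 0.001           # item 3: Haybittle-Peto-style stopping boundary
    FINAL_BOUNDARY = 0.048             # item 4: final threshold, adjusted for the interim looks

    def monitoring_decision(treatment, control, is_final_look):
        """Apply the pre-specified boundary for this look and report the decision."""
        _, p_value = stats.ttest_ind(treatment, control)
        boundary = FINAL_BOUNDARY if is_final_look else INTERIM_BOUNDARY
        return ("stop" if p_value < boundary else "continue"), p_value

    # Example interim look at 100 patients per arm, assuming a modest true effect of 0.2 SD.
    rng = np.random.default_rng(1)
    treatment = rng.normal(0.2, 1.0, INTERIM_LOOKS[0])
    control = rng.normal(0.0, 1.0, INTERIM_LOOKS[0])
    decision, p = monitoring_decision(treatment, control, is_final_look=False)
    print(f"interim p = {p:.4f} -> {decision}")   # a modest effect rarely clears 0.001 this early

Under a rule of this kind, an early stop requires overwhelming evidence, and the resulting estimate still deserves an adjusted, cautious interpretation.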

With all of these omissions and shortcomings, the incidence of RCTs truncated for presumed beneficial effects of the intervention is increasing. Perhaps the driving force is the added publicity and cost savings for funding agencies and investigators. Having just gone through a major presidential election, we can easily envision a glaring analogy: CNN and other television networks called the election results hours (and even days) before the hard ballots were actually tabulated. Indeed, the election was called at the moment polls closed on the West Coast. You couldn't help but wonder whether the rare result could emerge in which the opposite candidate actually won, just as Harry Truman emerged victorious in 1948 after a night in which the Chicago Tribune, among other sources, had declared Thomas Dewey the winner of the presidential election.

This is where blinding in one critical area, the monitoring of outcomes, becomes all the more crucial. Quite simply, neutral outcomes assessors with no conflicts of interest regarding outcomes or recruitment are required to provide sober and continuous monitoring of clinical trials, calling a halt only when there is adequate statistical justification for doing so. This issue came up at a recent conference of the Interdisciplinary Network for Complementary and Alternative Medicine in Toronto, at which I was a presenter. It emerged from a discussion with a network of homeopathic providers under the direction of David Brule.

This cautionary tale raises the larger issue of having to rein in the overexuberant use and interpretation of randomized clinical trials. For starters, James Weinstein, as editor-in-chief of Spine, has suggested establishing a National Clinical Trials Consortium (NCTC).3 The proposal responds to the pressures created by the ever-increasing fraction of trials supported by industry and rushed pell-mell toward swift FDA approval so that a device or drug can be brought to market. As a counterbalance, the NCTC would be made up of physicians, surgeons and their PhD colleagues, with oversight from independent professional and specialty societies as well as the public. Its role would be to give such trials greater face validity, less bias and less susceptibility to conflicts of interest.

On a broader scope, regarding the design of the investigation itself, Downs and Black have offered what is arguably the most comprehensive checklist for assessing the quality of a clinical trial.4 Much like a Christmas shopping list, that roster would include, among other variables to be assessed (a simple scoring sketch follows the list):

  • hypothesis/objective
  • main outcomes
  • characteristics of patients
  • interventions of interest
  • distribution of confounders
  • main findings
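
As a rough illustration of how such a checklist can be put to work, the sketch below scores a trial report against the six items above, one point each. It is a deliberately simplified stand-in of my own devising; the actual Downs and Black instrument is considerably longer and uses more nuanced scoring.

    # Hypothetical, simplified checklist scorer; not the real Downs and Black instrument.
    CHECKLIST_ITEMS = [
        "hypothesis/objective clearly described",
        "main outcomes clearly described",
        "patient characteristics clearly described",
        "interventions of interest clearly described",
        "distribution of confounders described",
        "main findings clearly described",
    ]

    def score_trial_report(report: dict) -> int:
        """Award one point for each checklist item the trial report satisfies."""
        return sum(1 for item in CHECKLIST_ITEMS if report.get(item, False))

    # Hypothetical trial report that documents only the first four items.
    example_report = {item: True for item in CHECKLIST_ITEMS[:4]}
    print(f"quality score: {score_trial_report(example_report)}/{len(CHECKLIST_ITEMS)}")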

The even broader question (cutting to the chase) is whether the RCT is capable of fully answering questions regarding treatment effectiveness in alternative medicine. This concern has given rise to proposals for whole-systems research, which have been extensively discussed elsewhere.5,6 For the time being, beware of the excesses that creep in when projecting from experimental results how patients will respond to a given clinical treatment. In other words, handle RCTs with care and sobriety.

References

  1. Pocock S, White I. Trials stopped early: too good to be true? Lancet 1999;353:943-4.
  2. Montori VM, Devereaux PJ, Adhikari NKJ, et al. Randomized trials stopped early for benefit: a systematic review. JAMA 2005;294(17):2203-9.
  3. Weinstein JN. An altruistic approach to clinical trials: the National Clinical Trials Consortium. Spine 2006;1(1):1-3.
  4. Downs SH, Black N. The feasibility of creating a checklist for the assessment of methodological quality both of randomized and nonrandomized studies of health care interventions. J Epidemiol Community Health 1998;52:377-84.
  5. Hawk C, Khorsan R, Lisi AJ, et al. Chiropractic care for nonmusculoskeletal conditions: a systematic review with implications for whole-systems research. J Altern Complement Med 2007;13(5):491-512.
  6. Verhoef MJ, Lewith G, Ritenbaugh C, et al. Complementary and alternative medicine whole systems research: beyond identification of inadequacies of the RCT. Complement Ther Med 2005;13:206-12.


