
Dynamic Chiropractic – October 24, 2005, Vol. 23, Issue 22

Experimental and Investigational: Evidence-Based Practice or Just an Excuse to Deny Benefits?

By Robert Mootz, DC

"Honest, all my patients get results with this procedure," says Dr. Jones to the claim manager at Amalgamated Health Insurance, Inc. "Well, the guideline says the procedure is investigational, therefore, we can't pay for it," replies Claim Adjuster Smith.

"Investigational! That's absurd," retorts Dr. Jones. "Why, not only has it been proved by me personally, time and time again on my own patients, but the FDA has approved it! And not only that, the CPT editorial committee has approved a billing code for it! And do you realize they only ever issue codes for procedures that have been validated by an 11-member panel of medical experts!" To which Adjudicator Smith offers a hasty comeback, "Well, my dear Dr. Jones, the guideline we use was developed by the American Academy of Really, Really Expert Physicians and Therapists (AARREPT), which included comprehensive multidisciplinary review by over 23 internationally renowned experts." And so it goes in the modern world of dueling health care experts...

You Mean the Experts Aren't Always Right?

Board-certified internists show a significant drop in their medical knowledge after only four years of practice, and two-thirds of them could not pass a current board examination after 15 years in practice.1,2 Is it likely DCs could do any better? Last year, the National Library of Medicine added over 10,000 new articles to its databases every single week. Our friends the internists need to digest about 20 articles daily just to stay current in their field. Fortunately, chiropractors can stay fairly current by reading just nine issues of JMPT every year. Well, that and starting up a journal club and checking out a few other spine, physical medicine, and CAM journals on a regular basis. Superimposed on this knowledge decay and information overload are variations related to doctors' practice preferences and biases, not to mention differences in training. Is it any wonder expert opinion is so variable? I suspect it's possible that someday, chiropractic experts might even disagree about something.

What? FDA Approval Isn't Good Enough?

You may recall that the drug rofecoxib (Vioxx) was pulled off the market last year following reports that patients taking it had a twofold higher rate of cardiovascular events. But did you know that reports of higher cardiovascular complications appeared in the medical literature three years before the U.S. Food and Drug Administration acted?3 You also might not be aware that although rigorous standards of evidence of clinical effectiveness are required for FDA approval of drugs, such is not the case with medical devices.4 They need only be demonstrated to be "safe" and to do physically what is claimed (e.g., put out the advertised amount of voltage). Another way a medical device can be approved is to persuade the agency that the new device technically works like a previously approved device. In either case, claims of clinical value - i.e., clinical effectiveness - do not factor into FDA approval of medical technology. No clinical trials required. And there is no government review of medical procedures, per se, at all. I know, it's shocking to learn that the government isn't perfect, either.

Surely, Coverage Decisions by Insurers Really Are Only About Denying Care to Save Money, Right?

Insurance companies don't fare any better in the expert arena, either. But the insurance business is a lot more complex than simply saving money. In fact, since profit margins arise as a percentage of total business, insurance companies can actually have an incentive for high overall medical expenditures: the same margin applied to a larger pool of premiums and claims yields more dollars of profit. However, this is attenuated by what the market can tolerate for premium costs. Needless to say, insurance companies rely on "experts" as frequently as the rest of the system, with all of the same inherent limitations.

Payers typically use three "tools" to decide whether or not a health care service is paid for: benefit package limitations, medical necessity determinations, and coverage decisions. A benefit package limitation is simply a cap on the amount of services they will buy; for example, 10 visits of physical medicine (or some dollar amount) per year. There also are many strategies to package benefits by provider type. Medical necessity determinations are more individualized, and basically entail documenting whether the requested health care service is appropriate for that particular patient. The issue here is not whether the procedure has research supporting it; it's whether it's appropriate given the specific circumstances of the case. Preapprovals for surgeries, extended care, advanced imaging, and the like all fall into this category.

Coverage decisions, however, are made at an overall population level, and thus rely heavily on the state of the medical and scientific literature and the resultant "dueling experts'" opinions about that literature. This is where the whole issue of "experimental" and "investigational" comes into play. Increasingly, once a coverage decision is made, someone at the claim-manager level is unlikely to be able to make an exception. Rather, it's a policy-level decision, and the kinds of arguments made by Dr. Jones will likely have little success in getting a service paid for.

When high-quality scientific literature (e.g., well-designed clinical trials and large population studies) exists that clearly shows something is effective or ineffective, the decision to buy it or not is easier to make. However, when only lower-quality studies exist (such as case reports, case series, or expert opinion), it becomes much more difficult to make a coverage decision. Additionally, an intervention may have a number of higher-quality studies published whose results conflict with one another. When definitive literature doesn't exist, or when expert opinion is in conflict, the label "investigational" gets used. Sometimes, effectiveness is demonstrated for something that doesn't really matter, such as an intermediate physiological measure that doesn't correlate with a meaningful outcome. An example might be radiographic demonstration of successful fusion after surgery in a patient who still can't move and has pain. Another might be improved range of motion in a patient who is still dependent on care for palliative relief and can't return to work.

When the literature is indeterminate or in conflict, payers frequently will decide not to cover something, particularly if it carries high risk or is expensive. Often, if only lower-quality evidence is available but it consistently suggests value and expert opinion agrees, an intervention may be covered (again, provided it is not overly expensive or of high risk). As you might expect, there is a lot of variability across insurers about what to cover. Increasingly, government purchasers and larger payers are making their coverage decisions publicly available, along with the information those decisions were based on.

Health-quality initiatives are increasing the sophistication with which scientific literature is considered, and both payers and doctors need to know more about what's "under the hood." There is still no shortage of dueling experts; however, their influence is coming under much greater scrutiny. Just because a medical director of an insurance company or a trade association's panel of experts says so doesn't necessarily make it so. The arguments these days center on the processes used to appraise the literature, the quality of that literature, the relevance of the findings to the patient population to be covered, and whatever interests the various "experts" may have in the process.

References

  1. Chassin MR. Is health care ready for Six Sigma quality? Milbank Q 1998;76(4):565-91,510.
  2. Chassin MR, Galvin RW. The urgent need to improve health care quality. Institute of Medicine National Roundtable on Health Care Quality. JAMA 1998;280(11):1000-5.
  3. Mukherjee D, Nissen SE, Topol EJ. Risk of cardiovascular events associated with selective COX-2 inhibitors. JAMA 2001;286(8):954-9.
  4. Ramsey SD, Luce BR, Deyo R, Franklin GM. The limited state of technology assessment for medical devices: facing the issues. Am J Manag Care 1998;4:SP188-199.

Robert Mootz, DC
Associate Medical Director for Chiropractic,
State of Washington Department of Labor and Industries
Olympia, Washington


