How Do We Know "It Works?"

Dynamic Chiropractic – September 23, 1994, Vol. 12, Issue 20


By Mark Genero, DC
I have a jar of dollar bills in my study. Every time I hear a chiropractor say, "We know chiropractic works," I ask them, "How do you know it works?" If they answer, "Because I see the results in my practice every day," I put a dollar in the jar. It's getting awfully full.

Can we really "know" chiropractic works based on our practice results? What is the basis of this "knowing"? If I asked these chiropractors to set up a little experiment in their practice to demonstrate how they "know" it works, they might say something like the following.

"Let's see, why don't I identify every new patient who has mild essential hypertension, adjust them and see what happens? If they get better, then the adjustment must work, right?"

In fact, this is the very type of "experiment" most chiropractors run in their office on a daily basis. For "hypertension," substitute rash, fever, cold, asthma, back pain, headache, overall general well-being, etc. We adjust our patients and some/all/most get/seem to get better. This is the ongoing experiment that forms the foundation for the often heard statement: "We know it works. Why do we need science to tell us what we already know?" Unfortunately, it's not an experiment at all. It's an observation. Some chiropractors still insist that's all we need.

"I know it works," they say.

"How do you know it works?" I ask them.

"Because I see these kinds of results in my practice every day."

Put another dollar in the jar.

Are these observations enough to allow us to say with confidence, "It works?" The answer is no. The most we can conclude from these types of observations is that it might work, but based on these data, we can't say so with any degree of certainty.

The corollary is also true: "It might not work, but based on these data we can't say this either with any degree of certainty." Based on the above type of observations we could however make the following statement with a high degree of certainty: "We believe it works," or: "We believe it gets results." But this won't convince anyone in the coming health care reform.

Why can't we use our practice results as the only criterion to confidently conclude, "It works?" Let's take our group of mild essential hypertensives as an example. If we gather, say, 100 patients with mild essential hypertension, take their blood pressure, put them through a course of adjustments, and then measure their blood pressure again at the end, we might get results something like this:

# Patients   Change in Diastolic BP
    10       decrease >10 mm Hg
    10       decrease 5-9 mm Hg
    40       decrease 1-4 mm Hg
    10       no change
    10       increase >5 mm Hg
    20       discontinued care

We could look at these data and say that 60 percent of the subjects had a decrease in diastolic blood pressure after undergoing chiropractic adjustments. Does this prove scientifically that the adjustment is effective in lowering diastolic blood pressure? The answer? "No." "But why? The numbers seem so clear. It's just common sense, isn't it? I see these kinds of results in my practice every day."

Put another dollar in the jar.
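For what it's worth, the 60 percent figure is just a tally of the decrease rows in the hypothetical table. A minimal Python sketch, using my own reading of the table's row labels (the labels are assumptions for illustration, not part of the original data):

```python
# Hypothetical outcome counts from the article's illustrative table
# (100 patients with mild essential hypertension, after a course of adjustments).
outcomes = {
    "decrease >10 mm Hg": 10,
    "decrease 5-9 mm Hg": 10,
    "decrease 1-4 mm Hg": 40,
    "no change": 10,
    "increase >5 mm Hg": 10,
    "discontinued care": 20,
}

total = sum(outcomes.values())  # 100 enrolled
decreased = sum(n for label, n in outcomes.items()
                if label.startswith("decrease"))  # 10 + 10 + 40 = 60

print(f"{decreased}/{total} = {decreased / total:.0%} showed a decrease")
# 60/100 = 60% showed a decrease
```

Note that the tally alone says nothing about cause; that is the whole point of what follows.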

Let's assume first of all that the blood pressure recordings, which are notoriously variable from one doctor to the next, were accurate. Here are some of the reasons we cannot confidently draw any conclusions from these data.

  1. Did our test group receive any other therapeutic interventions over the course of this "experiment" that might have caused or contributed to these results? For example, did the chiropractor also counsel and start the patient on proper nutrition, weight loss, an exercise program, cessation of smoking, stress reduction, or decreased alcohol intake? Or did the patients do any of these on their own? These are common adjuncts in a chiropractic program, and every one of them has been shown scientifically to be effective in lowering blood pressure. Our observations don't allow us to sort all of these out. So we don't know which therapy, if any, might have caused the observed reduction.


  2. The dreaded placebo effect. It has been scientifically demonstrated ad nauseam that when the patient has a strong belief in the treatment, there is a significant bump in healing, whether the treatment has any real effect or not. If both the patient and the doctor believe strongly in the treatment, the bump in healing is even more pronounced. In other words, belief in the treatment alone can cause a measurable trend toward normalization (usually temporary, however). This happens even if the treatment is ineffective. Our observations do not have a built-in placebo control group to tell us whether our treatment produced more healing than would be expected from a placebo effect alone. Therefore we don't know if our results might have been totally or partially caused by a placebo effect.


  3. Do we have any idea what would have happened to the blood pressure of this group if we hadn't given them chiropractic treatment? We can guess that some of them would have gotten better, some would have stayed the same, and some would have gotten worse. What would the percentage be in each category? We don't know. It could be that our data merely reflect what happens naturally to mild essential hypertensives in the general population. After all, the body is its own powerful healer. Our observations don't have a control group to give us this information. So we can't say what would have happened if we hadn't adjusted them.


  4. Twenty percent of the people discontinued care and we don't know what happened to their blood pressure. All doctors have patients that discontinue care. If we could find out what actually happened to their blood pressure and add it into the data, it could make the results more or less attractive. We can assume it would be less attractive, but who knows?

Summing it up, it is possible that the people in this observational study who had decreased diastolic blood pressure:
  • "got better" because of the adjustment
  • "got better" on their own
  • "got better" as a result of some concurrent treatment
  • "got better" due to a placebo effect
  • any combination of the above

And what about the 10 people who had an increase in blood pressure? Someone could look at these data and say, "How do you know that the adjustment didn't cause some or all of the increased DBP readings?" We might argue that because 60 percent of the readings went down and only 10 percent went up, if the adjustment affected anything, chances are greater it caused a downward effect. But this is no more than a guess. We can't say with any degree of certainty because our observations aren't set up to give us that information.

Based on these data, a chiropractor, a nutritional advocate, an exercise physiologist, and a psychologist who teaches stress reduction might all equally claim their technique works on blood pressure. A medical doctor could also look at it and claim with equal validity that it was a placebo effect or that our subject's innate healing ability caused the reduction. Who is right? Our observations can't give us that answer.

It's easy to see how an advocate with a strong belief in their system of healing might be led to an erroneous conclusion when they base that conclusion solely on their practice observations. The issues we've discussed are actually only a few of the many sources of error that can lead us to draw erroneous conclusions from such observational, anecdotal evidence.

Fortunately, we can design true experiments that will address the above issues. A properly designed experiment can tell us with a great degree of certainty exactly which therapy is having an effect, what type of effect it's having, how much of an effect it's having, and whether that effect is significant or not. Why do we need to know this? We need to know so we can deliver the most effective, most efficient, and most cost-effective health care we can.

Therefore, the next time you hear anyone say, "We know it works" or "We know this technique gets results," ask them politely to explain how they know. And remember to put a dollar in a jar every time they answer: "Because I see the results in my practice every day."

So the other day I asked a cardiothoracic surgeon: "Why do you perform bypass surgery so much? Don't you know there are no studies that show this surgery results in overall decreased mortality? How do you know it works?" Guess what he said? I'll give you a hint. I put another dollar in the jar.

Mark Genero, DC
Wellesley, Massachusetts

