Evidence-based practice: this is a challenging topic. Our interactions with "the evidence" may not have been positive.
I appreciate those in our profession who call themselves evidence-based. I appreciate science. I appreciate our attempts to understand the world around us, and the mystery of human health and illness.
Let's start by looking at some of the dilemmas. I quote Charles Simpson, DC, who in his role as vice president of clinical affairs for The CHP Group, attempts to keep up with the evidence regarding complementary and alternative medicine (CAM):
"Here are some of the issues around research in manual diagnosis and treatment, as well as for other alternative approaches. The conditions most often seen are non-specific, e.g., diagnostic categories include many different and distinct clinical conditions. In addition, outcome markers are often very difficult to identify. Treatment is often multi-modal and stripping an intervention down to the 'active ingredient' fatally compromises the intervention. The relationship of the therapist to the patient is usually an essential element in the effectiveness of the treatment."
Personally, I get frustrated by the Cochrane reviews. By attempting to create a higher standard, Cochrane seems to consistently underrate manual and manipulative approaches. I suspect this is related to the factors that Dr. Simpson outlines above. In addition, as Dr. Simpson states, "Trials of manipulative therapy have been plagued by the difficulty in developing a 'sham' manipulation and concurrently controlling for the nonspecific effects of the hands-on practitioner-patient interaction with the theoretically inert sham treatment."
Our work does not lend itself to cut-and-dried clarity about the evidence. Consider one Cochrane document regarding infantile colic.1 Here's part of the conclusion: "Although five of the six trials suggested crying is reduced by treatment with manipulative therapies, there was no evidence of manipulative therapies improving infant colic when we only included studies where the parents did not know if their child had received the treatment or not."
Here is my take: This is an impossible standard. Let's kick the parents out of the room, making the baby anxious, and then see if we can still get a result.
How can we use the evidence? Let's recognize that we are going to pick and choose; we are going to look at the evidence that is called to our attention. And the reality is that human decision-making is an inherently skewed behavior. Look at Daniel Kahneman's Nobel Prize-winning work, as outlined in his book, Thinking, Fast and Slow. Our decision-making is driven by factors that we are barely aware of.
Three Common Tendencies Around the Evidence
Evidence is a challenging arena. How do we as a profession respond to this challenge? I recognize that we are all individuals, but I observe three common approaches, each with its own errors of judgment.
1. First, we have the evidence-based geeks who want to limit themselves to just those methods that have Cochrane or other third-party approval. They tend to get so rigid that they forget about treating the patient right in front of them, and they potentially forget to individualize care for the specific patient's needs.
2. Second, we have the true believers. In both chiropractic and PT (and I could throw in any form of alternative medicine), there are many practitioners, often not well-versed in science, who become true believers. They focus on one technique and totally believe the technique gurus. They are swayed by slim evidence and tend not to use critical thinking.
In the chiropractic profession, I see some who really want definitive guidelines. They tend to become "orthodox or fundamentalist" in using one technique. How black and white can we get? How clear cut? In my opinion, the world of spinal care is too complex for these answers to work consistently. Nonetheless, these folks publish their findings and insist that they have the way.
This second group, this true-believer tendency, can include scientists and researchers. Even the best clinicians and researchers start to believe their own jargon. They tend to read only the research that reinforces their beliefs, although they might say they only read the research that meets their standards. They tend to interpret research and evidence in a particular direction, and become narrowly focused.
I have noted that so many so-called research papers, especially those that attempt to reach a big conclusion, are basically the author's own opinion, buttressed by the research they like. I call this the "whose research" dilemma.
3. I'll call the third tendency the "head in the sand" people. They tend toward minimal continuing education and study. They might say, "What I learned in school is enough" or "I get great results; why should I change?" I don't know how to pique intellectual curiosity in those who don't have it in the first place.
Best-Practices Approach
Now, I'll get on my soapbox and tell you what I think a best-practices approach looks like. I think there is a middle way. Pay attention to the evidence. Be willing to change clinical behavior. Know that we are evidence-informed. Use an assess-treat-reassess model in your interactions with your patients. Individualize care. Pay attention to what is working for the patient you are treating. Continue to study and learn; a good day is one in which you learn something new.
Look at your own practice, especially if you have been in practice for a long time. How has it changed? For the elders, do you remember the state of spinal rehab from 15 or 30 years ago? Do you remember Williams' flexion exercises? At that time, the standard of care was flexion-based exercises for all lower back pain.
Use common sense. Pay attention. Know that your intuition is your knowledge and experience talking to you. Make your interventions as individualized, specific and targeted as possible. Use reality checks and functional testing; is the patient improving in an objective, documentable manner?
Have recent published articles in peer-reviewed journals influenced or changed how you practice? If not, you are behind the times and probably not taking advantage of the wonderful advances in our field. Take a look at the 2012 text Human Locomotion, by Thomas Michaud, DC. Michaud has used recent research to deepen our understanding of gait biomechanics and the use and function of orthotics, among other topics.
I know I will upset some of you, but I do not think it is possible or useful to claim to practice true evidence-based musculoskeletal and manual methods. I prefer the term evidence-informed. Use the evidence and observe the immediate responses in your office. We are blessed to work with conditions that tend to respond reasonably quickly.
I would say there is not enough strong evidence to guide us into a true evidence-based practice. Look at how others treat the lower back. A family doc probably would prescribe anti-inflammatories, muscle relaxers and pain meds. Not really up with the current evidence. A surgeon wants to know if there is a surgical condition and tends to over-rely on imaging. Again, the evidence shows us that the imaging does not correlate especially well with the condition. Back surgery for back pain is an iffy proposition, with way too many fusions being done. The interventional pain doc wants to find the one thing they can inject. Again, at best an oversimplification.
What should we do for a lower back patient? Our best interventions are likely to be multimodal. Let's outline three possible simultaneous approaches:
- Activate the patient's own self-healing response by explaining what we think is wrong and how we can help correct it. Others might call this placebo; I prefer "activate the self-healing response."
- Find exercises that work for that individual. The evidence, the research, is not exactly focused on finding what works for the one patient you are treating right now.
- As a chiropractor, you are going to be doing some kind of manual work. Whether you are doing soft-tissue and/or manipulation, you are going to be doing a mini clinical trial. Do they feel better right afterward? Do they feel better by the next visit? Your work is informed by the results on the patient you are treating. The evidence gives us guidance and direction. The evidence cannot make our day-to-day decisions for us in as complicated an arena as neuromusculoskeletal pain.
I think the evidence has told us that we need to divide our lower back patients into subgroups. A simple and relatively well-documented strategy, based on the McKenzie model, is to figure out which patients are going to respond to extension. Those patients should emphasize neutral posture and extension, should avoid crunches, and need to learn how to get out of a chair and avoid unconscious flexion. (A great educational website specifically for the flexion-intolerant lower back patient is Dr. Phillip Snell's www.fixyourownback.com.)
Another example: We know, based on many research studies, that the inhibition of key muscles caused by spinal pain creates functional instability. But the dominant chiropractic paradigm has been to look for fixation and hypomobility. Have you tried to incorporate this bigger model, looking for instability, into your practice patterns? If not, you are probably ignoring quite useful evidence and not helping your patients as much as you could.
The Mini Clinical Trial
It is very useful to make each clinical session a mini clinical trial. There is evidence that documented progress within a session is a good predictor of a positive outcome.2 This is dramatically different from doing the same thing over and over and expecting a different result. My colleague and teacher, Craig Liebenson, DC, emphasizes this approach in his rehab classes. He teaches the practitioner to find functional tests on which the patient performs imperfectly, and then show the patient a movement, an exercise they can do. He has them do several repetitions of the exercise and then repeats the functional test. This helps determine whether the exercise is useful to the patient. Secondarily, it is a powerful motivator for the patient, showing them that they can help themselves.
I tend to do a slight variant on this protocol. I use tenderness (hot spots) as my indicator. I mark the tender point with a marker, as I don't want to fool myself by being 3 mm off. I then either have the patient do an exercise or I perform a manual intervention, such as a joint mobilization or a soft-tissue technique. I then re-evaluate the tender point. The goal: to figure out what will really change the pattern.
I'll finish by recommending Bill Bryson's book, A Short History of Nearly Everything. Why read this? You can see how science has evolved and how our understanding of what is true has changed over the decades. Bill Bryson is so entertaining. Don't be afraid of the evidence. Embrace learning, welcome change, and continue to improve your knowledge and skills. You will never burn out if you keep learning and changing, and you will help more of your difficult patients.
References
1. Dobson D, Lucassen PLBJ, Miller JJ, Vlieger AM, Prescott P, Lewith G. Manipulative therapies for infantile colic. Cochrane Database of Systematic Reviews, 2012. Published online Dec. 12, 2012.
2. Hahne A, Keating JL, Wilson S. Do within-session changes in pain intensity and range of motion predict between-session changes in patients with low back pain? Australian Journal of Physiotherapy, 2004;50:17-23.