A vivid memory from my chiropractic college days is a guest lecture on technique. The year was 1972; the doctor delivering the lecture was a well-known "expert from afar." It was made clear that we were privileged to have such a distinguished clinician share his experiences with us.
1. He was rich. His slide show was sprinkled with photographs of his twin-engine airplane and fleet of Mercedes-Benz automobiles. The message was simple: "If what I do didn't work, patients wouldn't come to me and I wouldn't be rich."
2. He knew it worked! The "it" was the specific procedure being promoted. What did he mean by "worked"? See item 1.
I left feeling buffaloed and bewildered, but the applause from the students and faculty was deafening. I was going to ask a few questions, but realized nothing productive was likely to come of it. Apparently, this was the nature of chiropractic "research" back then.
In 1975, I was privileged to be one of 16 chiropractors selected to participate in the NINCDS Workshop on the Research Status of Spinal Manipulative Therapy, conducted under the auspices of the National Institutes of Health.1 Distinguished basic scientists, medical and osteopathic clinicians were present. The objective was simply to determine what was and was not known in the basic and clinical sciences. MDs, DOs and DCs were speaking to one another. There was genuine interest in chiropractic philosophy and techniques. I saw a glimmer of hope.
That said, we still have a long way to go. The blustering and buffoonery of nearly 40 years ago is alive and well.
Toward a Solution
There is no "quick fix." Fortunately, there are strategies that address these issues. Such a plan should be multifaceted:
Chiropractic college education. Education at the first professional degree level should emphasize critical thinking. Further, it should equip students with a working knowledge of research methods and clinical epidemiology. The "research ideal" should be presented in the context of its value in establishing "real-world" clinical strategies consistent with chiropractic philosophy.
Continuing-education programs. Field practitioners should be afforded opportunities to learn the rudiments of research methods. To make such courses appealing, practical applications to clinical practice should be emphasized; for example, ways to evaluate analytical techniques and adjusting procedures.
Field involvement in clinical research. Few institutionally based research programs involve active field participation. Some "brand-name" techniques make dubious claims based upon faulty research designs. Field practitioners and promoters of equipment and techniques should be encouraged to actively participate in well-designed, institutionally based clinical research projects.
Financial commitment to research. Chiropractors have often sought political and legal solutions to research problems. We have a rich heritage and have made tremendous progress in achieving licensure, insurance equality, and other triumphs. But as we deal with the challenges of the 21st century, the scientific community and health care consumers are going to demand that we "put up or shut up" when making claims.
The "high rollers" in our profession should give serious consideration to contributing generously to chiropractic research. So should any DC interested in the survival of the profession. Please note that I said chiropractic research. This means research concerning the vertebral subluxation and its effects, not the symptomatic treatment of musculoskeletal pain.
Critical Thinking
There are several criteria that should be used when evaluating analytical procedures: reliability, validity,2 sensitivity, specificity, normative data, critical review and publication, duplication of findings and institutional involvement.
Reliability is a measure of the ability to reproduce findings. Inter-examiner reliability is a measure of the agreement between two or more examiners.
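To make the inter-examiner agreement arithmetic concrete, here is a minimal sketch using Cohen's kappa, a standard chance-corrected agreement statistic (the article does not name a specific statistic, so kappa is an illustrative choice, and the examiner ratings below are hypothetical):

```python
# Hypothetical sketch: inter-examiner reliability via Cohen's kappa.
# Two examiners independently rate the same 10 patients "pos"/"neg".
# Kappa corrects raw agreement for the agreement expected by chance.

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters beyond chance: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of cases where the raters match.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rates, summed.
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ratings: 8 of 10 cases agree.
examiner_1 = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
examiner_2 = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.6
```

Here 80% raw agreement shrinks to a kappa of 0.6 once chance agreement (50% for these marginals) is removed, which is why agreement statistics, not raw percentages, are the usual measure of inter-examiner reliability.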
Validity seeks to determine if the measurement in question measures what is claimed to be measured. A technique or measurement may be reliable, but not valid. Furthermore, a measurement may be valid for one application, but not another.
Sensitivity and specificity. If a device or procedure is claimed to detect a specific condition, sensitivity and specificity should be considered. Sensitivity refers to the proportion of individuals with the condition who have a positive test result. Specificity is the proportion of individuals without the condition who have a negative test result. In clinical practice, there may be some "tradeoff" involving sensitivity and specificity.
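The proportions defined above can be sketched directly from a 2x2 table; the counts here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical sketch of sensitivity and specificity from a 2x2 table.
# tp/fn: people WITH the condition who test positive/negative.
# tn/fp: people WITHOUT the condition who test negative/positive.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: proportion of diseased detected; specificity:
    proportion of healthy correctly ruled out."""
    sensitivity = tp / (tp + fn)  # positive results among those with the condition
    specificity = tn / (tn + fp)  # negative results among those without it
    return sensitivity, specificity

# Hypothetical counts: 100 people with the condition, 100 without.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)  # 0.9 0.8
```

The "tradeoff" the text mentions shows up here directly: loosening the criterion for a positive test moves cases from fn to tp (raising sensitivity) but also from tn to fp (lowering specificity), and vice versa.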
Normative data. If a measurement is involved, a normative database should be available.
Critical review and publication. Presentation at scientific symposia and/or publication in refereed, peer-reviewed journals means that the paper has undergone critical review. While this does not in any way guarantee the validity of the results, it does mean other investigators have had the opportunity to critically evaluate the work.
Duplication of findings. The case for the device or procedure is strengthened if there is independent corroboration by other investigators.
Institutional involvement. The case for a device or procedure is also strengthened if it is being taught under the auspices of a university or chiropractic college. Research under the aegis of an educational institution is a sign that the procedure is undergoing objective evaluation.
In considering these criteria, chiropractors are cautioned to avoid any and all prejudices that may cloud the evaluation. For example, a new technique should not be subjected to a more burdensome standard than an older, more popular technique.
A solid commitment to research and the critical thinking it engenders is essential if the profession is to grow and prosper in today's scientific environment. You can get involved. The survival of the profession may depend on it.
References
1. Historical Perspective: The Research Status of Spinal Manipulative Therapy. Natcher Conference Center, National Institutes of Health, June 9-10, 2005. http://nccam.nih.gov/news/events/Manual-Therapy/historical.htm
2. Kanchanaraksa S. Evaluation of Diagnostic and Screening Tests: Validity and Reliability. Johns Hopkins University, Bloomberg School of Public Health, 2008. http://ocw.jhsph.edu/courses/FundEpi/PDFs/Lecture11.pdf
Click here for previous articles by Christopher Kent, DC, Esq.