A 4-minute read…
Do people with the same musculoskeletal diagnosis need the same treatment to get better? Do those with tendinopathy always need to load it? Do those with chronic low back pain always need pain education or spinal manipulation? Well, the simple answer is no, but also yes! Confused? So am I… so let me try to expand a bit more.
The topic of using standardised evidence-based treatments and protocols for specific pathologies and diagnoses in musculoskeletal physiotherapy is something I have always been interested in. But I have been thinking about it more recently due to the growing research that highlights there is very little difference in outcomes with different treatments for individuals with similar problems or pathologies (ref, ref, ref).
IT’S A SHAM!
This is also true when our physio treatments are occasionally compared to shams or placebos, with research again often finding very little difference in outcomes (ref, ref, ref). Now, I don’t want to get too nihilistic or despondent here, but the awkward and uncomfortable truth for musculoskeletal physiotherapy is that a lot of its treatments just don’t seem to do much over and above placebo, time, and natural history, be that massage or manipulations, needles, tapes, suction cups, scraping tools, or electro machines that go bing.
This is also true for our exercise-based treatments, be that simple strengthening, stretching, or mobilising exercises, even the overly complicated motor control corrective exercise claptrap. They all seem to do very similar things for most people with similar problems and pathologies, and that is they don’t do much more than distract people in pain whilst natural history kicks in (ref).
So this has left me wondering whether, in the big ol’ grand scheme of things, it really matters what the hell we physios get our patients doing if it all has similar effects and none of it is much better than doing nothing. Does it really matter if someone with back pain gets spinal manipulations or deadlifts? Does it really matter if someone with patellofemoral knee pain gets k-tape or glute exercises?
Well, again yes it does and no it doesn’t! Still confused… well hold on a bit longer and I will try and clarify soon!
EVIDENCE-BASED PRACTICE AND GLACIERS
Since physiotherapy began to research and investigate what it does, there has been a slow shift towards using evidence-based interventions, and I mean a really slooooow shift… think glacial speeds. Anyway, this slow adoption of evidence-based treatments has thankfully meant a reduction in less effective treatments and outright woo and quackery being used within the profession. But it also means more people with similar diagnoses are now given similar-looking treatments and protocols.
A classic scenario here is someone with Achilles tendinopathy. Ten years ago, they would have had a wide range of treatments, from tendon friction massages, calf muscle massages, taping, acupuncture, and therapeutic ultrasound, to a range of stretches and exercises for the lower leg and other areas to address so-called dysfunctional movements thought to have caused the pathology.
However, since more research has been conducted into Achilles tendinopathy, most are now simply given advice, load management, and specific exercises to load the tendon. Often these are slow, heavy, eccentric types of exercise, reflecting those used in the well-known research trials.
Now don’t get me wrong, I think the removal of these wasteful and outdated treatments is great; I mean, friction massages were a bugger for my fingers and thumbs. But where I think this adoption of evidence-based practice is not such a good thing is when treatments are limited to only what the research trials did.
THE INDIVIDUAL WITHIN THE MEAN
Limiting our clinical treatments to exactly replicate what was conducted in research trials fails to recognise an individual’s response to an evidence-based treatment. Only using one particular type, style, or dose of treatment based on a research trial protocol is a failure of evidence-based practice and sound clinical reasoning.
It’s important to remember that most research trials report their findings as an ‘average effect size’. This is usually presented with a 95% ‘confidence interval’, which, strictly speaking, describes the statistical uncertainty around that average estimate rather than the spread of individual responses. Still, as a rough rule of thumb, a wide confidence interval is a warning sign: either the trial was small, or the treatment’s effects varied a lot between subjects, or both.
Often what happens in research trials is that a few individuals get an amazing response from the treatment, some get a good result, others an average one, some only minimal effects, and a few have negative or adverse responses. The reasons for this variation are complex and uncertain, but often it’s down to differences in subjects’ characteristics such as their health, culture, concerns, beliefs, past experiences, and occupational and/or social status.
It can also be due to variations in the characteristics of the clinicians or researchers conducting the trial, such as their training, experience, beliefs, and biases. And finally, it can be due to differences in how the treatment is applied or administered, such as variations in setting, location, instructions, level of compliance, and timing of the outcome measurement.
All these differences in patient, clinician, and intervention characteristics are termed ‘clinical heterogeneity’, and they all have the potential to significantly confound research results, leading to inaccurate conclusions about what does and doesn’t work, why, and for whom. Currently, a lot of healthcare research does a poor job of recognising and/or controlling for clinical heterogeneity, and as a consequence this can and does affect the results and conclusions drawn (ref).
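To make the difference between an average effect and individual responses concrete, here is a toy Python simulation. All the numbers are made up purely for illustration: it generates 100 heterogeneous “pain improvement” scores, computes the trial’s mean effect and its 95% confidence interval, and then shows how few individual responses actually sit inside that interval.

```python
import math
import random
import statistics

random.seed(42)

# Simulate individual improvements for 100 trial subjects (arbitrary units).
# Responses are deliberately heterogeneous: most improve a little, a few
# improve a lot, and some get worse (negative values).
responses = [random.gauss(mu=10, sigma=15) for _ in range(100)]

mean_effect = statistics.mean(responses)
sd = statistics.stdev(responses)

# 95% confidence interval for the MEAN effect (normal approximation).
sem = sd / math.sqrt(len(responses))
ci_low, ci_high = mean_effect - 1.96 * sem, mean_effect + 1.96 * sem

# The CI describes uncertainty about the average, not the spread of
# individuals, so most subjects' own responses fall outside it.
within_ci = sum(ci_low <= r <= ci_high for r in responses)

print(f"mean effect: {mean_effect:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")
print(f"subjects whose own response falls inside that CI: {within_ci}/100")
print(f"individual responses range from {min(responses):.1f} to {max(responses):.1f}")
```

Run it and you’ll see a tidy-looking mean with a narrow confidence interval, sitting on top of individual responses that range from markedly worse to markedly better: the ‘average’ patient the trial describes may barely exist in the room with you.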
SCIENCE IS BROKEN!
Now, before some of you rush down to the comments section to tell me “science is broken” or “I don’t need research to tell me what treatments work”, just stop, because I am not saying research can’t tell us what treatments do or do not help people. I am just highlighting that there is often a lot of uncertainty about how much they help, why, and who they can or cannot help.
I am also not saying that we should suddenly ignore research or abandon the scientific method of investigating our treatments and go back to relying on clinical observation alone, which has consistently been shown to be unreliable and full of biases (ref). What I am saying is that we should recognise the variation in responses to our evidence-based treatments and be more flexible, and less rigid and constrained, in how we prescribe and apply them.
This, however, is not a green light to go wild and head totally off-piste from research-based guidelines; rather, it’s an invitation to tinker with and tailor evidence-based treatment parameters and prescriptions to better fit the individual in front of you, or to adjust them based on their response.
Just because one particular type, method, or dose of exercise, or even manual therapy, has been shown to help individuals with a particular problem or pathology in a randomised controlled trial, this doesn’t mean it helps everyone equally. It also means you don’t have to use the same treatment for all people with the same diagnosis, and you can fiddle with and adjust the settings, parameters, dosages, and application of a treatment in the clinic to suit your patient’s response or situation.
SO, TO SUM UP…
- Friction massages suck ass… stop doing them!
- Research isn’t perfect but it’s the best method we have to work out what may or may not help our patients, so embrace it!
- Look at the effect size of a treatment to get an idea of how much it may help your patients… but don’t get too excited!
- Look at the confidence intervals to get an idea of how much variation there may be in that effect!
- Look at the subjects in the trial and consider if they reflect your patients.
- Be more flexible in your prescription and application of evidence-based treatments.
- Remember there’s always an individual response buried within a mean effect.
As always, thanks for reading.