Randomized controlled trials are often considered among the best study designs for determining cause-and-effect relationships. Yet an analysis published in Anaesthesia in October 2020 reported that, of the randomized controlled trials submitted to the journal between February 2017 and March 2020, 44% contained false patient data.
This is concerning for diligent practitioners who want to rely only on high-quality research containing factual information. To help determine whether a study is likely reliable or potentially flawed, and therefore inaccurate, here are a few questions to ask.
- Who (or what) were the study subjects?
According to researchers at Stanford Medicine, testing biomedical interventions on animals offers many benefits. Some animals, such as mice, are biologically similar to humans and vulnerable to many of the same diseases. And because animals have shorter life cycles, it is easier to study them not just across an entire lifespan but also across generations, which can give a more complete picture of longer-term effects.
At the same time, organizations such as the Humane Society of the United States contend that animals are “very different from humans and, therefore, react differently.” Animals also don’t always develop the same diseases, and animal-based studies don’t necessarily mirror the way humans (or their diseases) respond to certain interventions.
Research published in the Cambridge Quarterly of Healthcare Ethics echoes this concern, questioning how reliably animal findings translate to human subjects and even suggesting that animal-based research might do humans more harm than good.
When possible, look for research involving human subjects. Such studies may be difficult to find for newer, more innovative treatment options, but when they exist, they offer more assurance that the reported effects will resemble those experienced by your patients.
- How broad was the pool of subjects?
Research involving a larger number of subjects is generally expected to be more reliable than a study with only a handful of participants. But another factor to consider is how broad the pool of subjects was in terms of demographics.
For example, in a 2021 randomized controlled trial to determine what effect chiropractic could have on migraines, all the study participants were adult women. In a 2020 study that sought to learn more about the effects of chiropractic on strength, balance and endurance in people with low-back pain, the study participants were all active-duty military personnel.
Studies such as these may be useful if you’re looking for research to support a certain modality or treatment for a patient in that demographic. Yet the same results might not be achieved in patients from other demographic groups, so keep this in mind when deciding how to apply the information presented.
- How were the effects measured?
Some studies collect their data directly from subjects via self-report. If the goal is to learn whether a certain chiropractic technique reduces neck pain, researchers may begin by asking subjects to rate their pain on a scale of 1 to 10. After the technique is applied, subjects rate their pain again to see whether it changed.
One problem with relying on self-reported data is the placebo effect: participants may believe they should experience a certain result and report it, even if the intervention didn’t directly produce that effect. Another issue is that people forget things or remember them inaccurately. Both situations can lead to inaccurate reporting, which decreases the validity of the results.
Effects captured and recorded by someone other than the subject are preferred. If that person is “blind,” meaning they don’t know who received the actual intervention and who didn’t, this is even better, as it further reduces the risk of researcher bias.
- How old is the study?
Just because a study was published years ago doesn’t mean it’s inaccurate. However, what we know today about a specific chiropractic modality or treatment can differ considerably from what was known when the study was conducted. If the research you’re reviewing doesn’t reflect or acknowledge innovations and advances in the field since then, you may get an incomplete, and sometimes inaccurate, picture.
To avoid this, look for newer research on the topic. Research search sites such as Google Scholar let you filter results by publication year. Review the articles that come up and consider whether more recent studies back up the older one’s findings or whether the conclusions have shifted based on what has been learned since it was published.
By paying attention to details such as these, it becomes easier to assess whether a study’s results are likely to be reliable and valid. Be willing to look at research critically, because what you see isn’t always what you get.
About the Sponsor
To learn more about Dee Cee Labs and their ongoing efforts to support and educate new chiropractic practitioners, visit https://www.dclabs.com/about.php.
CHRISTINA DEBUSK is a writer for Chiropractic Economics.