The "How Do We Do It" Question and Big Placebo....

This one is going to be a long post, since I'm trying to figure some things out and I'm using this as a tool to do so. Hopefully some people will jump in with information that provides some enlightenment, some education, or even more questions. (I'd also like to know if I'm not alone in my confusion!)

I'll start off by addressing what I call the FEP. What's the FEP? Well, it's my own acronym and it refers to the Famous Evidence Pyramid. Yep, I think it's famous. It's not likely to be featured in People magazine in a photo with Justin Timberlake, but it has been featured in other places that have a bit more brain fodder.

To anyone starting out trying to get their head around research, there...


Comment by Allan J Jones on May 27, 2010 at 3:07pm
Research Methods should be a compulsory subject for all health care workers. If you can't understand the way a hypothesis is framed, you can never understand the validity (or otherwise) of 'evidence' brought to light. I just wouldn't want to have to go through it again... same with biostatistics :=))
Comment by Vlad on May 27, 2010 at 3:07pm
Ummmm... I think the see-saw popped into my head on its own. UNLESS you mentioned it when I had a couple of glasses of wine in me and can't remember. Either way, it's there. You may have put it there, I dunno.
And yes, it's a good analogy.
Different researchers, different perspectives. I suppose the mixture is good, so long as there's not a huge imbalance (99% pro-external versus a poxy little 1% pro-internal). I don't suppose there's any way of finding that out, unless we were to research the researchers. Now that would be interesting.
Comment by Christopher A. Moyer on May 27, 2010 at 2:52pm
A see-saw is an even better analogy. (I think you might've gotten that from me while in Seattle!)

Different researchers place varying amounts of importance on internal or external validity depending on their perspective and background.
Comment by Vlad on May 27, 2010 at 1:47pm
Thanks for your comment, Doc.
You call the internal/external validity factor a two-sided coin. I have a picture in my head of it being like a see-saw in research. You can't have both sides up at the same time in one particular type of study. Maybe you can have some middle-of-the-road experimentation where neither side is particularly high; it's more or less level, and that isn't a bad thing. My issue is that people might assume external validity is all that matters. If the see-saw constantly has the external side high, or the internal validity side is rarely raised above a certain point, then that's a problem.

The importance of external validity was brought up by a researcher, and I wonder if most researchers think like her. She knows that there's a see-saw in place; anyone involved in research knows it. So why stress one side? If there isn't recognition that both are needed, then that's a potential problem, I think. But then again, I'm just learning.
Comment by Christopher A. Moyer on May 27, 2010 at 11:39am
I think you've hit on something with your FEP acronym, because that Pyramid does seem to show up a lot when the author is trying to describe clinical research to practitioners. And the Pyramid is O.K., up to a point. It makes the important point that there are different levels of evidence, and that there are practical reasons why we tend to start small (e.g., case studies, pilot studies) before we can produce the capstone studies that provide the best evidence (e.g., systematic reviews).

But it is also true that the FEP is too simplistic. It fails to emphasize the qualitative ways in which various research designs differ. I've been thinking about this lately in response to a question often raised by Julie Onofrio. On several of the internet sites we both frequent, she asks when and how a single study can overturn the findings of multiple previous studies, as sometimes happens. I think I'm starting to understand why she, and undoubtedly others, are puzzled by this. It is because there is no formula along the lines of

100 case studies = 10 randomized controlled trials = 1 meta-analysis.

Indeed, in some cases a single well-conducted experiment can outweigh a thousand, or a million, or any number of case studies. How can that be? The reason is that a well-conducted experiment has valuable features that address the weaknesses inherent in case studies and other forms of correlational research. (Much more can be said about this, but I'll try to keep it brief for now.)
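
To make that last point concrete, here is a toy Python simulation of my own; the numbers, the scenario, and the function name naive_difference are all made up for illustration, not taken from any real study. Suppose a therapy truly does nothing, but healthier clients are more likely to seek it out. Uncontrolled observations will show a "benefit" no matter how many cases you pile up, while randomizing who gets the therapy breaks the link between treatment and prior health:

    import random

    random.seed(1)

    def naive_difference(randomize, n):
        """Treated-minus-untreated mean outcome when the therapy truly
        does nothing, but health (a hidden confounder) drives both who
        seeks treatment and how well people do."""
        treated, untreated = [], []
        for _ in range(n):
            health = random.gauss(0, 1)               # hidden confounder
            if randomize:
                gets_therapy = random.random() < 0.5  # coin flip, as in an RCT
            else:
                # self-selection: healthier people seek the therapy more often
                gets_therapy = random.random() < (0.8 if health > 0 else 0.5)
            outcome = health + random.gauss(0, 1)     # true treatment effect = 0
            (treated if gets_therapy else untreated).append(outcome)
        return sum(treated) / len(treated) - sum(untreated) / len(untreated)

    # Piling up correlational observations never removes the bias...
    print(naive_difference(randomize=False, n=100_000))  # clearly positive: spurious
    # ...while even a modest randomized experiment centers on the truth.
    print(naive_difference(randomize=True, n=1_000))     # close to zero: the real effect

The bias in the correlational comparison doesn't shrink as the cases accumulate, because it isn't sampling error; only the change in design removes it.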

Your point about internal and external validity is also a very important one in massage therapy research. I was at the panel discussion where external validity was enthusiastically applauded, and I can understand why; much of the audience consisted of practitioners who want the research to resemble actual practice as much as possible. Indeed, that is important. But in clinical research this is a two-sided coin, the other side of which is internal validity.

Internal validity matters just as much, and in one of my own presentations later that day I expressed the opinion that there must be massage therapy studies that maximize each (note that it is logically impossible to maximize both in the same study). We need studies that maximize internal validity (i.e., experimental control) to accurately determine precise cause-and-effect relationships. We also need studies that maximize external validity (i.e., that examine treatment under real-world conditions) to determine whether, and how well, the therapy works in the real world.
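
As a rough illustration of that trade-off, here is another toy sketch; again the effect size, adherence rates, and the function measured_benefit are invented for the example, not drawn from any actual trial. A high-internal-validity (explanatory) study pins down the causal effect of the full protocol, while a high-external-validity (pragmatic) study tells you what clients actually experience once real-world factors like patchy adherence dilute it:

    import random

    random.seed(2)

    TRUE_EFFECT = 1.0  # hypothetical benefit of the full, ideal protocol

    def measured_benefit(adherence, noise, n=5000):
        """Average benefit observed when each participant actually receives
        the full protocol with probability `adherence`."""
        total = 0.0
        for _ in range(n):
            received = TRUE_EFFECT if random.random() < adherence else 0.0
            total += received + random.gauss(0, noise)
        return total / n

    # Explanatory study: tight experimental control, near-perfect adherence.
    print(measured_benefit(adherence=0.98, noise=0.5))  # ~1.0: the causal effect itself
    # Pragmatic study: real-world delivery, patchy adherence, messier outcomes.
    print(measured_benefit(adherence=0.60, noise=1.5))  # ~0.6: what practice delivers

Neither number is "wrong"; the two designs answer different questions, which is exactly why both kinds of study are needed.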
