massage and bodywork professionals

a community of practitioners

I was assigned to read this article for one of my classes and thought it was very interesting. No, the article does not have anything to do with massage at all. But for those who are interested in the research aspect of massage, other CAM, or just those random articles about anything that spout statistics, this might help you to understand research/statistical literacy. Trust me, the article is more interesting than it sounds. There may be a few terms you do not understand, but on the whole I think it does a great job of giving a topic and then providing a very clear example of what it is trying to say. So, hopefully some people here will get something out of it!


Replies to This Discussion

Yeah, decent article; well-explained points.

You might post it on Rosemary's research site as well.
Good idea (and done!) :) I think I'll post it on Bodhi's as well.

Robin Byler Thomas said:
Yeah, decent article; well-explained points.

You might post it on Rosemary's research site as well.
Thanks Robin for the suggestion and thanks Kim for posting this.
I've still to read it since I've been up to my eyes with work, but when I do I'll comment on it. Plus I know I'm going to have to read it 3 times since it usually takes that before it sinks in!

Statistics is one of those things that probably most therapists aren't that interested in or drawn to. I think we have a habit of just sticking with the wordy part and not looking at the stats, and it's another one of those areas where we trust what we're being told in the stats.
In one way it's easy to pull the wool over someone's eyes if they're not statistically literate, but the thing that I wonder about is how flaws can slip by a peer review. With the written part there may be omissions in the documentation, with confounds and biases that might be hard to find, but with the stats? I would have thought it would be hard to get flaws past a review (if it's a good review).

Kim, since you're studying this cool stuff, are you finding that there's a huge percentage of flaws in the statistical analysis to the degree that you nearly have to presume that there will be flaws when you're critiquing studies? Or am I being a pessimist here?
Vlad, I think you're right that a lot of people are turned off by stats because it's an entirely different language. The thing that I liked about this article is that it's not talking about specific analyses or tests, but more just learning to read between the lines and understand the verbal language around the numbers. It really is worth a read for anyone interested in stats and research; the article doesn't really talk much about numbers at all, just interpreting conclusions and study design.

Here is one example they give about association vs. causation.

"Consider three claims about the results of another observational study.
1. Juveniles who watch more TV violence are more likely to exhibit antisocial behavior.
2. TV violence is positively associated with antisocial behavior.
3. If juveniles were to watch less TV violence, they would exhibit less antisocial behavior.

All too many readers mistakenly conclude if #2 is true, then #3 must be true. But the difference between #2 and #3 is the difference between association and causation. In an observational study, the truth of #2 is evidence for the truth of #3; the truth of #2 is not sufficient to prove the truth of #3."

The meaning here is, just because there are associations (points #1 and #2) does not mean that #3 is true: other factors could be influencing TV watching, violence, and antisocial behavior (such as family stability or instability, parenting, growing up in inner-city Detroit vs. your quaint little rural community, etc.).
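If anyone wants to see that "other factors" idea in action, here's a little Python sketch I put together (the numbers are completely made up): a single confounder drives both TV watching and antisocial behavior, and an association shows up even though neither causes the other.

```python
import random

random.seed(0)

# Purely hypothetical numbers: one confounder ("unstable home") drives BOTH
# TV-violence exposure and antisocial behavior; the two are never linked directly.
n = 10_000
tv, antisocial = [], []
for _ in range(n):
    unstable_home = random.random() < 0.3  # the confounder
    tv.append(random.random() < (0.8 if unstable_home else 0.2))
    antisocial.append(random.random() < (0.6 if unstable_home else 0.1))

# Yet an observed association appears anyway:
high = sum(tv)
rate_high = sum(a for t, a in zip(tv, antisocial) if t) / high
rate_low = sum(a for t, a in zip(tv, antisocial) if not t) / (n - high)
print(f"antisocial rate, watches TV violence: {rate_high:.2f}")
print(f"antisocial rate, avoids TV violence:  {rate_low:.2f}")
```

The antisocial rate comes out clearly higher in the high-TV group even though the simulation never links the two, which is exactly why #2 doesn't prove #3.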

Vlad, here is a section on determining good vs. bad studies. It's rather lengthy, but I feel it's worth posting the full section.

"THE QUALITY OF A STUDY
To be statistically literate, one must be able to distinguish an observational study from an experiment. In an experiment, the researcher has effective physical control over which subjects receive the treatment; in an observational study, the researcher has no physical control over who receives the treatment. Those who are statistically illiterate may mistakenly presume a study is an experiment if it involves any kind of treatment, if it involves a control group, or if it involves measurements that are objective. They may mistakenly presume a study is an observational study if it involves a survey, if it lacks a control group or if it involves measurements that are subjective (a self-report of things that are unobservable such as one's feelings or values).

To be statistically literate, one must be able to distinguish a good experiment from a bad one. When they are told the subjects were randomly assigned to the treatment and control groups (as in a clinical trial), readers may mistakenly conclude this study must be an experiment: a good experiment. But if the subjects in this study have knowledge of the treatment then their informed behavior may transform a good experiment into a bad one.

For example, consider an experiment that indicated magnets decrease pain. Fifty subjects having pain associated with post-polio syndrome were randomly assigned to two groups: the treatment group received concentric magnets; the controls received inert placebo 'magnets'. A major decrease in pain was reported by 75% of those in the treatment group -- 19% in the control group. [Natural Health, August, 1998, page 52.] How strongly does this result of this study support the claim that magnets decrease pain?

A statistically literate analyst would investigate the possibility of bias introduced by the Hawthorne effect: the change in behavior in those subjects who were aware of receiving the treatment. Could these subjects have detected whether they had a magnet or not? And if the researchers weren't double blinded about which subjects received the real magnets, could the researchers have inadvertently communicated their knowledge to the subjects? If the researchers weren't double-blinded, perhaps there was bias from the halo effect: seeing what the researcher wants to see. Perhaps the researchers inadvertently allowed their knowledge of whether or not the subject had a magnet to 'push' a subject's borderline response into the desired category.

Consider the quality of another experiment. A group of homeless adults were randomly assigned to either an inpatient program or an outpatient program. Obviously the subjects knew of the treatment. Their informed behavior may generate a Hawthorne effect: a modification of the subject's behavior owing to their awareness of some aspect of the treatment. In this case, those homeless who were assigned to the in-patient program were less likely to enter the program than those who were assigned to the outpatient program. A differential non-response in participation can create an associated bias in the results. And even if the same percentage failed to show up in each group, their informed knowledge of which group they were in may create a nonresponse bias in the observed results. This experiment may have been seriously compromised by the informed non-response."

Statistical Literacy: Thinking Critically about Statistics
Milo Schield
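As a wee aside on the magnet example: the excerpt doesn't give the group sizes, so assuming equal groups of 25 (my assumption, purely for illustration), here's a rough Python check of the reported 75% vs. 19% using a standard two-proportion z-test. The gap comes out highly "significant", which is exactly the article's point: no amount of statistical significance can fix broken blinding.

```python
from math import sqrt

# Figures from the quoted magnet study; the excerpt does not state group
# sizes, so equal groups of 25 are an assumption for illustration only.
n1, n2 = 25, 25
p1, p2 = 0.75, 0.19  # reported rates of major pain decrease

# Standard two-proportion z-test with a pooled proportion
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"risk difference = {p1 - p2:.2f}, z = {z:.2f}")
```

A z of roughly 4 would normally be decisive, but it says nothing about whether subjects could feel which "magnet" they got.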

As far as why poor-quality studies still get published, it depends on the journal. The higher-tier the journal, the more submissions it has to choose from, and the lower the likelihood of a poor-quality study getting published. A lower-tier journal may be more desperate for submissions and might overlook errors or less-than-stellar quality in order to fill the journal. So that is always something to consider. As a reader, you just need to educate yourself on certain things and be a critical reader, even if you like or want to believe what you are reading because it goes along with your pre-existing beliefs.

I'll write more later; I have to go to school now.
Vlad said:
Thanks Robin for the suggestion and thanks Kim for posting this.
I've still to read it since I've been up to my eyes with work, but when I do I'll comment on it. Plus I know I'm going to have to read it 3 times since it usually takes that before it sinks in!

Statistics is one of those things that probably most therapists aren't that interested in or drawn to. I think we have a habit of just sticking with the wordy part and not looking at the stats, and it's another one of those areas where we trust what we're being told in the stats.
In one way it's easy to pull the wool over someone's eyes if they're not statistically literate, but the thing that I wonder about is how flaws can slip by a peer review. With the written part there may be omissions in the documentation, with confounds and biases that might be hard to find, but with the stats? I would have thought it would be hard to get flaws past a review (if it's a good review).

Kim, since you're studying this cool stuff, are you finding that there's a huge percentage of flaws in the statistical analysis to the degree that you nearly have to presume that there will be flaws when you're critiquing studies? Or am I being a pessimist here?
Kim,
I read it and yes, it's a great article. When I first saw that it was a doc on statistics I thought it was going to be something different entirely - it's good.

I think the way it highlights the association versus causation aspect is really well explained. I know that's an area in which I can get confused (or should I say "misled"? :) ) easily. I think that has to do with people's preconceptions or what they want to read (or find or believe).

Also, the part where it goes into interpreting stats:
"Is this statistic true?"
"Is this statistic representative?"
"Is this statistic factual or inferential?"

It also links that to the importance of identifying confound/bias problems - cool.

Yep, well worth a read.

(Just as a wee sidenote. I smiled at the first table and said to myself - "Why can't everything just be in that "exclusively deductive" column? Things would be so much easier if everything was just a 1 or a 0" :) )

Have you dug into Cohen's d stat, eta-squared and the like? Heavy stuff.
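I've dipped into Cohen's d a bit. For anyone curious, it's less scary than it sounds: just the difference in group means divided by the pooled standard deviation. A quick Python sketch with made-up pain scores (not from any study in this thread):

```python
from statistics import mean, stdev

# Toy numbers, purely for illustration of the formula:
# Cohen's d = (difference in means) / (pooled standard deviation)
control = [52, 55, 60, 58, 54, 57]  # e.g., pain scores, control group
treated = [45, 48, 50, 47, 49, 46]  # e.g., pain scores, treatment group

n1, n2 = len(control), len(treated)
pooled_var = ((n1 - 1) * stdev(control) ** 2
              + (n2 - 1) * stdev(treated) ** 2) / (n1 + n2 - 2)
d = (mean(control) - mean(treated)) / pooled_var ** 0.5
print(f"Cohen's d = {d:.2f}")
```

(These toy numbers give an absurdly large effect; in real studies, Cohen's own rough benchmark treats d = 0.8 as already "large".)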


© 2024   Created by ABMP.