A groovy research literacy thread - just a bit of reading each day towards enlightenment....

.....well, it may not be TOTAL enlightenment, but it might be a wee glow and a wee glow is better than total darkness.

The squirrel is in learning and sympathy mode.
I'm in sympathy mode because I can relate to people being confused and overwhelmed (since I've been there!).
There's an information overload and let's face it - sometimes it's just difficult to know where to begin. 

And so I've hatched a plan.

Every day over the next few weeks I'm going to add some reading material to this thread.  Just one piece a day. The material will vary and it will come from different sources, but the sequence in which I present them will hopefully make sense.  In some cases you might see some overlap in the information provided, but hey, I'm a big believer in repetition being the best way of learning.

At the end of the few weeks you will know what the p is all about.  Yes, the p.  The p is a big thing, believe it or not.  You will know what internal validity means.  Yes, that thing.  Sounds intriguing, doesn't it?  And best of all, you'll be able to look at research with some level of discernment.  And that's the whole reason for the thread.

OK, so here's the first piece from Ravensara Travillian. 

Enjoy!





There were some huge points in that last article, weren't there?

Here are some things that I want you to ask yourself:
Do you know what the null hypothesis means?
Do you know what the phrase "to reject the null hypothesis" means?
Do you know what the term "false positive" means?
And lastly, do you know what "p<0.05" means?

If you have grasped these terms, pat yourself on the back. If you don't quite understand them,
read the article again (possibly a few times).
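
If you like seeing these terms in action, here's a wee sketch in Python (using scipy) with completely made-up numbers - it's just an illustration, not anything from the article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical example: pain scores after massage vs. after rest.
massage = rng.normal(loc=4.0, scale=1.5, size=30)   # treatment group
rest    = rng.normal(loc=5.0, scale=1.5, size=30)   # control group

# Null hypothesis: the two groups have the same mean pain score.
t_stat, p_value = stats.ttest_ind(massage, rest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# "Rejecting the null hypothesis" = deciding the groups really differ.
# A "false positive" is rejecting the null when it was actually true;
# with a 0.05 cut-off, that happens about 5% of the time by chance alone.
if p_value < 0.05:
    print("Reject the null hypothesis - statistically significant difference.")
else:
    print("Fail to reject the null hypothesis.")
```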

Writing stuff down also helps:
Do you agree with what was written here?

OK - the next article is here.
Do you agree with what was written here?

Nope.
Oh good - someone is reading this!!!!
I don't either........:)

(I didn't write it, by the way - found it online!)

OK - I'm not going to say anything - does anyone want to give it the critical eye and say what's wrong with it? Just give it a shot and you'll get virtual brownie points from the squirrel just because it means you're thinking and you've been reading (and you'll be making me smile since it means that I'm not posting on here every day for nothing!).
Oh, good. I was thinking you'd want to know why *I* don't agree with it, but it would be more interesting to hear from someone else first, if someone wants to take a swing.

By way of encouragement, it's worth noting that there isn't a firm right or wrong answer - whether that chart is "right" or not is a matter of interpretation, at least to some degree. So let's hear someone's interpretation...
OK, so it looks like no one is biting.
Here's my take - the bottom of the sheet is fine and it's true that the lower the p value, the better. In fact, p<0.01 could be viewed as excellent. It's the top of the sheet that I question.
It just needs to be redone to where p>0.05 has no significance - it's just a question of where you "draw the line".
What do you say, Doc?
And on we plundered......After looking at that last article, hopefully people will be saying
"Methodology Rocks" and I have to admit, that's the squirrel's point of view. This is the most interesting part of any research and it's the part that people need to pay a lot of attention to in my view. Why? Because it's basically what makes research good, so-so or just plain bad.
Ravensara explains this well - and she addresses some of the questions that you will be asking yourself when you look at a study. Was there a predetermined protocol? Could the study be replicated? What about the statistical power of the study? Did they include enough information for you to be able to critique it well? (That one's a biggie in my view - I'm suspicious when information is left out, but that's just me!)
Since I'm in learning mode too (and will be forever), what I can say is that the collection of questions you ask yourself when looking at studies will probably keep growing - and that's one of the reasons why it's cool. The other main reason is that methodology is pretty dang interesting.
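
Since statistical power got a mention, here's a rough sketch of what a power calculation looks like (this uses Python's statsmodels, and the effect size and power targets below are just conventional defaults - not numbers from any real study):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group would a study need to detect a
# medium-sized effect (Cohen's d = 0.5) with 80% power at alpha = 0.05?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Roughly {n_per_group:.0f} participants per group needed.")

# And if a study only recruited 15 per group, how much power did it really have?
actual_power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
print(f"With 15 per group, power is only about {actual_power:.2f}.")
```

In other words, a wee study can easily be too small to detect the very effect it set out to find - which is exactly why power is one of those questions worth asking.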

Here's something to ponder:
Most of the time no one will ever tell you whether a study was "good" or not and even if someone does tell you one way or the other, you should question the claim.
It's up to you to make up your own mind.
BUT here's what everyone should think about.
If you don't have a critical view of research there is a chance that the wool could be pulled over your eyes. It would be very easy to sell a "here's research that shows that such-and-such works" claim. How much of that do you think goes on in our lives, never mind our profession?
How much of it goes on in the media when you read or hear "Research has shown..."?
Could it be viewed as a form of marketing that is taking advantage of our own ignorance?
The Research-illiterate Niche - do you think that segment is on someone's marketing plan somewhere?
Considering how much coverage marketing gets on this site, maybe it's time to think about how you're being targeted. I don't want to be in the Research-illiterate Niche. Do you?

OK - crazy time:
You're now familiar with the Null Hypothesis, but what about the Dull Hypothesis?


This next article touches on ANOVA, which is short for analysis of variance (rule of thumb: a high F means a low p).
Here's the next article.
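And if you want to see that rule of thumb in action, here's a wee made-up example (Python with scipy - the three groups and their scores are invented purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(50, 10, 25)   # hypothetical: massage
group_b = rng.normal(55, 10, 25)   # hypothetical: relaxation recording
group_c = rng.normal(65, 10, 25)   # hypothetical: no treatment

# ANOVA compares the spread BETWEEN the group means with the spread
# WITHIN each group. That ratio is the F statistic.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.5f}")
# The bigger the F, the smaller the p tends to be.
```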
Just started reading this thread.....I'm taking a break at this point to go to work. Thank you so much for putting this up. I am constantly amazed at some of the info offered up by fellow therapists to their clients....believing it on trust alone because that's what they were taught in school. When I mention that some of these theories might not be true (even when offering a source to validate my position) I'm looked at as a crazy person or, even worse, someone not being true to my profession. Some, however, have come over to my side and are really interested. To me, it's a whole new way of approaching massage....and it is uncomfortable for some, but I am embracing it. I am getting answers to questions I had in school (10 years ago) that were never answered sufficiently.
Vlad said:
That website was a bit heavy, wasn't it?
Nearly too much info?
I know how it is, but hey, if you bookmark it, you'll find it a useful wee resource for further on down the thread OR if you're a go-getter-proactive-I-wanna-know-this-stuff-now type of person, just go to the home page of that site and start going through it.

Hopefully you're now a little clearer on the difference between correlation and causation.

This is a big thing in research and I hope the examples gave you some insight into why that is the case.
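
Here's a wee toy demonstration of why the two get confused - the classic ice-cream-and-sunburn example (not from the article, and the numbers are invented). Both variables track a third thing (sunshine), so they correlate strongly even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(1)
sunshine_hours = rng.uniform(2, 12, 100)          # the lurking variable

ice_cream_sales = 30 * sunshine_hours + rng.normal(0, 20, 100)
sunburn_cases   = 5 * sunshine_hours + rng.normal(0, 5, 100)

r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"Correlation between ice cream sales and sunburn cases: r = {r:.2f}")
# A big r here says nothing about ice cream causing sunburn -
# the sunshine is driving both.
```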

Today's read is a LOT easier to digest, and it's by Martha Brown Menard.
Note the reference to cause and effect - and this is our first introduction to the Evidence Pyramid! We'll be coming back to the evidence pyramid in another post.
Yes, today it's a nice easy read.......but beware - things will change again soon.
THANKS, CHOICE!!!
I'm so glad you're reading this! I just hope people don't get put off by some of the lingo, but really if people just remember wee rules of thumb and know how to ask basic questions, then they're well on the road.
I know what you mean about being viewed as a pariah when you question theories. I think it's a cultural issue within our profession and hopefully it will change to where people enjoy looking at things with a more critical eye. We should be questioning and analyzing every theory and finding out whether there is any truth to it - and that is being true to the profession.
Thanks again!
Vlad said:
OK, so it looks like no one is biting.
Here's my take - the bottom of the sheet is fine and it's true that the lower the p value, the better. In fact, p<0.01 could be viewed as excellent. It's the top of the sheet that I question.
It just needs to be redone to where p>0.05 has no significance - it's just a question of where you "draw the line".
What do you say, Doc?

Nope, totally disagree with you Vlad. The issue is using null hypothesis significance testing (NHST) as something more than just a line in the sand, which is all it is. It is a categorical tool for determining whether an effect exists or does not exist. Interpreting the various p values as weaker or stronger evidence for a hypothesis is misguided. (But despite this, researchers make this mistake all the time.)

Effect sizes and confidence intervals are much better tools for assessing the relative strength of an effect.
Really? So "the lower the p, the better" is wrong? Is it just a binary "Yay or nay" thing that people should consider?
If so, then some things I've read about it are misleading.
Oh - but you have to give me the "line in the sand" - I did say it was a line and the line was definitely wrong in that paper.

I'm glad you're on here, Doc - you're keeping me right.
Exactly, it's a binary tool.

You can probably find some researchers who see it differently, but they're wrong. :)

NHST is a useful tool, but it also has some pretty serious limitations. Effect size estimation, in conjunction with confidence intervals, is a better approach in many ways, but NHST has been in use for a long time and will not be easily replaced, even in instances where it should be.
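
To make Doc's point a bit more concrete, here's a wee sketch (the data is invented, and this is just one simple way to do it) of reporting the size of a difference with a 95% confidence interval instead of leaning on the p value alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treatment = rng.normal(4.0, 1.5, 30)   # hypothetical post-massage scores
control   = rng.normal(5.0, 1.5, 30)   # hypothetical control scores

diff = treatment.mean() - control.mean()

# Standard error of the difference between the two means.
se = np.sqrt(treatment.var(ddof=1) / len(treatment) +
             control.var(ddof=1) / len(control))
df = len(treatment) + len(control) - 2   # rough degrees of freedom for this sketch

ci_low, ci_high = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"Mean difference = {diff:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
# The interval tells you how big the effect plausibly is,
# not just whether it crossed an arbitrary line in the sand.
```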
I like knitting. Anyone want one knitted as a Christmas present? Friend me and place your order now:


OK, so I think this is ironic.
There it was....the build up to the p and what happens? Turns out the squirrel needed educated!
BUT I'm all for being corrected in public since it means that others might learn from my mistakes (so long as the mistakes aren't TOO embarrassing).

So that last article had a lot of good info in it and it hit on some key points:
Big ups to the Big Fs! Big Fs are good!
STAI and STAIC - the C version is for measuring anxiety in kiddos - poor babies.
POMS is the Profile of Mood States, not a thing that researchers use in research cheerleading (they probably use some other objects, like Bunsen burners or slide rules or something).

OK, so Doc brought up confidence intervals and for those who don't have a life, here's a link
and if you click on the link within that link to CI info, you will find a handy dandy little page for effect size calculations. And if you're so geeky that you're an embarrassment to everyone around you, you can go ahead and click on the link within the link within the link to the Effect Size Lecture Notes.
OK - wee rules of thumb with effect sizes (and Doc can keep me right on this).
If you see a wee d, that's Cohen's d: small is round about 0.2, medium is round about 0.5, and anything over 0.8 is large (a strong treatment effect). There's a scale with this one (UNLIKE THE P VALUE). For r values, anything over 0.371 is large and indicates a stronger treatment effect.
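
And if you fancy calculating a d yourself, here's a wee calculator to go with those rules of thumb (again, the group scores below are made up purely for illustration):

```python
import numpy as np

def cohens_d(group1, group2):
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

rng = np.random.default_rng(5)
massage = rng.normal(4.0, 1.5, 30)   # hypothetical treatment scores
control = rng.normal(5.2, 1.5, 30)   # hypothetical control scores

d = cohens_d(massage, control)
print(f"Cohen's d = {d:.2f}")   # a magnitude of around 0.8 or more counts as large
```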

t-tests involve making sure that your wee cup of tea is at the right temperature.

This next article is pretty straightforward - it's about graphs and graphs are groovy.
