Tuesday, November 08, 2005

How much is false?

I had nearly finished this post when my browser crashed!!! I hate software (and of course myself for being so terribly human and not saving my work...)

In my last post, I spoke about an article by John Ioannidis in which he proposes tell-signs for an increased likelihood of false research results. Before launching myself into a discussion of whether PDS research shares most of these tell-signs, I want to make you aware of another similar but PDS-specific article by Finn, Bothe, and Bramlett. The title is Science and pseudoscience in communication disorders: criteria and applications. I met Patrick Finn once in Montreal, and once in Tucson (Arizona). I was at a conference on human consciousness (much more interesting than PDS) and we had lunch in a Greek restaurant. I never met Dr Bothe, but I know that she wrote an article some years ago in which she basically said that most (or was it all?) research on the treatment of PDS is false. So this article fits well into her agenda. :-) I will have a look at their article and give my feedback on the blog. But here is already the abstract:

PURPOSE: The purpose of this tutorial is to describe 10 criteria that may help clinicians distinguish between scientific and pseudoscientific treatment claims. The criteria are illustrated, first for considering whether to use a newly developed treatment and second for attempting to understand arguments about controversial treatments. METHOD: Pseudoscience refers to claims that appear to be based on the scientific method but are not. Ten criteria for distinguishing between scientific and pseudoscientific treatment claims are described. These criteria are illustrated by using them to assess a current treatment for stuttering, the SpeechEasy device. The authors read the available literature about the device and developed a consensus set of decisions about the 10 criteria. To minimize any bias, a second set of independent judges evaluated a sample of the same literature. The criteria are also illustrated by using them to assess controversies surrounding 2 treatment approaches: Fast ForWord and facilitated communication. CONCLUSIONS: Clinicians are increasingly being held responsible for the evidence base that supports their practice. The power of these 10 criteria lies in their ability to help clinicians focus their attention on the credibility of that base and to guide their decisions for recommending or using a treatment.

So, let me come back to Ioannidis' tell-signs. I will go through all of them and discuss whether they are relevant to PDS research. Research is less likely to be true:

a) when the studies conducted in a field are smaller; YES. Most PDS studies have quite small sample sizes, and small samples make the estimated results much less reliable.
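To make the small-sample point concrete, here is a quick simulation sketch. The numbers are invented for illustration (an assumed true group difference of 0.5 standard deviations); the point is only how much the estimate swings around with 10 subjects per group compared to 500:

```python
import random
import statistics

random.seed(1)

# Hypothetical assumption for this sketch: a true group difference of
# 0.5 standard deviations between a PDS group and a control group.
TRUE_DIFF = 0.5

def observed_diff(n):
    """Simulate one study with n subjects per group; return the estimated difference."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    pds = [random.gauss(TRUE_DIFF, 1.0) for _ in range(n)]
    return statistics.mean(pds) - statistics.mean(control)

# Repeat each "study" 200 times and see how much the estimates scatter.
spreads = {}
for n in (10, 500):
    estimates = [observed_diff(n) for _ in range(200)]
    spreads[n] = statistics.stdev(estimates)
    print(f"n = {n:3d} per group: estimates scatter with sd ~ {spreads[n]:.2f}")
```

With 10 subjects per group the estimated difference jumps around wildly from study to study; with 500 it settles close to the true value. That scatter is exactly why small studies so easily produce false (or falsely large) results.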

b) when effect sizes are smaller; NA. Actually, most research does not report effect sizes! I even have the impression that many researchers are not aware of the effect size concept. Everyone seems to compute the statistical significance of their results (for example, the group difference between people with PDS and control subjects), but rarely whether those group differences are negligible or large. Crash course in stats: statistical significance tells you how unlikely it is that a result is a mere coincidence rather than a true effect. Effect size tells you how large the effect is. Thus, you can have statistical significance, but the effect might be too small to be interesting.
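A toy illustration of this distinction, with invented numbers (10,000 subjects per group and an assumed true effect of only 0.05 standard deviations): with enough subjects, even a negligible effect can come out statistically significant. One common effect size measure is Cohen's d, where values below about 0.2 are conventionally called "small":

```python
import math
import random

random.seed(0)

# Hypothetical example: two very large groups with a tiny true difference.
n = 10_000
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.05, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

ma, mb = mean(group_a), mean(group_b)
sa, sb = sd(group_a), sd(group_b)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled = math.sqrt((sa ** 2 + sb ** 2) / 2)
d = (mb - ma) / pooled

# Two-sided p-value from a simple z-test on the difference in means.
se = math.sqrt(sa ** 2 / n + sb ** 2 / n)
z = (mb - ma) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Cohen's d = {d:.3f}, p = {p:.4f}")
```

With group sizes this large, the p-value will typically be "significant" even though d stays far below 0.2, i.e. the effect is real but negligible. Reporting only the p-value hides exactly that.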

c) when there is a greater number and lesser preselection of tested relationships; sometimes YES. This situation does happen in some studies, for example when all measurable aspects of a subject are recorded and searched for correlations with the outcome. The more variables you test, the more likely you are to get a statistically significant result purely by chance.
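The arithmetic behind this is simple. If each test has a 5% false-positive rate and the tests are (for the sake of this sketch) assumed independent, the chance of at least one spurious "significant" finding grows quickly with the number of variables tested:

```python
# Family-wise false-positive probability at significance level alpha = 0.05:
# P(at least one false positive in k independent tests) = 1 - (1 - alpha)^k
alpha = 0.05
fwe = {k: 1 - (1 - alpha) ** k for k in (1, 5, 20, 100)}
for k, p_any in fwe.items():
    print(f"{k:3d} independent tests -> P(at least one false positive) = {p_any:.2f}")
```

Already at 20 recorded variables, the chance of at least one chance "finding" is about 64%, and at 100 variables it is nearly certain.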

d) where there is greater flexibility in designs, definitions, outcomes, and analytical modes; big YES. Due to the very nature of PDS, your creativity is the limit. Definitions are vague, e.g. the severity of stuttering, or success in therapy.

e) when there is greater financial and other interest and prejudice; NO and YES. Generally, there are no great financial interests in PDS research, except of course drug trials. Prejudice certainly exists, but not in the sense of ignoring blatantly obvious facts; rather, in falling victim to logical fallacies and groupthink in the absence of blatantly obvious facts.

f) when more teams are involved in a scientific field in the chase for statistical significance; MAYBE.

PDS research has many of these tell-signs. In my next post, I will tell you more about Tom's tell-signs of bad research articles.
