Sunday, October 12, 2008

Another flawed Lidcombe study

And yet another flawed study on Lidcombe. This is especially disappointing because the group is independent of the Australian group around Mark Onslow. The co-author is Barry Guitar, university professor and author of a well-known (and in general well written) textbook on stuttering called Stuttering: An Integrated Approach to Its Nature and Treatment. However, his research and his presentations on temperament and stuttering seem suspect to me. He might be a good clinician, but unfortunately not a very good scientific mind. I would guess the first author is an enthusiastic and bright graduate student who has effectively wasted her or his time on research that has little relevance.

 Am J Speech Lang Pathol. 2008 Oct 9.

Long-Term Outcome of the Lidcombe Program for Early Stuttering Intervention.

University of Vermont.
PURPOSE: To report long-term outcomes of the first 15 preschool children treated with the Lidcombe Program by speech-language pathologists (SLPs) who were inexperienced with the program and independent of the program developers. Research questions were: Would the treatment have a similar outcome with inexperienced SLPs compared to outcomes when implemented by the developers? Is treatment duration associated with pre-treatment measures? Is long-term treatment outcome affected by variables associated with natural recovery?

METHOD: Fifteen preschool children who completed the Lidcombe Program were assessed prior to treatment and at least 12 months following treatment. Pre-treatment data were obtained from archived files; follow-up data were obtained from interviews and recordings completed after the study had been planned.

RESULTS: Measures of stuttering indicated significant changes from pre-treatment to follow-up in percent syllables stuttered (%SS) and Stuttering Severity Instrument-3 (SSI-3) scores. Pre-treatment severity was significantly correlated with treatment time. Handedness was the only client characteristic that appeared to be related to long-term treatment outcome.

CONCLUSIONS: The treatment produced significant long-term changes in children's speech, even when administered by SLPs newly-trained in the Lidcombe Program. Treatment results appear to be influenced by pre-treatment stuttering severity.

Without having read the article itself, I can see several flaws:

1) A sample size of 15 is much too small. Either you do at least 100 or you do not do it at all! The required sample size is especially high here because the natural recovery rate adds large statistical fluctuations.

2) They have not controlled for the natural recovery rate. This makes the results appear much more positive than they really are, because some children will recover naturally within one year anyway and the measured stuttering severity will automatically go down. Here is a simple example. I have 20 kids. Let's assume 10 would have recovered within one year without treatment. Before, they all stutter at 5%. After one year without treatment, only 10 still stutter, and the average stuttering rate is 2.5% (10 kids at 0% and 10 kids at 5%). A short simulation after this list illustrates points 1 and 2.

3) They try to find correlations in data with a very small sample size. Their finding on handedness is most likely a fluke.
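To make points 1 and 2 concrete, here is a minimal simulation sketch in Python. All numbers in it (a 5% pre-treatment stuttering rate, a 50% one-year natural recovery rate) are illustrative assumptions of mine, not values taken from the study:

    import random

    def untreated_group_mean(n_kids, recovery_rate=0.5, pre_ss=5.0):
        """Mean %SS after one year with NO treatment: recovered kids drop to 0%,
        the rest are assumed to stay at their pre-treatment level."""
        post = [0.0 if random.random() < recovery_rate else pre_ss for _ in range(n_kids)]
        return sum(post) / n_kids

    random.seed(1)

    # Natural recovery alone roughly halves the mean %SS (5% -> about 2.5%),
    # so a pre/post drop is NOT evidence that the treatment did anything.
    print("20 untreated kids, mean post %SS:", untreated_group_mean(20))

    # Small samples also fluctuate wildly: compare the spread of the group mean
    # for n = 15 versus n = 100 across many simulated "studies".
    for n in (15, 100):
        means = sorted(untreated_group_mean(n) for _ in range(2000))
        print(f"n = {n}: group mean %SS typically between {means[100]:.2f} and {means[1900]:.2f}")

The point is not the exact numbers but the pattern: the untreated mean drops on its own, and with 15 children the group mean bounces around far more than with 100.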

But interestingly, the abstract seems to suggest that some kids are still dysfluent (I would have to read the article, which costs money to access). The existence of dysfluent kids is not affected by statistics. So we can say that Lidcombe is not the cure it was claimed to be. In fact, I have heard from many other therapists that some kids do not become fluent.

9 comments:

Anonymous said...

Just asking for more of your view -

If the Aussie experts say Lidcombe is the cure, and the American experts concur, is it a cure?

Isn't Geetar one of the SFA sponsored experts? Have you ever contacted Jane Frasier of the SFA to get her expert opinion?

Dysfluency vs. Stuttering...What is that all about?

15 kids? What is up with that size research group?

Wouldn't you think natural recovery would figure into the cure research?

Isn't info re: handedness really really old school?

Why do you think they waste so much effort & time with research that has little relevance?

Greg said...

Hey Tom--I know we've gone round and round on this, but when working with a disordered population, large n studies really aren't possible. So it's really a matter of doing the best with what's available. If I can get 12 where I live, I'm entirely thrilled. If the study's statistical power is reasonable relative to the effect size, one can begin to make inferences--provided the design is well controlled. (Your comment about handedness is right on target, in this area.)
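To put rough numbers on that trade-off: a back-of-the-envelope sketch using the standard normal-approximation formula for comparing two groups; the effect sizes below are the conventional Cohen benchmarks, not values taken from any stuttering study.

    # Approximate per-group sample size for a two-group comparison,
    # two-sided alpha = 0.05 and power = 0.80:
    # n per group ~= 2 * ((z_alpha/2 + z_beta) / d)^2
    Z_ALPHA, Z_BETA = 1.96, 0.84  # standard critical values

    def n_per_group(d):
        return 2 * ((Z_ALPHA + Z_BETA) / d) ** 2

    for d in (0.2, 0.5, 0.8):  # Cohen's "small", "medium", "large" effects
        print(f"effect size d = {d}: about {n_per_group(d):.0f} children per group")

With 15 children in a single arm, only a very large and very stable effect has a realistic chance of being detected, and a moving baseline from natural recovery eats into even that.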

The best option available to us is the replication of studies with different sub-populations (age, gender, handedness, severity, etc...)

To me, it all comes down to what Bloodstein said--which I will paraphrase... Pretty much any therapy will improve stuttering. Anything. Anything that anyone creates on the spur of the moment will improve stuttering.

This, to me, suggests that we still have such a superficial view of the pathology that we've yet to really understand why what we do may have a temporary effect. We're still bumbling around in an unlit room, blindly groping at things without a frame of reference while trying to make interpretations.

Answers to stuttering will be found in other, non-SLP disciplines: either in genetics or once the brain is successfully mapped at the cellular level.

Tom Weidig said...

1) You need a much larger sample size because the population is NOT stable. If you observe them for one year without any treatment, many will recover or will fluctuate significantly in fluency.

You CANNOT use the standard statistical concepts, because the baseline is moving!

You would see this very clearly if you had a control group of 20 kids (see the sketch at the end of this comment).

2) "it's really a matter of doing the best with what's available". No, no, no. I hear this argument times and times again. And I say either you have a decent size or you do not do it at all, because whatever you do below a certain sample size has no value.

Example: if you do not have enough water to save your flower, it is pointless to give it some water. Of course, in other situations giving a bit but not enough is better than nothing. But in research this is not the case.
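As a sketch of why the control group matters (again with made-up numbers: assume natural recovery alone takes 50% of kids to fluency within a year, and the treatment adds some extra effect on top):

    import random

    def group_mean_ss(n, recovery_prob, pre_ss=5.0):
        """Mean %SS after a year: each child either recovers (0%) or stays at pre_ss.
        recovery_prob bundles natural recovery plus any genuine treatment effect."""
        return sum(0.0 if random.random() < recovery_prob else pre_ss
                   for _ in range(n)) / n

    random.seed(2)
    n = 20
    control = group_mean_ss(n, recovery_prob=0.5)  # natural recovery only
    treated = group_mean_ss(n, recovery_prob=0.7)  # assumed extra treatment effect
    print(f"control mean %SS: {control:.2f}, treated mean %SS: {treated:.2f}")

Without the control arm, the treated group's drop from 5% looks impressive; the honest effect is only the treated-versus-control difference, and with 20 kids per arm that difference is often swamped by random fluctuation.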

Tom Weidig said...

Malcon:

>> If the Aussie experts say Lidcombe is the cure, and the American experts concur, is it a cure?

You are a strong believer in "experts". It sounds as if "experts" are like the pope: their word is truth.

1) I know other experts who disagree with the statement.

2) They are experts in stuttering but not in statistics.

3) A recent paper showed that there are relapses. So it is not a cure.

>> Isn't Geetar one of the SFA sponsored experts? Have you ever contacted Jane Frasier of the SFA to get her expert opinion?

Again, when you say expert, what do you mean? I have a PhD in theoretical physics, so I would say that I am far more of an expert in statistics than any clinician.

They are reaching beyond their area of expertise.

>> Dysfluency vs. Stuttering...What is that all about?

No difference.

>> 15 kids? What is up with that size research group?

It is too small to tell us anything about the general population.

>> Wouldn't you think natural recovery would figure into the cure research?

Yes, you would, but it is not done at present.

>> Why do you think they waste so much effort & time with research that has little relevance?

Because their sample size is too small to say anything meaningful.

Greg said...

Tom--we're on the same team. I'm just saying that it's next to impossible for the average unfunded researcher to do stuttering research to your specifications. So from that perspective, I guess I should just quit and become a copier repairman. I hear what you're saying, and I understand (and appreciate) the scientific perspective. But in all honesty, large n studies in disordered populations are simply out of reach for most stuttering researchers.

So we've got two realities. One of scientific rigor; the other of practical logistics.

So what should I get into as my new profession? Long-haul truck driver? Bus driver? Cat-fish farmer? I'll take suggestions over at StutterTalk.com.

Tom Weidig said...

Hi,

I am sure you would be an excellent cat-fish farmer! ;-)

Or, you just focus on clinical work.

If I were you, I would focus on research projects that do not require a large sample size: for example, theoretical work (bringing all the data together), collaborating with other researchers so that together you have a larger sample, or focusing on adults, who require smaller sample sizes. Or look at interesting case studies.

Well, if you do not have the necessary funding, I say that you have not convinced people that you have the ability to do good research. The reason is either that you do not have this ability, that you are unable to signal that you have it, or that they lack the ability to judge your ability correctly!

Actually, I would argue that low sample size research is SERIOUSLY HARMING the research field.

Best wishes,
Tom

Anonymous said...

After much pondering -

Other Employment opportunities for the average underfunded researcher?

Outhouse Cleaner - because the average underfunded researcher is already putting out a load of crap.

A Garbage Collector - because the average underfunded researcher is already producing a ton of garbage.

Anonymous said...

"A sample size of 15 is much too small. Either you do at least 100 or you do not do it at all!"

If clinical researchers adopted this (your) attitude, it would preclude all possibility of conducting studies that compile results from several different sources into meta-analyses. For example, Kingston et al, (2003), 'Predicting treatment time with the Lidcombe Program: Replication and meta-analysis', International journal of language and communication disorders, Vol 2.

Tom Weidig said...

"If clinical researchers adopted this (your) attitude, it would preclude all possibility of conducting studies that compile results from several different sources into meta-analyses. For example, Kingston et al, (2003), 'Predicting treatment time with the Lidcombe Program: Replication and meta-analysis', International journal of language and communication disorders, Vol 2."

This is a weak argument:

1) Meta-analyses are tricky in themselves.

2) You still have the issue that the individual trials are completely useless due to their small sample size. So should we write, "Please ignore this article due to its small sample size and only use it for a meta-analysis"?

3) You have to handle publication bias: "unsuccessful" trials are very hard to publish, so you cannot include null results in the meta-analysis (see the sketch below).
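Here is a minimal sketch of point 3. It is purely illustrative: I assume the true treatment effect is exactly zero and that only "successful" (positive and significant) small trials get written up:

    import random, statistics

    def small_trial(n=15, true_effect=0.0):
        """One simulated trial: per-child change scores with a true effect of zero."""
        changes = [random.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.mean(changes)
        se = statistics.stdev(changes) / n ** 0.5
        significant = abs(mean / se) > 2.1  # crude ~5% two-sided cut-off
        return mean, significant

    random.seed(3)
    trials = [small_trial() for _ in range(1000)]
    published = [m for m, sig in trials if sig and m > 0]  # publication bias
    print(f"all 1000 trials, mean effect:       {statistics.mean(m for m, _ in trials):.2f}")
    print(f"published trials only, mean effect: {statistics.mean(published):.2f}")

Pooling only the published trials produces a clearly positive "meta-analytic" effect even though, by construction, there is no effect at all.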

It would be much better to run a more robust trial with 100 kids to ensure reliable statistics. AND THERE ARE PEOPLE WHO DO THIS AND DO NOT MESS IT UP.

Regarding the article quoted: I have never read it, but how can you predict treatment time given that natural recovery would also lead to recovery? It would be statistically very difficult to do this. Moreover, they are not testing Lidcombe itself, because it is NOT a randomized controlled trial. Placebo effects or generic aspects of treatment could well produce similar results.