Monday, May 18, 2009

Selection bias in trial?

>> I was not allowed to participate because I didn't stutter enough during the videotaped speaking portion. 

Some readers are telling me that they were not allowed to register because they did not stutter at the evaluation session. How will this affect the trial?

First of all, I speculate that there are two types here. Type I stutters very mildly even when they do stutter, but the impact may be mostly on the psychological-pain side: no issue to the outside world, only to themselves. Type II stutters or blocks in some situations but was simply very fluent at the evaluation; in those situations the outside world classifies them as stutterers.

I am concerned about the following statistical bias. Assume Type II fluctuates a lot in fluency; say they stutter 50% of the time and are fluent 50% of the time (50/50 is easier to compute than, say, 20/80, where the trial would drop 80% of them). All accepted Type II stuttered at registration, but at the evaluation after the trial, 50% will stutter and 50% will not, purely by chance. So half of them show gains in fluency that do not come from the medication. Because we have a control/placebo arm, both arms see these gains: more apparent success in the placebo arm and the treatment arm alike. This effect would be balanced out if we included the rejected Type II, because half of them would stutter at the final evaluation, registering a loss in fluency in both arms, while the other half would show no change. So to conclude: by dropping the stutterers who happened to be fluent at screening, they inflate both the placebo and the treatment effect.
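The argument above is regression to the mean, and a toy simulation makes it concrete. This is a minimal sketch under my assumptions (a made-up 50/50 stutter rate, independent days, and zero medication effect), not a model of the actual trial:

```python
import random

random.seed(1)

N = 100_000          # simulated "Type II" stutterers
P_STUTTER = 0.5      # assumed chance of stuttering on any given day

def day():
    """1 = stutters at an evaluation, 0 = fluent that day."""
    return 1 if random.random() < P_STUTTER else 0

screening = [day() for _ in range(N)]
endpoint  = [day() for _ in range(N)]   # no medication effect at all

# The trial enrolls only those who stuttered at screening.
enrolled = [(s, e) for s, e in zip(screening, endpoint) if s == 1]

# Mean change (endpoint - screening); negative = apparent improvement.
change_enrolled = sum(e - s for s, e in enrolled) / len(enrolled)
change_everyone = sum(e - s for s, e in zip(screening, endpoint)) / N

print(f"apparent change, enrolled only: {change_enrolled:+.3f}")  # about -0.5
print(f"apparent change, everyone:      {change_everyone:+.3f}")  # about  0.0
```

With no drug at all, the enrolled group "improves" by roughly half a point on average, in both arms, while the full population shows no net change, which is exactly the inflation described above.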


Anonymous said...

Hey Tom (and the guy who wrote the question),

This is standard practice in stuttering research. And Tom pretty much nails it; it's for statistical reasons.

Stuttering is such a variable disorder that trying to quantify it is an exercise in futility. So you have to show a treatment effect that is larger than the noise (i.e., the day-to-day variation in stuttering frequency).

Further, a lot of people operationally define the stuttering phenomenon as the production of 3% or more moments of "stuttering" during speech.

So in short, if patients with mild overt severity are included in stuttering research, it screws up the statistics because the treatment effect is lost within the variability that *is* stuttering. It stinks, but it's our reality. So as a result, I (along w/ many others) use a 3% overt stuttering frequency inclusion criterion--which is enough to show a reliable treatment effect, should one exist.
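The 3% criterion described here is just a ratio over a speech sample. A minimal sketch of how such a screen might be computed (function names, sample counts, and the per-syllable counting scheme are mine for illustration, not from any actual study protocol):

```python
# Toy illustration of a 3% stuttered-syllable inclusion criterion.

def percent_ss(stuttered_syllables: int, total_syllables: int) -> float:
    """Percent of syllables stuttered in a speech sample."""
    return 100.0 * stuttered_syllables / total_syllables

def meets_inclusion(stuttered: int, total: int, threshold: float = 3.0) -> bool:
    """True if the sample meets the overt-severity inclusion criterion."""
    return percent_ss(stuttered, total) >= threshold

print(meets_inclusion(12, 300))  # 4.0% SS -> True, included
print(meets_inclusion(5, 300))   # ~1.7% SS -> False, excluded as "too mild"
```

The second caller illustrates exactly the complaint in this thread: a speaker at roughly 1.7% stuttered syllables on screening day is turned away, however severely they stutter on other days.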


Anonymous said...

It is possible for a very mild stutterer to become a severe stutterer in some situations or later in life. Some mild stutterers used to be severe. So they should be included in the research.

So.... Greg, you know it is wrong (and you are telling us it is wrong) and you are still doing it. E.g., speed limits...

Do as I say, don't do as I do.

>I (along w/ many others) have a 3% overt stuttering frequency inclusion criteria--which is enough to show a reliable treatment effect, should one exist.

Anonymous said...

Hi Anonymous Coward,

Your inner turmoil is causing you to miss the point. People w/ less than 3% stuttered syllables aren't being excluded from treatment; it's just counter-productive to include them in studies with certain types of statistical analyses.

The variable nature of stuttering leaves us no alternative *but* to have some kind of inclusion criterion in certain types of study designs.

If we didn't use a reasonable 3% stuttered-syllable inclusion criterion, then nearly all of our studies would have significantly reduced power, thereby obscuring any treatment effects. What little progress there *has* been would be further reduced.

The only way to get around this is to have HUGE samples, so that the statistical regression effect averages out. But this is also impossible, given that stuttering subjects are so tough to come by, with their 1% (or less) societal prevalence.

These are just studies... If/When treatments come out of the research phase, they'll be made available to all.

I've long said that the measurement of stuttering behaviors is a goofy way to quantify the stuttering phenomenon. There are *way* too many intervening variables that have a temporary influence over overt stuttering behaviors. The way to do it is w/ a 24/7 stuttering monitor, as suggested by Tom and Dr. Maguire. (However, this would be really tough to implement as well.)


Pam said...

I remember when I went to NIH in 2006 to participate in a PET scan study and some kind of tongue test. They paid for me to fly there, put me up in a hotel, meals, everything. I registered when I arrived and did all I was supposed to do.
When I got to NIH on the morning of my study, after the eval, they said I wouldn't be able to participate after all because I didn't stutter enough. (I had been screened over the phone and stuttered plenty, like I always do on the phone. Part of the complexity of stuttering, is it not?) I was floored, but went along with it. They were the experts, after all.
I went down to the cafe, ate lunch, and began to see about getting an earlier flight.
Then I was paged to come to the lab.
Turns out, they had re-read the protocol and decided I did fit the criteria after all. So I did all the stuff, and they said I could do everything except the tongue sensor thing. Which was fine with me, because it sounded weird anyway.
Because of all this, I missed my van back to the airport, and they had to call a cab and pay like triple to get me to the airport in time.
I went back once more, again all expenses paid, and after a pregnancy test and all the PET pre-stuff, the scanner was broken.
I qualified for the Pagoclone trial, but because it required 8 office visits, and the closest site to me is 3 hours away and not open on Saturdays, I declined.

Anonymous said...

Greg, if attacking me and labeling me as a Coward makes you feel good, then so be it.

If you said the way to measure stuttering is FLAWED, then why are you still doing it? (Important question.) If it is wrong (you admitted it), then you are contributing to bad research. That is the bottom line: it is like falsifying data, a big NO-NO. If the method is wrong, then whatever data or conclusions you get are worth ZERO (can't be used).

How about recording a stutterer and a fluent control 24/7 and using a software program to analyze the speech samples? (Lots of data, but that's the power of computers.)

Go do it! Stuttering researcher = Not a real Scientist.

What are your qualifications as a Stuttering Scientist? Tom has a PhD in physics; what science classes did you take in high school or college (psychology??)

What new ideas have you contributed recently?

>I've long said that the measurement of stuttering behaviors is a goofy way to quantify the stuttering phenomenon. There are *way* too many intervening variables that have a temporary influence over overt stuttering behaviors.

Anonymous said...

"Anonymous Coward" is a term that originated on tech bulletin boards. Secondly, I have no respect for people who make snide and derogatory comments about others and aren't man enough to disclose their identity.

While on the tenure track, the number of publications is important, and there's literally not enough time to reinvent the wheel with new ways to quantify the stuttering phenomenon. Those kinds of major advances are designed for after tenure. So talk to me in 18 months...

You're challenging my knowledge; if you're really interested, you'll do a simple lit search and find at least 5 peer-reviewed pubs since Jan., 2 of which detail legitimately new findings in the stuttering-enhancement phenomenon that have never previously been documented.

Anonymous said...

Wait, what happens when people's publications get rejected by anonymous reviewers?

>Secondly, I have no respect for people who make snide and derogatory comments about others and aren't man enough to disclose their identity.

A double standard, don't you think? Give me some time to read your publications... don't waste my time...

Will check back with you in 18 months. You are just publishing so you can get tenure, correct?

Will talk to you again in Dec. 2010. Will shut up now....

Anonymous said...


Seriously, the conclusion was too short, don't you think?

4.7. Summary and conclusions
Data from this study reveal that self-generated visual feedback, in either its synchronous or asynchronous forms, significantly enhanced fluency in those who stutter. Accounting for this phenomenon is difficult in that competing theories of fluency enhancement, such as the engagement of mirror neuron networks (Kalinowski & Saltuklaroglu, 2003a, 2003b), the EXPLAN model (Howell, 2002), and the DPSH (Alm, 2004, 2005; Snyder, 2004), appear to have theoretical frameworks that are potentially capable of integrating these data into their models of stuttering and fluency enhancement. Clearly, further research is needed to clarify the nature of stuttering, speech feedback, and fluency enhancement.

Eric said...

I just left a reply in the article a few down from here, but I, as well, did not qualify, since I didn't stutter enough.

I just happened to be very fluent that day, and when I did have an issue, it was a brief block, which is more difficult to quantify than a repetitive stutter.

I was a little disappointed, but oh well. At least I gave it a shot.

Anonymous said...

Stuttering is measured as a syndrome with covert and overt components. We tend to measure the overt aspects of the disorder while gathering experiential information about changes in avoidances, substitutions, ease of speech, speech naturalness, etc. There tends to be a high correlation between the overt and covert aspects of stuttering, so many assume a linear relationship in these experiential components of stuttering.

While these aspects tend to covary in many, this is not always the case.

The measurement of stuttering via syllable counts is a crude approximation of stuttering severity. No one would debate that with much vigor. The greater problem is not the exclusion of mild stutterers from research, but measuring, in a reliable and valid manner, stuttering as experienced by the PWS and the listener.