Monday, January 10, 2011

Looking at failures yields information

I suggested that we should look at failed cases as a quick and cheap way to get a sense of the efficacy of the Lidcombe Program.
You're going to have a biased sample. If you want to determine the effectiveness rate, you'd need to ask for parents of children who've been treated with Lidcombe Program - regardless of their outcome. THEN see how many are still stuttering.
At the moment your methodology is like saying you want to hear from people who've had a recurrence of cancer, after a course of chemotherapy, and concluding that chemotherapy is an ineffective treatment.
Bias is not an issue, because I don't look at the global population.

If people die or relapse after chemotherapy, that says something about the chemotherapy. And the more such stories I hear, the more concerning it becomes. I never hear of someone dying from a nose job or an appendix removal. There IS information in those stories.
 
If Lidcombe were 100% effective, no treated child would still be stuttering. So if I find one such case, or a few, Lidcombe cannot be 100% effective.

Someone once commented, I think it was Peter Reitzes, that adult stutterers are going to die out in Australia! We shouldn't see any stuttering teenagers any more.

Moreover, we can look at those cases and check whether they are normal cases, i.e. the parents followed the instructions and the therapist was well trained. If such cases still fail, we seriously need to ask whether the treatment is effective.

I am just saying that this is a cheap way to do proper research: find and examine the failures. Therapist-researchers mostly do research on what is successful.

Moreover, if I find 10 failed cases, I should have at least 90 successful cases. We then have a success rate of 90%, which is a bit above natural recovery.

5 comments:

Anonymous said...

"Bias is not an issue, because I don't look at the global population."

Bias will be an issue for you SPECIFICALLY because you are asking for a sample, and not assessing the global population. To quote Wikipedia on the subject of bias: "In statistics, bias is systematic favoritism present in data collection, analysis or reporting of quantitative research." You are collecting data, and therefore you'll need to address the issue of bias. In this case, you are specifically asking for cases of failed Lidcombe Program. Therefore, your sample will be biased towards children who had an outcome of failed therapy.

"If the people die or relapse after chemotherapy, it says something after the chemotherapy. And the more stories I hear, the more concerning. I never hear that someone dies of a nose job or appendix removed. There IS information is that information."
I'm unsure what you're trying to demonstrate with this paragraph. It is irrelevant that the analogy used chemotherapy - the discussion is on research methodology in health sciences.

"If Lidcombe is 100% effective, no kid will be stuttering any more. So if I find someone or a few, Lidcombe cannot be 100% effective."
You are positing that it has been claimed that Lidcombe Program is 100% effective. You will find no published research that says this. So you are wasting people's time asking for cases of failed Lidcombe Program when it is already established that Lidcombe Program is not 100% effective.

"Someone once commented, I think it was Peter Reitzes, that the adult stutterers are going to die out in Australia! We shouldn't have any stuttering teenagers any more."
Again, I'm unsure what you're trying to achieve with this paragraph. In any event, as stated before, it is known that Lidcombe Program is not 100% effective, so of course there will be teenagers who stutter. Lidcombe Program aside, teenagers could also stutter due to late onset (i.e. they didn't start stuttering until they were a teenager) or because treatment was not sought in the pre-school/school-age years.

"Moreover, we can look at those cases, and look whether they are normal cases, i.e. parents were following instructions, and therapist was well-trained. If we have such cases who fail, we seriously need to ask whether it is effective."
Please define a 'normal case'. This will be important when analysing your data. It needs to be defined BEFORE the data is analysed, otherwise the parameters of your analysis could be adapted to suit the outcome you're hoping for.

"I am just saying it's a cheap way to do proper research. Find and look at the failure. Therapist researchers mostly do research on what is successful."
The methodology you describe is not what is commonly accepted as 'proper research' by anyone's standards - including your own, as I've been given to believe from reading your blog. 'Proper research', at least in quantitative studies/analysis, involves a large sample size - I do not think 10 children qualifies as large. You need to get hundreds, or it does not demonstrate anything other than "here are some people for whom Lidcombe Program failed". It has already been demonstrated that Lidcombe Program can fail. So this will not add anything new to the body of research. Additionally, the sample will be biased (as discussed above) - not proper research once again.

"Moreover, if I find 10 failed cases, I should have at least 90 successful cases. We then have a success rate of 90%, which is a bit above natural recovery."
I don't understand how this fits into your current methodology. If you are only asking to hear from failed Lidcombe Program cases, how will you determine if there are 90 successful cases out there? How will you define a successful case? How will you define a failed case?

Would love to hear your thoughts.

Ora said...

Moreover, if I find 10 failed cases, I should have at least 90 successful cases. We then have a success rate of 90%, which is a bit above natural recovery.

I don't understand your logic: "I should have at least 90 successful cases". That depends upon the rate of success in treatment, obviously. But the success rate is an unknown.

How are you assuming a population of 100 when you're looking at a sample of 10? Why not assume a population of 1000? Then you could say "If I find 10 failed cases, I should have at least 990 successful cases, for a success rate of 99%." You can find any success rate if you just make arbitrary assumptions.

I don't get it.

You're certainly right that you can disprove the claim that Lidcombe is 100% effective if you find even one instance of failure. But I don't see how that method can quantify the success rate.

Tom Weidig said...

@Ora:

What I meant is that if I find 10 failed cases, then there MUST be at least 90 successful cases for there to be a success rate of 90% or more.

I never said that the method can quantify the success rate; for that, one would need to look at an unbiased sample.

Tom Weidig said...

@Anonym:

>> you are specifically asking for cases of failed Lidcombe Program. Therefore, your sample will be biased towards children who had an outcome of failed therapy.

My sample is not biased towards those children... it IS the children who had an outcome of failed therapy.


>> "If the people die or relapse after chemotherapy, it says something after the chemotherapy. And the more stories I hear, the more concerning. I never hear that someone dies of a nose job or appendix removed. There IS information is that information."
I'm unsure what you're trying to demonstrate with this paragraph. It is irrelevant that the analogy used chemotherapy - the discussion is on research methodology in health sciences.

I am saying that the more stories of failure I hear, the more suspicious I become about a treatment that claims to be very effective.



>>> "If Lidcombe is 100% effective, no kid will be stuttering any more. So if I find someone or a few, Lidcombe cannot be 100% effective."
You are positing that it has been claimed that Lidcombe Program is 100% effective. You will find no published research that says this. So you are wasting people's time asking for cases of failed Lidcombe Program when it is already established that Lidcombe Program is not 100% effective.

1) I don't. I am just saying that if we find one case, we know it is not 100% effective.
2) Lidcombe does claim that the treatment is very effective (I remember that 2-3 years ago I heard Onslow say that all recovered), so finding failed cases is an indication that its effectiveness has limits.


>>>"Someone once commented, I think it was Peter Reitzes, that the adult stutterers are going to die out in Australia! We shouldn't have any stuttering teenagers any more."
Again, I'm unsure what you're trying to achieve with this paragraph. In any event, as stated before, it is known that Lidcombe Program is not 100% effective, so of course there will be teenagers who stutter. Lidcombe Program aside, teenagers could also stutter due to late onset (i.e. they didn't start stuttering until they were a teenager) or because treatment was not sought in the pre-school/school-age years.

Yes, there could be teenagers with late onset, and those who never received therapy. But we should still see a clear dent in the number of adults who stutter.


>>>"Moreover, we can look at those cases, and look whether they are normal cases, i.e. parents were following instructions, and therapist was well-trained. If we have such cases who fail, we seriously need to ask whether it is effective."
Please define a 'normal case'. This will be important when analysing your data. It needs to be defined BEFORE the data is analysed, otherwise the parameters of your analysis could be adapted to suit the outcome you're hoping for.

A normal case is one where the parents are cooperative and work with their kids, and there are no additional developmental or psychological issues.

Tom Weidig said...

>>> "I am just saying it's a cheap way to do proper research. Find and look at the failure. Therapist researchers mostly do research on what is successful."
The methodology you describe is not what is commonly accepted as 'proper research' by anyone's standard's - including your own, as I've been given to believe from reading your blog. 'Proper research' at least in quantitative studies/analysis involves a large sample size - I do not think 10 children qualifies as large. You need to get hundreds, or it does not demonstrate anything other then "here are some people for whom Lidcombe Program failed". It has already been demonstrated that Lidcombe Program can fail. So this will not add anything new to the body of research. Additionally, the sample will be biased (as discussed above) - not proper research once again.

My aim is not to get a percentage of recovery. My goal is to study the failed cases. The more there are, the less likely it is that the intervention is truly effective. Moreover, if an analysis of the failed cases shows that the procedures were properly followed and no other issues were present, then the intervention method did not work, which points to weaknesses in the method itself. If we find that many of the cases failed because parents did not work with their kids, or because of other issues, we can conclude that the problem lies in the implementation.


>>> "Moreover, if I find 10 failed cases, I should have at least 90 successful cases. We then have a success rate of 90%, which is a bit above natural recovery."
I don't understand how this fits into your current methodology. If you are only asking to hear from failed Lidcombe Program cases, how will you determine if there are 90 successful cases out there? How will you define a successful case? How will you define a failed case?

No, I am saying that there must be at least 90 successful cases for the 10 failed ones, because otherwise the recovery rate is lower than 90%, and that is hardly an effective therapy.
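
To make the arithmetic behind that lower bound explicit, here is a minimal sketch (Python; the function name, and the 90% threshold and 10 failures as inputs, are purely illustrative choices of mine, not figures taken from any study):

    def min_successes(failures, claimed_success_rate):
        # If the true success rate is at least claimed_success_rate,
        # then failures can make up at most (1 - claimed_success_rate) of all treated cases.
        # So the total number of treated cases is at least failures / (1 - claimed_success_rate),
        # and the number of successes is at least that total minus the failures.
        min_total = failures / (1.0 - claimed_success_rate)
        return min_total - failures

    # 10 observed failures under a claimed success rate of at least 90%
    # imply at least 90 successful cases must exist.
    print(min_successes(10, 0.90))  # prints 90.0 (up to floating-point rounding)

The same arithmetic shows why finding many failed cases matters: every additional failure raises the minimum number of successes needed to keep a high claimed success rate plausible.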