Tuesday, October 28, 2008
What is coming up?
Not much is happening at the moment, but here are a few future events that should happen at some point. In genetics, Drayna's team is still working on tracking down genes in stuttering families and in the stuttering population; I would expect results within the next year. In brain imaging, we should soon hear more from Ingham and Fox's team, from the groups in Oxford (Watkins) and in Paris, from the Finnish group with their new scanning technology, and from others. In treatment trials, Franken's group is comparing Lidcombe with an alternative, demands and capacities treatment. They now have 66 kids registered and are aiming for 100 or more, I believe, but I would guess it will take more than a year before the first results. Ingham's group is also running a trial of their treatment for adults, and Onslow's group is running a study on kids before they start stuttering.
Sunday, October 26, 2008
Placebo or not?
Ora from New York City sent me an interesting article on the use of placebos by doctors: see here. The article explains that many doctors are using placebos (pills without any effect on the condition) to improve patients' well-being by exploiting the placebo effect. The placebo effect is quite powerful. For example, take 900 patients with chronic headaches, divide them into three groups, and give them either nothing, a pill that does not contain an active compound, or Aspirin/Paracetamol. The winner is Aspirin/Paracetamol, but second place is not shared: it is won by the placebo pill, even though it contains no active compound. How is this possible? Here are possible explanations: you convince yourself that you feel better and override your body's feedback (a bit like putting your headphones on mute for 20 seconds when your girlfriend or parent launches into a tirade! :-), or you feel better and more confident and your brain releases pain-killing neurotransmitters (putting yourself on natural painkillers), and so on. Placebo works better for some conditions than for others.
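As a rough illustration of this three-arm comparison, here is a minimal simulation sketch; the improvement probabilities are assumptions chosen only to mimic the ranking described above, not data from any real study:

```python
import numpy as np

# Minimal sketch of a three-arm headache comparison. The response probabilities
# are assumptions for illustration only, not data from any actual trial.
rng = np.random.default_rng(0)

n_per_arm = 300
p_improve = {
    "no treatment": 0.30,          # assumed spontaneous improvement
    "placebo pill": 0.45,          # assumed placebo response
    "aspirin/paracetamol": 0.60,   # assumed active-drug response
}

for arm, p in p_improve.items():
    improved = rng.binomial(1, p, n_per_arm).sum()
    print(f"{arm:22s}: {improved}/{n_per_arm} patients report improvement")
```

Run it and the placebo arm reliably lands between the other two, which is the whole point: an inert pill still beats doing nothing.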
An interesting challenge is the following: if placebos work so well, should doctors not use them, i.e. tell their patients that a pill works, and then it works? Most alternative medicine probably works through the placebo effect: you go to the practitioner, s/he talks to you, makes you feel better, and then gives you a placebo which s/he and you of course genuinely believe is working. And in the end it really helps. That's the paradox.
How about stuttering? There is a clear placebo effect in the drug trials, and I am convinced it also plays a role in altered auditory feedback devices and in conventional speech therapy. So does this mean that we should not use them? Someone could argue: well, they give more fluency, so who cares that it is placebo! My answer would be: yes, there are short-term fluency gains, but not long-term ones. And that's the key issue: placebo works well in the short term.
Saturday, October 25, 2008
Not much happening
Not much has happened over the last months that has increased our understanding of the stuttering brain. As I mentioned before, scientists are hitting the complexity barrier with the new research avenues, namely brain imaging and genetics. The easy part has been done. It is one thing to put someone in a scanner and report functional or structural differences, but it is quite another to devise an experimental setup that can falsify or confirm a theory of stuttering. The same is true for genetics: we now know that genes are involved in many cases of stuttering, and we have even located the chromosomes in some cases. But even once we know the genes, we again hit the complexity wall; it's as if we know the killer but not whom he (or she! :-) killed and why. In fact, very few scientists (and I am talking about the professional ones, not clinicians-turned-researchers) are well equipped to handle this situation. Many are trained to work well within the experimental paradigm (i.e. how to find the genes, or how to scan and interpret the findings), but stuttering is muddy territory where you need to fine-tune your methods to the idiosyncrasies of stuttering.
Tuesday, October 21, 2008
Lidcombe treatment of choice? ROUND II
And again on the Lidcombe trials, at Kuster's ISAD site, on the question/answer page of Susan Block's article. Ann Packman, co-author of the Lidcombe trials, writes:
Hi Sue, nice article. I would like to clear up some misconceptions that have been posted about the Lidcombe Program randomised control trial. The trial was reported by Jones et al. (2005) in the British Medical Journal. There was a significant treatment effect after 9 months, compared to the no-treatment control group. The study was conducted according to CONSORT guidelines (see http://www.consort-statement.org/), which specify the appropriate methods and analyses for reporting trials in medical journals. They have been in existence for over 15 years. The study is replicable, as is the Lidcombe Program. As for the 5-year follow up study (Jones et al. 2008) of the children in this trial, it is indeed the case that three of the children were found to be stuttering again, after at least two years of fluency. This tells us that: (1) For these children the initial improvement in stuttering was apparently due to the treatment, not natural recovery (2) These children were at least spared the social penalties of stuttering for some of the early school years (3) At time of discharge from treatment, SLPs need to advise parents to be vigilant in the long term and to contact a SLP and/or re-instate treatment at the first signs of the re-appearance of stuttering (4) Further research is needed to develop better ways of maintaining Lidcombe treatment effects. Without this long-term follow up study, we would not have this important new knowledge about the nature of stuttering and about the need to work to further improve Lidcombe outcomes. Ann
Susan Block replies:
Hello Ann, thank you for this response. Your comments show exactly how attention to scientific principles facilitates the evidence base for our profession - but also how they can frustrate some people!
Does she actually refer to me, or to others who are trained scientists??? Anyway, I reply:
Most people are NOT frustrated by attention to scientific principles. I am frustrated by the poor application of scientific principles to therapy outcome research: conflicts of interest (proving your own treatment), passive understanding and robot-like application of statistics, leaving out subtleties, and repeating statements and deferring to authority instead of engaging with counterarguments when challenged on the strength of evidence. I will show that EVERY single one of Ann's sentences (written to "clear up misconceptions", in her words) is inaccurate. (1) "the Lidcombe Program randomised control trial". Let's be clear about which kind of RCT it is. It is not a double-blind RCT, the highest standard, which allows one to check whether it is the treatment itself that is successful. It is an open-label trial, with the big disadvantage that even if the treatment arm shows a higher success rate, you cannot say whether it was the placebo effect (the fact that the kids/parents had treatment), a generic feature of ALL early intervention treatments (like parent-child interaction, easing parents' stress, adaptation to the treatment setting), or actually a Lidcombe-specific feature. So you are NOT actually testing Lidcombe specifically but the whole package (placebo, generic and specific)! Moreover, the randomisation was broken after 9 months and was not present in the long-term data. And let's note that the 9 months are measured from the start of the treatment and NOT from the end of the treatment. Finally, the sample size was too low for randomisation to equalise the two groups, as I discuss in my rapid response to Jones 2005 in the BMJ. These arguments were confirmed and mentioned by Roger Ingham's group, as he told me when I met him. So calling it a Lidcombe RCT looks very scientific but is really a misnomer! (2) "The trial was reported by Jones et al. (2005) in the British Medical Journal." It is irrelevant whether it appears in the BMJ or anywhere else. It does not add to the debate, and only fallaciously implies "the BMJ is a really good journal, so the trial must be really sound". Moreover, you do not mention that I wrote a rapid response in the BMJ criticising the statistics; if you think that I, as a PhD in physics, have no clue, you could at least mention the other critical feedback. (3) "There was a significant treatment effect after 9 months, compared to the no-treatment control group." As I said, the statistics are wrong. And again, 9 months after the start of the treatment, not 9 months after the end of the treatment. I just re-read the article, and you write that the kids are still in treatment! The relevant time period starts at the end of treatment. ANY behavioural therapy will produce gains: diets, drugs, giving up smoking. The important part is the relapse. (4) "The study was conducted according to CONSORT guidelines (see http://www.consort-statement.org/), which specify the appropriate methods and analyses for reporting trials in medical journals." First, these guidelines are for standard situations, but early intervention is very different, because natural recovery distorts the statistics and you therefore need many more kids to create truly balanced groups via randomisation. You stopped at 47 kids rather than the 100 which would have improved the statistics dramatically. In fact, your own design called for 100. Why? Second, even if the guidelines are correct, it does not follow that they were implemented correctly!
Kids dropped out, you gave up the control group, you changed the sample size. (5) "They have been in existence for over 15 years." That is symptomatic of bad thinking, i.e. deference to authority. I do not care how many years something has been in existence; I only care about the strength of arguments. To show you how strange this is, I could equally argue: well, if it is 15 years old, it is too out-dated and should not be trusted! You might convince non-scientists, but you cannot conduct a debate with such pseudo-arguments. (6) "As for the 5-year follow up study (Jones et al. 2008) of the children in this trial, it is indeed the case that three of the children were found to be stuttering again, after at least two years of fluency." Again this sounds very respectable, but I have actually read the article (unlike most therapists). It is a disaster. The MAJORITY of the kids could not be contacted any more. Why? Or did someone not contact them because they stuttered? Moreover, 3 relapsed kids corresponds to an 86% recovery rate, and given the small sample you cannot even be sure you beat natural recovery. OK, you argue that natural recovery is much lower, but then please show it to me in a control group, or achieve 90% in a sample of 100 kids! "Without this long-term follow up study, we would not have this important new knowledge about the nature of stuttering and about the need to work to further improve Lidcombe outcomes." Again, this sounds really great, but your study was so poorly implemented, how can we trust your results? Of 134 kids referred to treatment and 47 completing it, you are left with 28 kids! So where is this important new knowledge? How can you have new knowledge from such a poor sample? The need to further improve Lidcombe? That sounds like a spin doctor. THE TRIAL IS NOT SET UP TO PROVE THAT LIDCOMBE IS EFFECTIVE, so how can you say you will improve it? (7) "This tells us that: (1) For these children the initial improvement in stuttering was apparently due to the treatment, not natural recovery." NO: AGAIN, THE TRIAL DOES NOT EXCLUDE PLACEBO OR NON-LIDCOMBE EFFECTS. Moreover, you could even argue that those who would have recovered anyway just recovered faster within the 9 months, because they had the inherent ability in the first place. And we know from adult therapy that nearly everything works for some time, not to speak of getting used to the clinic environment. To summarise, I am just fed up with sloppy pseudo-scientific replies that 99% of clinicians and the stuttering community swallow happily, because no-one actually sits down and looks at the trial carefully. Whoever did would find that it is a can of worms. But let me conclude by saying that at least you try to do evidence-based research. The fact that I can criticise your research is progress in itself, for I cannot criticise other approaches, because they do not do any outcome research at all.
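To make the randomisation argument in points (1) and (4) concrete, here is a minimal simulation sketch. The natural recovery probability of 0.75 is an assumption for illustration, not a figure from the trial; the point is only that with 47 children, the two arms can easily differ in the number of natural recoverers before any treatment is given:

```python
import numpy as np

# Minimal sketch: how unbalanced can two randomised arms of 47 children be in the
# number of children who would have recovered naturally anyway? The recovery
# probability below is an assumption for illustration only.
rng = np.random.default_rng(42)

n_children = 47
p_natural_recovery = 0.75
n_randomisations = 10_000

imbalances = []
for _ in range(n_randomisations):
    recoverer = rng.random(n_children) < p_natural_recovery   # who would recover anyway
    arm = rng.permutation(n_children) < n_children // 2       # random 23/24 split
    imbalances.append(abs(recoverer[arm].mean() - recoverer[~arm].mean()))

imbalances = np.array(imbalances)
# Fraction of randomisations where the arms differ by 15+ percentage points in
# natural recoverers, i.e. a built-in head start for one arm:
print((imbalances >= 0.15).mean())
```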
Monday, October 20, 2008
Hollins program
StutterTalk.com did an interview with Webster from the Hollins Institute, a very large private clinic for the treatment of stuttering (they even own the domain name stuttering.org!). I have never looked at them very closely. I will listen to the interview and report back, but make up your own mind here.
Self-report on Abilify
A reader has sent me his self-report on his first four weeks on Abilify. I hear Abilify mentioned quite often. For example, Ludo Max told me that they had some very positive experiences with Abilify, but I believe they never got the money for more research. Again, we need to be careful about its real efficacy. Here is the 4-week report:
It has been 4 weeks now since I started taking Abilify, and so far so good. I am 28 years old and have stuttered all my life. This is the first time I am taking medication for my stutter. I started initially on 2 mg and increased to 5 mg after 2 weeks. All secondaries like eye blinking and face contortions have gone, and I feel a lot more in control of my speech. For the first time in my life, I have started to make eye contact while talking. I have gone from a severe stutterer just one month back to a mild one now.
I have started to talk a lot more at work and with my friends and family. I am not experiencing that much rush of blood and anxiety before talking.
Abilify is not a cure; I still have blocks and I still stutter. But the frequency and duration of the blocks have reduced substantially, and I seem to apply easy onset techniques to get out of blocks more easily and more often than before.
I haven't had any adverse side effects so far. I haven't gained any weight, though I have also been watching my diet and working out, and I haven't experienced dizziness or restlessness. I hope that the positive effects of Abilify on my speech don't wear off after some time.
Thursday, October 16, 2008
The tell-tale signs of flawed research
Back to John Ioannidis's work (see my post) and his tell-tale signs of flawed research. Let's see how much of this is true in stuttering research.
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
Absolutely: most samples in stuttering research have fewer than 30 participants. He recommends thousands!
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Most studies do not report the effect size but only look for statistically significant differences. If they did report it, they would find small effect sizes.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Yes, most studies look at many different variables, making it more likely that some correlate by chance (a small simulation of this is sketched below, after Corollary 6).
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
Stuttering is very difficult to quantify, unlike weight, for example. It is a moving target, because the fluency of people who stutter can fluctuate dramatically. Compare this to a weight measurement, where the difference between morning and evening is probably just a kilogram or so.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
There are great financial incentives for stuttering medication: a potential market worth hundreds of millions of dollars. There are also financial incentives in AAF (altered auditory feedback) devices, for example the SpeakEasy devices; they push very hard on spinning the evidence, just have a look at their website. Regarding conventional treatments including Lidcombe, there is some money to be made, but the sums involved are peanuts in comparison. It is more a matter of justifying their existence as researchers and clinicians (giving their life meaning and purpose) than of pure financial gain.
There are certainly prejudices. This is especially true for clinicians who test their own treatment or who try to validate their long-held beliefs. Compare this to a geneticist who studies the genetics of stuttering: he has no prejudice about what he wants to confirm, because he has no hypothesis.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
The two hot areas I see are Lidcombe treatment and emotionality/sensitivity studies. This is very different from the hot areas of brain imaging or genetics, where it is hot to do the research but no-one knows in advance what should be found!
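As an illustration of Corollary 3, here is a minimal simulation sketch with purely random, made-up data: with 15 subjects and 20 unrelated candidate variables, spurious "significant" correlations appear regularly.

```python
import numpy as np
from scipy import stats

# Minimal sketch: small sample, many tested variables, purely random data.
rng = np.random.default_rng(1)

n_subjects = 15       # typical small stuttering study
n_variables = 20      # e.g. temperament scores, handedness, age, severity scales...

outcome = rng.normal(size=n_subjects)            # random "treatment outcome"
false_hits = 0
for _ in range(n_variables):
    predictor = rng.normal(size=n_subjects)      # truly unrelated variable
    r, p = stats.pearsonr(predictor, outcome)
    if p < 0.05:
        false_hits += 1

print(f"{false_hits} of {n_variables} unrelated variables 'significant' at p < 0.05")
# On average about 1 in 20, so a single study easily yields a spurious finding.
```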
Sunday, October 12, 2008
If you want to do a post-doc in stuttering
Ludo Max is looking for a post-doc: check here:
The Laboratory for Speech Physiology and Motor Control in the Department of Communication Sciences at the University of Connecticut (Project P.I.: Ludo Max, Ph.D.) is seeking applications for a postdoctoral position to study various aspects of the neural systems underlying sensorimotor control of speech movements in individuals who stutter. This NIH-funded project involves both psychophysical and neuroimaging (fMRI) experiments, and the selected candidate will have opportunities to contribute to both lines of work. Facilities in the lab include, among other things, electromagnetic motion tracking for speech articulatory movements as well as for upper limb movements, real-time digital signal processors for auditory perturbations of speech and a Phantom 1.0 robot for mechanical perturbations of the jaw, tendon/muscle vibration, EEG/EP systems, and a virtual display environment for arm motor learning studies.
Candidates with a Ph.D. degree in cognitive/behavioral neuroscience, motor control, biomedical engineering, speech and hearing science, experimental psychology, and related fields are encouraged to apply. Good programming skills (Matlab and C++) are preferred. Candidates should be highly motivated and have an interest in publishing research in the area of speech motor control and stuttering.
I am sceptical that he will find someone who can code well in Matlab and C and at the same time knows something about stuttering. On the other hand, if the post-doc does not know anything about stuttering, he or she will be less biased, and Ludo has the expertise anyway.
Another flawed Lidcombe study
And yet another flawed study on Lidcombe. This is especially disappointing because the group is independent of the Australian group around Mark Onslow. The co-author is Barry Guitar, university professor and author of a well-known (and in general well-written) textbook on stuttering called Stuttering: An Integrated Approach to Its Nature and Treatment. However, his research and his presentations on temperament and stuttering look suspicious to me. He might be a good clinician but, unfortunately, not a very good scientific mind. I would guess the first author is an enthusiastic and bright graduate student who has effectively wasted her or his time on research of little relevance.
Am J Speech Lang Pathol. 2008 Oct 9.
Long-Term Outcome of the Lidcombe Program for Early Stuttering Intervention.
University of Vermont. PURPOSE: To report long-term outcomes of the first 15 preschool children treated with the Lidcombe Program by speech-language pathologists (SLPs) who were inexperienced with the program and independent of the program developers. Research questions were: Would the treatment have a similar outcome with inexperienced SLPs compared to outcomes when implemented by the developers? Is treatment duration associated with pre-treatment measures? Is long-term treatment outcome affected by variables associated with natural recovery? METHOD: Fifteen preschool children who completed the Lidcombe Program were assessed prior to treatment and at least 12 months following treatment. Pre-treatment data were obtained from archived files; follow-up data were obtained from interviews and recordings completed after the study had been planned. RESULTS: Measures of stuttering indicated significant changes from pre-treatment to follow-up in percent syllables stuttered (%SS) and Stuttering Severity Instrument-3 (SSI-3) scores. Pre-treatment severity was significantly correlated with treatment time. Handedness was the only client characteristic that appeared to be related to long-term treatment outcome. CONCLUSIONS: The treatment produced significant long-term changes in children's speech, even when administered by SLPs newly trained in the Lidcombe Program. Treatment results appear to be influenced by pre-treatment stuttering severity.
Without having read the article itself, I can see several flaws:
1) A sample size of 15 is much too small. Either you do at least 100 or you do not do it at all! The required number is especially high because the natural recovery rate increases the statistical fluctuations.
2) They have not controlled for the natural recovery rate. This makes the results appear much more positive than they really are, because some kids will recover naturally within one year anyway, so the average stuttering severity will automatically go down. Here is a simple example: I have 20 kids, and let's assume 10 of them would have recovered within one year without treatment. Before, they all stutter at 5%. After one year without any treatment, only 10 still stutter, and the average stuttering rate has halved to 2.5% (10 kids at 0% and 10 kids at 5%). A small numerical sketch of this effect follows below.
3) They try to find correlations in data with a very small sample size. Their finding on handedness is most likely a fluke.
But interestingly, the abstract seems to suggest that some kids are still dysfluent (I would have to read the article, which costs money to access). The existence of dysfluent kids is not affected by statistics, so we can say that Lidcombe is not the cure it was claimed to be. In fact, I have heard from many other therapists that some kids do not become fluent.
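Here is the natural-recovery arithmetic from point 2 above as a few lines of code; the numbers are the same hypothetical ones used in the example, not data from the study:

```python
# Hypothetical numbers from the example above: average severity halves even
# though nobody received any treatment at all.
n_kids = 20
baseline = [5.0] * n_kids                      # all kids start at 5% stuttered syllables

n_natural_recoverers = 10                      # assume half would recover anyway
after_one_year = [0.0] * n_natural_recoverers + [5.0] * (n_kids - n_natural_recoverers)

print(sum(baseline) / n_kids)                  # 5.0% before
print(sum(after_one_year) / n_kids)            # 2.5% after, with zero treatment effect
```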
Thursday, October 09, 2008
Why Most Published Research Findings Are False
In this article, John Ioannidis says exactly what I have always believed: most published medical research is wrong. Stuttering is no exception to the rule.
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
He should write a second article on what happens when others point out issues in published research. I can tell you what happens when I point out issues: nothing, absolutely nothing. The claim keeps being spread, and the only antidote is to shout as loudly as possible that it is false. And of course I have to accept that people start to think of me as an eccentric outsider who has no clue, really.
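For readers who want the mechanics behind the quoted abstract, here is a minimal sketch of Ioannidis's positive predictive value formula, PPV = (1 − β)R / (R − βR + α); the example values of R and power are assumptions for illustration:

```python
# Post-study probability that a claimed finding is true (Ioannidis 2005).
# R is the prior odds that a tested relationship is real; the example values
# below are assumptions for illustration only.

def positive_predictive_value(R: float, power: float, alpha: float = 0.05) -> float:
    """PPV = (1 - beta) * R / (R - beta * R + alpha), with beta = 1 - power."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Exploratory field, many candidate relationships, small underpowered studies:
print(positive_predictive_value(R=0.05, power=0.20))   # about 0.17: most claims false
# Well-powered confirmatory study of a plausible relationship:
print(positive_predictive_value(R=0.5, power=0.80))    # about 0.89
```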
Monday, October 06, 2008
Correction on Indevus share price jump
I just realised that my post probably does not tell the whole story about Indevus' share price jump, because on the same day they released information on another compound, which this article by Anuradha Ramanathan claims provoked the jump:
Shares of Indevus Pharmaceuticals more than doubled after the company reached a deal with U.S. health regulators to use its existing data for an early re-application seeking marketing approval for its testosterone replacement drug.
The agreement with the U.S. Food and Drug Administration removes the need for more studies and Indevus now plans to apply again for marketing approval in the first quarter of 2009, and launch the drug in the fourth quarter, the company said in a statement.
Saturday, October 04, 2008
Half brain
I just saw a documentary on people who have only one half of their cortex (the high-level brain). When they spoke in front of the camera, they seemed to have marked disfluencies. They did not stutter, but they looked a bit like "recovered" stutterers who control themselves so as not to stutter, yet you perceive their mini-blocks: instead of losing control, they just move on to the next word after a slight hesitation. I am not sure whether there is a connection to stuttering.
Friday, October 03, 2008
Indevus shares jump by 50%
The shares of Indevus, the licence owner of Pagoclone, jumped by 50% in light of the announcement of the new partner Teva and the Phase IIb trial. The graph is from Yahoo! Finance. It is a good example of how you need to know both the timing and the impact of information to make money. I knew that the stock would jump upon such an announcement, but I did not know the timing or whether it would happen at all. Actually, I am an idiot. Come to think of it, I kind of knew, because in June I was told by reliable sources that there was a meeting for a new trial and that they would go ahead. I should have bought the stock and made a killing. Oh well...
Think about it: 50% higher means that the company is worth one and a half times as much from one day to the next. Why? Because investors believe that the future cash flows of the company are 50% higher (neglecting discounting)! And they are higher because 2-3 people at Teva think it is a risk worth taking. How much do they know about stuttering? Probably very little. And I am sure they are not aware of the many methodological pitfalls. But the market trusts their judgement for the moment, and so the value of Indevus goes up by 50%. I think the chances of clear success are moderate, but I also have not seen the individual responsiveness of each patient.
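To make the valuation logic concrete, here is a minimal sketch with purely hypothetical numbers; the share count, cash flow and approval probabilities below are assumptions for illustration and not Indevus, Teva or Pagoclone figures. The idea is simply that the market value is roughly the probability-weighted future cash flow, so news that nudges the perceived probability of approval moves the whole market capitalisation:

```python
# Purely hypothetical numbers to illustrate probability-weighted valuation;
# none of these figures are real Indevus, Teva or Pagoclone data.
shares_outstanding = 75_000_000           # assumed
cashflow_if_approved = 400e6              # assumed future cash flow, discounting ignored

def market_cap(prob_approval: float) -> float:
    return prob_approval * cashflow_if_approved

before = market_cap(0.10)                 # market's implied belief before the news
after = market_cap(0.15)                  # slightly higher belief once a partner signs on

print(before / shares_outstanding)        # implied share price before
print(after / shares_outstanding)         # implied share price after
print((after - before) / before)          # a 50% jump from a 5-point change in belief
```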
Now everyone at Indevus who gets paid in shares as a bonus is worth 50% more! Everyone will do everything to get Pagoclone approved; it could make them millionaires or break them! Just imagine the other pharmaceutical companies looking at Indevus. Surely they will brainstorm about how to jump on the bandwagon. If any of you work for such a company, you can hire me as a consultant! ;-)
Here is an Associated Press article, and here an extract:
Shares of Indevus Pharmaceuticals Inc. more than doubled Friday after the company said it is collaborating with Teva Pharmaceutical Industries Ltd. to develop a treatment for stuttering and could move forward with its delayed hormonal disorder drug.
Indevus' stock surged $1.78, more than doubling to close at $3.51. The Lexington, Mass.-based company's stock has traded between $1.19 and $8.22 over the past 52 weeks. Shares of Israel-based Teva, which makes both generic and branded drugs, rose 31 cents to $46.03.
Thursday, October 02, 2008
100'000 visitors!!!!!!!!!!
I completely missed the 100'000-visitor celebration! The average number of visitors per day is now 230, and I have roughly 1000 unique visitors per week from all around the world; most are from the US.
Lidcombe treatment of choice?
Susan Block replied to my comments on her statement that "Lidcombe should be the treatment of choice" (see her ISAD article here):
I think we have discussed some of your comments before. It is the case, in my opinion, that it has the best evidence to date for preschool children. The Franken et al study was almost impossible to replicate as their comparative treatment was not well defined. I think it is the case in the Jones et al study that the children in the Lidcombe treatment group made so much positive change that they could not justify maintaining children in a control group.
And I replied:
It is reasonable for you to say "in my opinion it has the best evidence to date for preschool children". However: first, you actually wrote "should be the treatment of choice", implying a moral imperative, i.e. that it would be irresponsible for therapists not to use Lidcombe. Is it? Second, I repeat again that the trial described in Jones et al. 2005 has a follow-up study, Jones et al. 2008, which you do not cite but in my opinion should. As I am sure you teach your students, a treatment should be evaluated on long-term outcome data and not on short-term success, and Jones et al. 2008 paints a much more sober picture (apart from the methodological issues). Have you read it? Regarding your comment "The Franken et al study was almost impossible to replicate as their comparative treatment was not well defined": whether the trial is replicable or not is irrelevant to whether its result is true. In fact, assuming the comparative treatment was completely ill-defined and chaotic, it still managed to do as well as Lidcombe; this actually supports the alternative view that any treatment will be successful. In any case, a new trial with a larger sample and extra care in defining the comparative treatment is under way. Regarding replicability, the same is true for the Lidcombe trial, because, as you yourself say, "it is the case in the Jones et al study that the children in the Lidcombe treatment group made so much positive change that they could not justify maintaining children in a control group." We can never repeat it again with a control group! Would you therefore argue that it is not valid?
Wednesday, October 01, 2008
ISAD 2008
Check out Judith Kuster's online conference, with many discussions of good quality (and some of no quality) on stuttering, research and treatment.
I have already posted a comment on Susan Block's article on what clinicians should know, to stop the spreading of the myth that Lidcombe should be the treatment of choice:
You write that "Current research indicates that the Lidcombe Program should be the treatment of choice for young children who stutter (Jones et al, 2005; Lincoln & Onslow, 1997)." This is misleading. 1) It implies that Lidcombe is better than other early interventions. However, Lidcombe has never been tested against another form of early intervention in a randomised control trial. It could well be that ANY intervention has a similar (or no) effect. There has only been one pilot trial, by Franken in the Netherlands, and it found no difference between Lidcombe and demands and capacities treatment. She is currently conducting a large-scale study comparing Lidcombe and DC. 2) You only cite the Jones 2005 article, but there is a follow-up paper from this year with long-term outcomes. Three children have relapsed, and many kids were not contactable any more; the recovery rate in the sample is close to the natural recovery rate, not to speak of the methodological issues. 3) The Jones 2005 study is questionable and has statistical and methodological flaws: wrong statistics, no long-term control group, and more. I think clinicians should know these facts. Evidence-based practice is important, but it should be based on WELL-ESTABLISHED evidence.