Monday, April 21, 2008

Solutions to Logical Fallacy IV-VI



Here are the last three fallacies.

IV: Statistical Fallacy

A researcher says: "We have tested 100 kids who stutter on 20 variables and found 2 that were significant (p<5%). Therefore, I conclude that onset of stuttering is correlated to these two variables."

Explanation:
The researcher has committed the mistake of multiple testing (looking at many variables and treating any significant result as real): the more variables you look at, the more likely it is that one of them correlates by chance! And if he uses p<5% as a threshold, one in 20 variables will [Correction: is likely to] show a correlation by chance. Moreover, he has only looked at the p-value, but effect size is absolutely crucial. The p-value only tells you how unlikely the observed difference would be if the two samples came from the same underlying distribution, and any outlier or systematic error can easily lead to a low p-value. Therefore, he also needs to look at the size of the difference.
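To make this concrete, here is a minimal simulation sketch in Python (my own illustration; the 100 "kids", the 20 variables and the purely random data are assumptions made up for this example, not real data). It generates 20 variables that have no true relation to the group label and counts how often at least one still comes out "significant" at p<5%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_kids, n_vars, n_runs, alpha = 100, 20, 5000, 0.05

runs_with_false_positive = 0
for _ in range(n_runs):
    group = rng.integers(0, 2, n_kids)          # "stutters" vs. "control", assigned at random
    data = rng.normal(size=(n_kids, n_vars))    # 20 variables with NO real effect
    p_values = [stats.ttest_ind(data[group == 0, j], data[group == 1, j]).pvalue
                for j in range(n_vars)]
    if min(p_values) < alpha:                   # at least one spurious "significant" result
        runs_with_false_positive += 1

print(runs_with_false_positive / n_runs)        # roughly 1 - 0.95**20, i.e. about 0.64

In other words, in roughly two out of three such studies at least one variable would look "significant" purely by chance.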

V: Treatment-Success-Factor-is-Cause Fallacy

A person who stutters says: "After I came to terms with my childhood trauma and undertook psychological treatment, my stuttering was vastly reduced. Therefore, stuttering must be caused by a traumatic experience in childhood."

It is true that the psychological treatment has helped him greatly. However, the cause might still be non-psychological: it could have triggered psychological issues that in turn affected his stuttering severity. If you have a car accident and are in shock, psychological therapy might help you overcome the shock, but that does not mean the shock caused the accident.

VI: Timing Fallacy

A researcher says: "We have recently launched a large-scale study following kids around the onset of stuttering, because we want to find out what causes stuttering."

The researcher assumes that the cause of stuttering lies close in time to the onset of stuttering, but that may well not be the case. For example, genetics clearly plays a role in many cases of stuttering, and the genes have been there from the very beginning. Or a neurological incident, in the womb or later, such as a viral infection or a head injury, could be the cause without lying anywhere close to the onset of stuttering.

8 comments:

Greg said...

I'm glad that you're raising awareness in these areas; it's clearly needed and long overdue. However, I would challenge the belief that "if he uses p<5% as a threshold, one in 20 variables will show a correlation by chance." To the very best of my knowledge, that belief just isn't accurate.

Tom Weidig said...

Hi Greg,

If I have p=5%, that means that I have a 5% chance of it being a random correlation. If I do the test on 20 variables, I have 20 times a 5% chance of getting a correlation by chance. And on average, I will have one of the 20 variables being correlated by chance?

Maybe my formulation is a bit vague, but that's the essence. The more variables you test, the more likely one is correlated by chance.

Greg said...

Common intuitive sense would support your position... but reality oftentimes doesn't work that way. Your position is a gross overstatement of what a p-value represents, and it's an over-extension of the concept.

If I pick up 20 widgets and use an alpha of .05, that does not mean (by any stretch of the imagination) that because I have 20 measured variables, I am bound to hit .05 or less at least once. A properly designed statistical analysis just won't do that (by chance). (Further, Type I and Type III errors may also depend on what type of analysis is being conducted.)

This is your site, so I won't continue on the point. I would simply suggest you read up on statistical interpretations. Keppel is a great place to start.

Tom Weidig said...

Hi Greg,

You can argue as long as you want on my site. Contrary to popular myth, I am receptive to arguments!

>> that does not mean (by any stretch of the imagination) that because I have 20 measured variables, I am bound to hit .05 or less at least once

I know that. I should have written "one in 20 variables is likely to show a correlation by chance" instead of "will show a correlation". In my reply, I said "on average it will show a correlation", which might also not be the best way of putting it.

What I really mean is that if I run a Monte Carlo simulation over all possible scenarios, i.e. where I randomly generate the 20 dependent variables over and over again and do t-tests against the independent variable, I will get a probability distribution of the percentage of scenarios that show at least one variable with a p-value below 5% by chance. The mean of this distribution is quite high; I don't know exactly how high, but more than 50% I would guess. I would have to run the simulations or do some calculations, probably using the permutation formula.

It is a real issue, as people use the Bonferroni correction to adjust for the effect I am describing. http://en.wikipedia.org/wiki/Bonferroni

Actually, reducing the significance level to 1% and looking at effect sizes will also go a long way.
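To put rough numbers on the Bonferroni correction (a back-of-the-envelope sketch assuming 20 independent tests; the figures are illustrative only):

alpha, n_tests = 0.05, 20
bonferroni_alpha = alpha / n_tests                 # each test must now reach p < 0.0025

# chance of at least one false positive across 20 independent null tests
uncorrected = 1 - (1 - alpha) ** n_tests           # about 0.64
corrected = 1 - (1 - bonferroni_alpha) ** n_tests  # about 0.049, i.e. back near 5%

print(uncorrected, corrected)

So the correction brings the chance of a spurious "significant" variable back down to roughly the 5% one naively expects.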

I am sure there are some further arguments, and some more subtle that I do not know of.

However, all I wanted to say is that THE MORE YOU LOOK FOR A CORRELATION, THE MORE LIKELY YOU ARE TO FIND ONE BY CHANCE. And this is true not only within one experiment but also across a whole field, because of publication bias.

Anonymous said...

Tom - you wrote: "If I have p=5%, that means that I have a 5% chance of it being a random correlation. If I do the test on 20 variables, I have 20 times a 5% chance of getting a correlation by chance. And on average, I will have one of the 20 variables being correlated by chance?"

You seem to be starting from the assumption that:

Prob(A or B) = Prob(A) + Prob(B).

I'm rusty on statistics, but I don't think that calculation is right. My understanding is that the calculation should be:

Prob(A or B)
= 1 - Prob( (not A) and (not B) )
= 1 - [1 - Prob(A)]*[1-Prob(B)]

(The first step follows from De Morgan's Law in propositional logic, as I recall; the second assumes A and B are independent.)

Your example has 20 variables P1...P20, each with probability 5%, and you seem to be implying that the probability that at least one of them is true is 20*(.05) = 1, i.e. certainty - which can't be right.

The proper calculation, I think, is:
Prob(P1 or ... or P20)
= 1 - [1-(.05)]*[1-(.05)]*...*[1-(.05)]
= 1 - .95^20 ≈ 1 - .36
= 64%.

Of course, I don't disagree with your underlying point: THE MORE YOU LOOK FOR A CORRELATION, THE MORE LIKELY YOU ARE TO FIND ONE BY CHANCE.

Tom Weidig said...

Hi Ora,

I agree with you. And as I said in my last reply, I should have put it differently. I know that 20 times a 5% probability is not 100%; what I wanted to write was "It is not 100%, but certainly quite high, though I have no time to think about how to compute it."

And you gave the correct way of computing it analytically. It is also possible to run a Monte Carlo simulation, which works for any problem, as I suggested.

Best wishes,
Tom

Greg said...

I guess my only reflection from all this is that it seems that you're throwing out the baby with the bathwater. (Sorry--that's an American idiom.) Just because some poor scientists can create poor statistical designs and execute "research" that is otherwise meaningless and uninterpretable does not mean that all "science" and "research" is that way.

Perhaps a better way of saying it is that it's difficult for me to assume your position, because it is so easily avoided with a simple understanding of statistics and the scientific method. To assume the absolute worst in all people looking into this antithetical evil paradox which is stuttering is a bit of an overgeneralization.

Tom Weidig said...

Dear Greg,

I think you have misunderstood my standpoint.

I made the list of fallacies to raise awareness. In my experience, the majority of therapists and non-full-time researchers commit the statistical fallacy time and time again. The sentences are just artificial examples to illustrate the fallacies and do not reflect my view of the whole research community.

Nowhere do I disagree that "Just because some poor scientists can create poor statistical designs and execute "research" that is otherwise meaningless and uninterpretable does not mean that all "science" and "research" is that way". Actually, I completely agree with your statement. However, I have great difficulty distinguishing between good and bad researchers. I can do it, but only by reading the articles carefully. I usually spot the bad ones. My favourite tell-tale sign is a 3D representation of 2D data! :-)

>> To assume the absolute worst in all people looking into this antithetical evil paradox which is stuttering is a bit of an overgeneralization.

I always look for the worst, not because I expect it all the time but because for me it is more important than anything else that there are no false positives. I would rather ignore 100 real positives than accept one false one. How is it possible that we have changed our views on stuttering many times? Because research was done sloppily, with many false positives, and theories were accepted without empirical validation.

I am not saying that they do it on purpose; they are just not aware of it. I am not interested in the people but in the line of argument.

Best wishes,
Tom