In physics, we insist on including the measurement error in all our calculations. For example, let's drop a ball from five meters above the ground and measure the time it takes to hit the ground. Our time measurement will include an error, because we stop the clock ourselves, so the error might be of the order of 1/10 of a second. Let me repeat the measurement a few times in my head: 0.99s, 1.00s, 1.02s, 0.99s, and so on. We could also measure the time with a laser, and the error might then be only of the order of 1/100 of a second. But there is always a measurement error. If we calculate the average speed of the ball, i.e. 5 meters divided by the time needed, the result will also carry an error, because the time (and its error) enters the computation. This procedure is very, very important.
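To make this concrete, here is a minimal sketch of the error propagation for the falling-ball example, assuming the 5 m drop height is exact and only the timing carries an error (the 0.1 s figure is the hand-stopped-clock estimate from above; the specific numbers are just for illustration):

```python
# Error propagation for v = d / t, assuming d is exact.
# For a simple quotient, the relative error in v equals the relative error in t,
# so delta_v = v * (delta_t / t).

d = 5.0        # drop height in meters (taken as exact)
t = 1.00       # measured fall time in seconds
delta_t = 0.1  # estimated timing error (hand-stopped clock)

v = d / t                    # average speed
delta_v = v * (delta_t / t)  # propagated error in the speed

print(f"average speed = {v:.2f} +/- {delta_v:.2f} m/s")
# -> average speed = 5.00 +/- 0.50 m/s
```

The point is that the result is never just "5.00 m/s"; it is "5.00 plus or minus 0.50 m/s", and that error range travels with the number into every later calculation.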
Is this done in stuttering research? NO. Let me repeat it: NO RESEARCH HAS EVER INCLUDED MEASUREMENT ERROR! What researchers should do is the following: for each measurement, estimate the measurement error, and then carry that error range through all subsequent calculations, including statistical tests. For example, if I measure the percentage of stuttered syllables a few times, I will get different values, e.g. 15%, 10%, 20%, and I must conclude that the value is around 15% with an error range of about 5%. I then need to carry this error range with me when, for example, I calculate statistical significance.
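Here is a sketch of what estimating that error range could look like, using the example values from the paragraph above (hypothetical numbers, not data from any study):

```python
# Estimate a value and its error range from repeated measurements.
import statistics

measurements = [15.0, 10.0, 20.0]  # repeated %-stuttered-syllables measurements

mean = statistics.mean(measurements)     # best estimate of the true value
spread = statistics.stdev(measurements)  # sample standard deviation as the error estimate

print(f"stuttered syllables = {mean:.0f}% +/- {spread:.0f}%")
# -> stuttered syllables = 15% +/- 5%
# This +/- 5% then has to be propagated into any later statistical test,
# instead of treating 15% as if it were an exact number.
```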
I discussed this issue once with Per, I think, and he said it is simply not done, that this is not physics, and that it would be too much work. But actually, precisely because this is not physics, where measurements are much cleaner, more quantifiable, and the errors small, the argument for measurement error analysis is even stronger! I suspect very few people are even aware of measurement error analysis.
If they had included it, I am convinced (and I would bet my life on it) that ALL MARGINALLY SIGNIFICANT RESULTS WOULD GO AWAY, swamped by the measurement error. If I am being cynical, I would say that only those researchers who do not do measurement error analysis are able to publish, and those who did it perished, because most of the time they had nothing to report!
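A toy illustration of the "swamping" I mean, with invented numbers rather than real stuttering data: suppose two conditions are measured at 15% and 12% stuttered syllables, and each value carries the roughly 5% error range estimated earlier.

```python
# How a seemingly clear difference can be swamped by measurement error.
import math

value_a, err_a = 15.0, 5.0   # condition A: measured value and its error range (assumed)
value_b, err_b = 12.0, 5.0   # condition B: measured value and its error range (assumed)

diff = value_a - value_b
# Independent errors on a difference combine in quadrature.
err_diff = math.sqrt(err_a**2 + err_b**2)

print(f"difference = {diff:.1f}% +/- {err_diff:.1f}%")
# -> difference = 3.0% +/- 7.1%
# The error range is larger than the difference itself, so the data are
# perfectly consistent with no difference at all.
```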
2 comments:
You might like to look at this from the Freakonomics blog:
http://freakonomics.blogs.nytimes.com/2008/02/05/whats-the-probability-that-romney-is-leading-in-california-a-guest-post/
It's about error in political polls, but same problem.
Dave - Thanks for that link. I always get annoyed when I read news articles saying two candidates are in a "statistical dead-heat", implying that they are tied or that the difference is trivial. The fact that a polling result is within the margin of error does not at all suggest that a race is dead even.