Beyond the familiar problems we've discussed - such as publication bias, the tendency to publish results that show something, anything, rather than nothing - we have the confounding effect of bad data.
Take a look at this NY Times article. 20 percent of the data is just wrong?!
4 comments:
It may not be that the data is wrong; it may be our expectations based on our understanding of the data.
We don't really understand what the data says, and maybe the problem is that we don't understand, or don't want to know, that we don't understand.
Two Lessons from Fractals and Chaos
I have worked in System Verification and Integration for a large telecom company. Based on our observations and judgement we say OK or NOK to the test object.
Even when we knew what we were testing and wrote/knew the specification, people tended to mix observation and judgement.
If nothing else works, people redo the experiment hoping they will get an observation that allows a judgement they can accept.
In telecom we expect results around 99.99%, and anything less means someone is going to lose money.
At 80-90% we have a fault in the product or in the measurement/data handling.
Results around 50% mean we are receiving/measuring pure noise.
Below 50% doesn't exist.
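The rough rubric in this comment can be sketched as a small classifier. This is only an illustration of the thresholds described above; the function name and the exact cutoffs between the bands (which the comment leaves unspecified between 90% and 99.99%) are assumptions, not anything from an actual telecom test suite.

```python
def judge_result(pass_rate):
    """Classify a test pass rate (in percent) using the rough bands
    from the comment above. Hypothetical helper, illustrative only."""
    if pass_rate >= 99.99:
        return "OK"           # the level telecom customers expect
    if pass_rate >= 80:
        return "fault"        # fault in the product or in measurement/data handling
    if pass_rate >= 50:
        return "noise"        # around 50%: receiving/measuring pure noise
    return "implausible"      # "below 50% doesn't exist" - suspect the setup itself
```

The last branch captures the commenter's point: a result below chance level says more about the measurement than about the test object.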
How come you never mention how the results of twin studies (not just in stuttering) get distorted? Is it because the distorted results agree with your particular world view?
@Anonym: Can you be more specific please?
That's incorrect.
Tom has acknowledged sources of error in twin studies, and now he is acknowledging sources of error in genetics studies. He has also acknowledged that results from both kinds of studies fit his world view, though he had not mentioned sources of error in genetics studies until now.