SPED102 Lecture Notes - Lecture 7: Comorbidity, Cold Fusion, False Positives And False Negatives

SPED102 Lecture
Week 7
VII: Use and Misuse of Statistics
Overview
Multiple comparisons fallacy and the sharpshooter fallacy: research applications
Other strategies to bias your research
Some tricks for misrepresenting data in your presentation
Biasing results without (outright) fraud
For those who are amoral and unethical, the question arises:
How can we do research in such a way as to maximise a desired, predetermined
outcome without overt fraud (i.e. fabricating data)?
Revision: the (not so) magic p value
The p value reported in research gives an approximation of the probability that you have a
false positive result (compared with a true negative)
More damn statistics!
This is based on the assumption that you do only ONE comparison (test)
If you do multiple comparisons, the chance of getting at least one false positive result rises
above the allowable 5%
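The inflation above the nominal 5% follows directly from the probability of avoiding a false positive on every test. A quick sketch (the function name is mine, not from the lecture):

```python
# Family-wise error rate: the probability of at least one false positive
# across k independent tests, each run at significance level alpha.
# P(no false positive in any test) = (1 - alpha)^k, so
# P(at least one false positive)  = 1 - (1 - alpha)^k.
def familywise_error_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:>2} tests: {familywise_error_rate(k):.1%} chance of a false positive")
# 1 test stays at 5.0%; 5 tests reach about 22.6%; 20 tests reach about 64.2%
```

So with 20 comparisons, the odds are nearly two in three that something "significant" turns up even when nothing real is going on.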
The multiple comparisons fallacy
Many sneaky (or incompetent) ways to do lots of comparisons (and hide them)
Texas Sharpshooter Fallacy
Choosing your outcomes after the experiment
I have predicted all the lotto numbers with 100% accuracy for 6 consecutive weeks… what
are the chances?
Don't believe me?
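The "trick" is choosing which prediction to show you after the draw. A toy version of the sharpshooter move, using coin flips instead of lotto numbers so that matches actually occur (an illustrative sketch, not from the lecture):

```python
import random

random.seed(7)

# Texas sharpshooter in miniature: make many random "predictions" of
# 6 coin flips, then AFTER seeing the results report only the
# predictions that happened to match -- and claim 100% accuracy.
results = [random.choice("HT") for _ in range(6)]
predictions = ["".join(random.choice("HT") for _ in range(6))
               for _ in range(1000)]

# Post-hoc selection: keep only the lucky predictions.
winners = [p for p in predictions if list(p) == results]
print(f"{len(winners)} of {len(predictions)} random predictions matched all 6 flips")
```

Each prediction has a 1-in-64 chance of matching, so out of 1000 random guesses roughly 15 will be "perfect" by luck alone. Show only those, hide the rest, and the accuracy looks miraculous.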
Multiple Comparisons and Sharpshooting Applied to Research
Multiple outcomes
Switching outcomes after the research
Subgroup analysis
Open-ended criteria
Arbitrary termination of studies
Selective publication
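One of the strategies above, arbitrary termination of studies (optional stopping), can be simulated directly: peek at the p value after every new observation and stop the moment it dips below .05. A sketch under simplifying assumptions (null is true, known variance, simple z test; all names are mine):

```python
import math
import random

random.seed(3)

def p_value(mean, n):
    """Two-sided z test of the sample mean against mu = 0, sigma = 1 known."""
    z = abs(mean) * math.sqrt(n)
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def stops_early(max_n=100, alpha=0.05):
    """Peek after every observation (from n = 10 on); stop as soon as p < alpha."""
    total = 0.0
    for n in range(1, max_n + 1):
        total += random.gauss(0, 1)   # null is true: there is NO real effect
        if n >= 10 and p_value(total / n, n) < alpha:
            return True               # "significant" -- terminate the study here
    return False

trials = 1000
false_positives = sum(stops_early() for _ in range(trials))
print(f"Optional stopping: {false_positives / trials:.0%} false positive rate "
      f"(nominal 5%)")
```

Because the researcher gets many chances to catch a random fluctuation, the realised false positive rate lands well above the nominal 5%, even though each individual test was run "correctly".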
Multiple Outcomes
Have lots of outcomes (dependent variables)
Some of them are likely to be significant by chance
Humphries et al (1992)
Compared Sensory Integration Therapy to
Comparison intervention (perceptual-motor program)
No treatment
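The "significant by chance" point can be simulated: compare two groups that are genuinely identical on 20 different outcome measures and count how many reach p < .05 anyway. This uses made-up random data and a simple permutation test, not the Humphries et al. data:

```python
import random

random.seed(1)

def two_sample_p(a, b, n_perm=2000):
    """Approximate two-sided permutation-test p value for a difference in means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

# Two groups drawn from the SAME distribution: no true difference exists,
# yet across 20 outcomes roughly one is expected to hit p < .05 by chance.
n_outcomes = 20
significant = 0
for _ in range(n_outcomes):
    group_a = [random.gauss(0, 1) for _ in range(15)]
    group_b = [random.gauss(0, 1) for _ in range(15)]
    if two_sample_p(group_a, group_b) < 0.05:
        significant += 1
print(f"{significant} of {n_outcomes} null outcomes reached p < .05")
```

A study that measures 20 outcomes and then headlines whichever one crossed the threshold is doing exactly this.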
How to Handle Multiple Outcomes
Nominate one primary outcome in advance; all other outcomes are secondary