STAT3012 Lecture Notes - Lecture 16: Analysis Of Variance, Studentized Range, Multiple Comparisons Problem

Lecture 16 – Multiple comparisons
New concepts
Multiple comparisons
Tukey, Scheffé and Bonferroni corrections
Data snooping
New topic – Multiple comparisons
Theory – Controlling the α error
A confidence interval (CI) has random endpoints that change as we go from data set to data set.
If we are making a large number of comparisons, we often want to control the overall error rate α, i.e. keep an overall confidence level of 1 − α.
To do this, we need to make the individual t-intervals a little wider.
We will consider three methods: the Tukey, Scheffé, and Bonferroni corrections.
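These corrections widen the intervals by different amounts. As a rough, hypothetical illustration (the group count, replication, and α below are made-up values, not from the lecture), the critical values that replace the usual t quantile can be compared in R:

# Hypothetical illustration: critical values that replace the usual t quantile
# when widening intervals, for t = 4 groups with n = 5 observations each.
t_groups <- 4
n_per    <- 5
df_err   <- t_groups * (n_per - 1)   # error df = N - t = 16
alpha    <- 0.05
m        <- choose(t_groups, 2)      # number of pairwise comparisons (6)

qt(1 - alpha / 2, df_err)                                    # unadjusted t interval
qt(1 - alpha / (2 * m), df_err)                              # Bonferroni: alpha split over m intervals
qtukey(1 - alpha, nmeans = t_groups, df = df_err) / sqrt(2)  # Tukey: studentized range / sqrt(2)
sqrt((t_groups - 1) * qf(1 - alpha, t_groups - 1, df_err))   # Scheffé: covers all contrasts

For pairwise differences the ordering is typically unadjusted < Tukey < Scheffé, which is why Tukey is preferred when only pairwise comparisons are of interest.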
Example – Three CIs
Assume you calculate three 95% CIs.
Then, for each one, in 5% of all cases the (random) interval will not cover the corresponding parameter (or contrast).
The chance that at least one interval does not cover the corresponding contrast (parameter) may be as high as 3 × 0.05 = 0.15. Why? By the union bound, the probability of at least one non-coverage is at most the sum of the three individual non-coverage probabilities, and this bound can be attained when the non-coverage events are disjoint.
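A minimal R check of the two extremes (the union bound, and the exact value if the three intervals happened to be independent; the independence assumption is only for illustration):

alpha <- 0.05
k <- 3
k * alpha            # union (Bonferroni) bound: 0.15, holds under any dependence
1 - (1 - alpha)^k    # family-wise non-coverage if the 3 CIs were independent: ~0.143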
Theory – Data snooping
Data snooping means the hypothesis depends on the sample data, i.e. it is formulated only after having inspected results from statistical tests, plots, tables, etc.
It can be tempting to test only the difference between the largest and smallest treatment effects, i.e. to use
w := \max_{1 \le i \le t} \bar{Y}_{i\cdot} - \min_{1 \le i \le t} \bar{Y}_{i\cdot},
the range of the treatment (group) means.
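A hedged R sketch of this statistic follows (the data, the variable names y and x, and the group labels M1–M4 are hypothetical, chosen only for illustration). Tukey's HSD is built on the studentized range of the group means, so it keeps the overall α level even when the extreme pair is singled out after looking at the data:

set.seed(1)
# Hypothetical balanced one-way data: 4 treatment groups, 5 replicates each
y <- rnorm(20, mean = rep(c(10, 11, 12, 13), each = 5), sd = 1)
x <- factor(rep(paste0("M", 1:4), each = 5))

group_means <- tapply(y, x, mean)          # treatment means Y-bar_i.
w <- max(group_means) - min(group_means)   # data-snooped range statistic
w

# Tukey's HSD uses the studentized range distribution, so the comparison of
# the extreme pair still controls the overall alpha.
TukeyHSD(aov(y ~ x))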