STA305H1 Lecture Notes - Lecture 7: Maximum Likelihood Estimation, The American Statistician, Likelihood Function


Document Summary

Given random variables $(X_1, \dots, X_n)$ with joint density or joint mass function $f(x_1, \dots, x_n; \theta)$, the maximum likelihood estimator (MLE) $\hat{\theta}$ is defined to be any estimator that maximizes the likelihood function $L(\theta) = f(x_1, \dots, x_n; \theta)$ over $\theta$ in the parameter space $\Theta$.

When the $X_i$ are independent with common density $f(x; \theta)$ and the true parameter value is $\theta_0$, maximizing $L(\theta)$ is equivalent to maximizing
$$\frac{1}{n} \sum_{i=1}^{n} \ln\!\left[ \frac{f(x_i; \theta)}{f(x_i; \theta_0)} \right],$$
which (by the weak law of large numbers, for large $n$) is an approximation of
$$K(\theta) = E_{\theta_0}\!\left[ \ln\!\left( \frac{f(X_1; \theta)}{f(X_1; \theta_0)} \right) \right],$$
where $K(\theta)$ is maximized at $\theta = \theta_0$. However, many estimators can be justified using a substitution principle argument, and so it is natural to wonder what is so special about maximum likelihood estimation. Maximum likelihood estimation is the original jackknife, in Tukey's sense of a widely applicable and dependable tool.
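As a minimal sketch of the definition above, the following Python snippet numerically maximizes a log-likelihood by grid search. The exponential model $f(x; \theta) = \theta e^{-\theta x}$ is an assumed example (not from the notes); its MLE has the known closed form $\hat{\theta} = 1/\bar{x}$, which the grid search should approximately recover.

```python
import math
import random

def log_likelihood(theta, xs):
    # ln L(theta) = sum_i ln f(x_i; theta) = n*ln(theta) - theta * sum(x_i)
    # for the assumed exponential model f(x; theta) = theta * exp(-theta * x)
    n = len(xs)
    return n * math.log(theta) - theta * sum(xs)

random.seed(0)
true_theta = 2.0  # illustrative choice, not from the notes
xs = [random.expovariate(true_theta) for _ in range(5000)]

# Crude grid search over a bounded parameter space Theta = (0, 5]
grid = [k / 1000 for k in range(1, 5001)]
theta_mle = max(grid, key=lambda t: log_likelihood(t, xs))

# Closed-form MLE for the exponential model: 1 / sample mean
closed_form = len(xs) / sum(xs)
print(theta_mle, closed_form)
```

In practice one would use a numerical optimizer rather than a grid, but the grid makes the "maximize $L(\theta)$ over $\Theta$" step explicit.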

