MAT 316 Lecture Notes - Lecture 7: Normal Distribution, Central Limit Theorem, Unimodality
Document Summary
Situation: you are performing Bernoulli trials, the trials are all independent, and P(success) = p remains constant for each trial. Note: this is the exact same situation as for the geometric distribution. The only difference between the geometric and negative binomial distributions is the number of successes sought: the geometric distribution is looking for one success, the negative binomial for r successes. As was the case with the geometric, there are two different ways to define the variable. Version 1: X = the number of the trial on which the rth success occurs. Derive the pmf of the NB(r, p) distribution. Pmf: p(x) = C(x-1, r-1) p^r (1-p)^(x-r) for x in {r, r+1, ...}. Note: if X ~ Geom(p), then X ~ NB(1, p). Note: the geometric (version 1) distribution is a special case of the negative binomial. Any book that defines the geometric using version 1 will define the negative binomial using version 1. If the book uses the geometric (version 2) distribution, it will also use a different version of the negative binomial:
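A minimal sketch in Python of the version-1 pmf above, checking numerically that Geom(p) is the special case NB(1, p); the function names are illustrative, and only the standard library (math.comb) is assumed:

```python
from math import comb, isclose

def nb_pmf(x, r, p):
    """P(X = x) for X ~ NB(r, p), version 1: x = trial of the rth success."""
    if x < r:
        return 0.0  # the rth success cannot occur before trial r
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

def geom_pmf(x, p):
    """P(X = x) for X ~ Geom(p), version 1: x = trial of the first success."""
    return p * (1 - p)**(x - 1)

# Geom(p) matches NB(1, p): the two pmfs agree for every x >= 1.
for x in range(1, 20):
    assert isclose(nb_pmf(x, 1, 0.3), geom_pmf(x, 0.3))

# Sanity check: the NB(3, 0.4) pmf sums over x = 3, 4, ... to (nearly) 1.
total = sum(nb_pmf(x, 3, 0.4) for x in range(3, 200))
print(total)  # close to 1
```

Setting r = 1 collapses the binomial coefficient C(x-1, 0) to 1, which is exactly why the geometric pmf falls out as the special case.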