Question Description
Consider a probability model P(X, Y, Z, E), where Z is a single query variable and evidence E = e is given. A basic Monte Carlo algorithm generates N samples (ideally) from P(X, Y, Z | E = e) and estimates the query probability P(Z = z | E = e) from those samples. This gives an unbiased estimate, but the variance may be quite large. The basic idea of Rao-Blackwellization in this context is to generate N samples of, say, (X, Z) and, for each sample (x_j, z_j), to perform exact inference for P(Y | x_j, z_j, e). Explain how this yields an estimate for the query P(Z = z | E = e), and show that the variance of the estimate is no larger than that from the original non-Rao-Blackwellized procedure.
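The variance-reduction mechanism the question asks about can be sketched on a toy two-variable model (the model and its numbers are made up here purely for illustration; they are not part of the question). The plain estimator averages the indicator 1{Z = z} over joint samples; the Rao-Blackwellized estimator samples only the "hard" variable and replaces the indicator with its exact conditional expectation, which, by the law of total variance, can only shrink the per-sample variance:

```python
import random
import statistics

random.seed(0)

# Toy model (hypothetical numbers, for illustration only):
#   P(X=1) = 0.3
#   P(Z=1 | X=0) = 0.2,  P(Z=1 | X=1) = 0.8
# True P(Z=1) = 0.7*0.2 + 0.3*0.8 = 0.38
P_X1 = 0.3
P_Z1_GIVEN_X = {0: 0.2, 1: 0.8}

def sample_x():
    return 1 if random.random() < P_X1 else 0

def plain_mc(n):
    """Basic Monte Carlo: sample (X, Z) jointly, average the indicator 1{Z=1}."""
    total = 0
    for _ in range(n):
        x = sample_x()
        z = 1 if random.random() < P_Z1_GIVEN_X[x] else 0
        total += z
    return total / n

def rao_blackwell(n):
    """Rao-Blackwellized: sample only X; replace the indicator for Z
    with its exact conditional expectation P(Z=1 | X=x)."""
    total = 0.0
    for _ in range(n):
        total += P_Z1_GIVEN_X[sample_x()]
    return total / n

# Compare the spread of the two estimators over many replications.
reps, n = 500, 100
plain_ests = [plain_mc(n) for _ in range(reps)]
rb_ests = [rao_blackwell(n) for _ in range(reps)]
print("plain: mean %.3f var %.5f" % (statistics.mean(plain_ests),
                                     statistics.variance(plain_ests)))
print("RB:    mean %.3f var %.5f" % (statistics.mean(rb_ests),
                                     statistics.variance(rb_ests)))
```

Both estimators are unbiased for P(Z=1) = 0.38, but the Rao-Blackwellized one has noticeably smaller variance, since the only remaining randomness comes from sampling X. This is the same structure as in the question, with Y playing the role of the analytically marginalized variable.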